Methodological introspection: how not to talk past one another
When common sense knowledge is neither common nor sensible enough
Or: How five experts can be neither more nor different from their sum, just less
Science has little in the way of methodological introspection
Many basic concepts are rarely taught or discussed explicitly: e.g. model, representation, theory, simulation. Their meanings have large tacit components and belong to common sense. The cogs of the machinery behind scientific theories are more like crafts than like theories.
This is often perfectly fine. Common sense is highly economical as long as it is:
• Common – everybody does indeed mean roughly the same thing
• Sensible – the concept is (still) suitable for the application
When this does not hold, the inarticulate nature of common sense becomes a serious problem.
”Model” ≠ ”Model” ≠ ”Model”
Fields have been conceptually quite isolated for a long time:
• There is no canonical definition of ”model”.
• The field-local agreed meaning is largely unarticulated.
• The field-local meaning adapts over time to the needs of the field.
• Yet it remains anchored to some basic features of its epistemological role.
• We insidiously get just-slightly-different versions of concepts: they are still called the same thing and remain syntactically identical.
That is… we can disagree without even realizing that we disagree, and projects can fail for no apparent reason. The chance of this happening increases in cross-disciplinary work – which is of course inherent to complex systems science. The problem can become rampant when non-scientists are added to the mix, as in policy-related work.
So much for arguing the importance of engaging in analysis of methods and tools. To give a short example of what I mean, let us consider the concept of ”simulation”. Everybody knows what a simulation is…
But what is it? This depends on who you ask. First of all, simulation ”is” nothing specific: there is hardly an original and archetypal essence waiting to be discovered. Simulation is potentially all the things we call simulation, but we might be well served by taxonomizing and finding more specific labels…
For example:
• Natural scientists want to avoid strong assumptions and prefer abstract models, in simulation as elsewhere.
• Many social scientists see simulations as means for animating detailed expert knowledge.
To the social scientists, the physicists look like dilettantes, unaware of the details of the subject area and unwilling to go into them. To the physicists, the social scientists seem not to have grasped the basic tenets of sound theory development. Both are, of course, often both right and wrong in these judgements.
Let’s define one type of simulation: ontological simulation. We mimic the ontology of a system: a microlevel of entities, properties, interactions, etc. The model ontology is seen as a mimic of that of a real system. Thereby, we hope, the dynamics of the model system will also mimic those of the target system. Some categorizations follow…
Ontological simulation… …does not include numerical solutions to equations. …is akin to e.g. neural networks and genetic algorithms. It is yet distinct from these in important ways. …includes agent-based models. What are the possibilities and pitfalls of using such a method?
Direct ontological simulation
We simply assume that we know the microlevel and use the model to predict what the target system will do. This archetypal situation is, however, quite rare, as you probably all know. It is what we tend to think simulation ought to be, and what we think others expect it to be.
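To make the idea concrete, here is a minimal sketch of direct ontological simulation in Python. Everything in it is hypothetical and chosen for illustration only: a toy contagion model in which agents (the entities) carry one property (infected or not) and interact pairwise (the microlevel interactions). We assume the ontology and run it forward to read off a macrolevel prediction.

```python
import random

class Agent:
    """A microlevel entity with a single property: infected or not."""
    def __init__(self, infected=False):
        self.infected = infected

def step(agents, p_transmit, rng):
    """One round of random pairwise encounters (the microlevel dynamics)."""
    for _ in range(len(agents)):
        a, b = rng.sample(agents, 2)
        # A mixed pair may spread the infection with probability p_transmit.
        if a.infected != b.infected and rng.random() < p_transmit:
            a.infected = b.infected = True

def run(n_agents=100, n_steps=50, p_transmit=0.1, seed=0):
    """Direct simulation: assume the ontology, predict the macrolevel outcome."""
    rng = random.Random(seed)
    agents = [Agent(infected=(i == 0)) for i in range(n_agents)]
    history = []
    for _ in range(n_steps):
        step(agents, p_transmit, rng)
        history.append(sum(a.infected for a in agents))
    return history  # macrolevel time series read off the microlevel

history = run()
```

The point of the sketch is the division of labor: all assumptions live in the microlevel (entities, properties, interaction rule), and the macrolevel curve `history` is an output, not an input.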
Inverse ontological simulation
We use the outcome of the dynamics to find a good ontological model. Ontologies and parameters are viewed as educated hypotheses about how the target system works. The dynamical behavior is used, along with microlevel arguments, to judge whether we have succeeded and to guide modifications. This is done for a variety of reasons…
Getting a feel for a system
This is common in policy-related work: we do not trust the model ontology enough to use it for outright prediction. But, given reasonable parameters that can be tuned and the possibility of changing the ontology, we may use it to get a feel for what sorts of problems may arise. This may direct our attention to the need for further research, and so on. It can also integrate knowledge from different fields into the same global dynamical system: proper operation corroborates that we have succeeded in putting this knowledge into the same context.
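The inverse use can be sketched in the same toy setting. All names and numbers here are hypothetical: we treat the transmission parameter as an educated hypothesis and pick the candidate whose macrolevel outcome best matches an observed value, rather than predicting from an assumed microlevel.

```python
import random

def simulate(p_transmit, n_agents=100, n_steps=50, seed=0):
    """Toy microlevel model: agents with an infected/susceptible property,
    pairwise contagion. Returns the final number of infected agents."""
    rng = random.Random(seed)
    infected = [i == 0 for i in range(n_agents)]
    for _ in range(n_steps):
        for _ in range(n_agents):
            a, b = rng.sample(range(n_agents), 2)
            if infected[a] != infected[b] and rng.random() < p_transmit:
                infected[a] = infected[b] = True
    return sum(infected)

def fit_parameter(observed_final, candidates):
    """Inverse simulation: choose the parameter hypothesis whose
    macrolevel outcome lies closest to the observed data."""
    return min(candidates, key=lambda p: abs(simulate(p) - observed_final))

# Hypothetical observation: 60 of 100 individuals ended up infected.
best_p = fit_parameter(observed_final=60, candidates=[0.0, 0.05, 0.1, 0.2, 0.4])
```

Note that the fit by itself only corroborates; as the slide says, microlevel arguments are still needed to judge whether the selected ontology and parameters are a reasonable hypothesis rather than a coincidence.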
Epistemologically
We must concentrate on the general approach: we use a model ontology as a hypothesis about how the world works, and use its consequences to falsify or corroborate it. This is not new. It has been thoroughly analyzed by e.g. Tarski, Popper, Lakatos and Kuhn (the analytic content of theories, theory articulation). But what does seem to be new with this class of simulations is this:
We can deal quantitatively with qualitative hypotheses based on more or less verbal descriptions of entities.
• We have great freedom in varying these hypotheses as we see fit: we can home in smoothly.
• The model itself maps well onto the empirical case: we deal with properties, events, etc. in both cases.
• We have perfect access to states (compared with empirical experiments).
• We can model systems that are impossible to investigate fully empirically (for reasons of scale, ethics, cost and so on).
Conclusions: The unarticulated nature of many fundamental scientific concepts becomes problematic when they no longer work properly: they are hard to discuss explicitly. These concepts need to be investigated thoughtfully. Doing so can minimize misunderstandings and make collaborations more efficient. It can also, like any theory, be enlightening: it can make visible not only problems but also potentials that can be developed further. In general, it is usually a good idea to figure out in more detail what it is one is in fact doing – such as when simulating.