[Abstract] [1. The debate about reductionism] [2. About inter-level relations] [3. The seven-layer OSI model] [4. How can a Laplace’s Demon be upgraded to detect the existence of persons] [5. Layered ontology] [References]
Reductionism is hard to affirm if it implies that things are “nothing but” particles. It is also hard to deny without giving up the respectable physicalistic ontology. Is it possible to espouse a materialistic ontology, with strong inter-level relations and explanatory autonomy of biology and psychology?
In other words, in what sense could it be said that physics does not exhaust all that can be said about matter? The answer would be that physics does not account for structure, and that structure defines a hierarchy of levels. In order to discuss the nature of inter-level relations I will consider the case of a Laplace’s Demon trying to detect high-level entities. He could detect them if his abilities were upgraded with a set of conditions regarding relations between components of structures. The definition of structures introduces different layers of complexity which, according to Medawar’s approach, can be seen as an enrichment. Structures in layers can exhibit features such as multiple realizability, circular causality and strong inter-level relations with adjacent layers. Due to multiple realizability, the latter relation is not transitive. This hypothesis is illustrated with the case of telecommunications networks and the OSI layering schema. A layered schema is attempted. The discussion covers physics, chemistry, engineering, biology and psychology. The materialist thesis that there is not anything apart from physical particles in space does not entail the physicalist thesis that all that can be said about ensembles of particles is fully described by the formalism of physics.
It seems that reductionism is as hard to deny as to affirm.
It is hard to deny because nearly everybody wants to espouse the respectable physicalistic ontology. If everything is a compound of microphysical entities, the laws of physics should apply, and the possibility of reduction should be conceivable in spite of the difficulties of effectively carrying out the calculation. Denying the possibility of reduction would imply that, regardless of the new theories that may appear, physics will never be enough. It means that something different from the inventory of microphysical entities must be added if the living or the mental are to be understood. It should be noted that this inventory is not static or definitive. New entities can be admitted, as was the neutrino in 1930. New candidates for the most fundamental entities have been put forward, such as quarks or superstrings. These entities were always grounded in scientific hypotheses submitted to direct or indirect empirical tests.
What kind of entity could the antireductionist dualist propose? An entity which does not belong to the physical world and nevertheless interacts with it? The difficulties in conceiving and describing such an entity with positive properties encourage the materialist reductionist project. A materialist image of the world is found in the works of the presocratic philosophers, Democritus and Epicurus, and is reformulated in terms of mechanicism by Hobbes and La Mettrie. Modern chemistry, the theory of evolution of biological species, biochemistry, statistical quantum mechanics, and theories about the origin and evolution of the universe are major steps towards a unified scientific description of the world, covering the inorganic and the living. There remained the mental (perceiving, believing and desiring) to be understood in a materialistic way. Descartes’ res cogitans had to be converted into res extensa.
Classical behaviourism cannot be considered a possible solution to the problem, as it does not define itself with regard to the existence or reduction of mental states. It refused to work with subjective introspection, the theory being confined strictly to observable stimuli and responses. Ryle’s analytical behaviourism does not deal with mind or matter. It treats the mental as a wrong category used in sentences referring to the behaviour of humans, since these sentences could also be interpreted as ascribing dispositional attitudes without the need to infer a new category. So, the first attempt to give a materialistic account of the mental is the central-state theory. H. Feigl, D. Armstrong and J.J.C. Smart, among others, developed the thesis that the reference of sentences concerning mental events, processes and states was coextensive with the reference of sentences concerning neural events, processes and states (e.g. pain = C-fibers firing). The project would be successful if a correspondence list between mental events and neural events could be completed. Here, two major kinds of difficulties are to be faced. First, even if such a correspondence list could be achieved, the introspective, first-person report about one’s mental states is not equivalent to the external, third-person report about neural states. This leads to objections that can be developed through logical semantics (see Kripke 1972) or through the irreducibility of subjective perception (see Nagel 1974, and Jackson 1986). Some (Feyerabend, Rorty) claimed that the problem disappears if the discourse about the mental is considered a wrong, obsolete theory, leaving the discourse about the neural as the only legitimate one. The second major difficulty arises from the implausibility of a one-to-one mapping between mental types and neural types. As Putnam (1975) stated, it is easy to suppose that neural configurations corresponding to pain differ from one species to another.
The so-called multiple realizability argument points out that, although mental events are realized through neural events, what is relevant about a neural state is not the concrete neurophysiological configuration instantiated, but its relationship with prior and subsequent states, very much in the same way that computing processes are analysed in terms of what is to be done with a certain input in order to produce a certain output. Thus, a mental state is characterised by its functional role within a series of events and processes. Functionalism has been the dominant view in Philosophy of Mind since the 1970s.
If reductionism is hard to deny, it is also hard to affirm. The two difficulties pointed out with regard to Central-State Identity theory are still a matter of controversy from a functionalist point of view. It is not clear how the functional analysis of mental states can give an account of consciousness and the phenomenology of qualia. I think that Searle (1992), although he espouses a physicalistic ontology, is quite right when he says that we cannot claim to have solved the mind-body problem and to have naturalised the mental just by making subjectivity disappear. Nor is it clear, given that one-to-one mapping is not plausible, what kind of conditions should be required in order to ensure that low-level facts (neural events and processes) realize high-level facts (mental events described in terms of functional roles). But even if we leave aside these problems, there is something unsatisfactory in the reductionist approach. Let us forget about consciousness and qualia, and let us suppose that we can map high-level items into microphysical ones. What would be the status of high-level items? Are they nothing but fortuitous aggregates of microphysical fundamental entities? Are living beings, persons, nothing but mere spatial rearrangements of microphysical fundamental entities, like drops forming clouds in a variety of shapes? (1) Popper defines himself against the point of view that “there is nothing new under the sun” by affirming that evolution is creative, and brings forth real novelties (Popper 1977, 7).
Is it possible to espouse a materialistic ontology without reducing high-level entities and properties to a cluster of particles ruled by the laws of microphysics? Non-reductive materialism is appealing, and several approaches have been tried. Davidson’s anomalous monism (1970) insisted on all events being physical, and proposed supervenience, meaning that “there cannot be two events alike in all physical respects but differing in some mental respect” (Davidson 1970, p. 214), but denied the existence of psychophysical laws. Without psychophysical laws, there was no theory to be reduced to physics. The concept of supervenience was developed by Horgan and others. The condition that “there could be no difference of any sort without difference in physical respects” can be formulated locally, or globally (in terms of possible worlds). The non-reductive materialist option could be achieved if supervenience were formulated without a strict mapping from the mental to the physical, the kind of mapping, for instance, that statistical mechanics achieves with thermodynamics. This would preserve the autonomy of the mental. Mental events would supervene on physical ones, but exert their own causal efficacy. That means that, although the mental and the neurological explanatory frameworks are compatible, psychophysical laws would not reduce to neurological ones (Horgan 1994). The problem with non-reductive materialism is the causal closure of physics. As Kim argues: “Suppose that a certain event, in virtue of its mental property, causes a physical event. The causal closure of the physical domain says that this physical event must also have a physical cause. We may assume that this physical cause, in virtue of its physical property, causes the physical event. The following question arises: What is the relationship between these two causes, one mental and the other physical?” (Kim 1989, p. 254) They should be the same, described at different levels.
But this leads to bridge laws “enabling a derivational reduction of psychology to physical theory”. According to Kim, causal relations involving macro-events are “supervenient causal relations,” causal relations that are supervenient on micro-causal processes. So, Davidson’s anomalous monism is dismissed for the absence of a causal role for mental properties. The supervenience approach lacks a satisfactory explanation of the relation between the mental and the physical: “if a relation is weak enough to be non-reductive, it tends to be too weak to serve as a dependence relation; conversely, when a relation is strong enough to give us dependence, it tends to be too strong, strong enough to imply reducibility.” (Kim 1989, p. 251)
It seems that the debate should be focused on the nature of inter-level relations. If there is multiple realization with no one-to-one mapping, does this mean that there are no requirements that should be fulfilled by all possible realizations? Does the mental supervene on the physical in a totally arbitrary manner? If, on the contrary, such requirements do exist, can they be considered bridge laws connecting types? Do these requirements allow us to speak of the reduction of the mental to the physical?
According to Nagel (1961, Chap. 11, p. 352ss) there are two formal conditions for reducing one theory to another (e.g. macroscopic thermodynamics to kinetic gas theory). Let us call “secondary” the special science to be reduced or explained in terms of the more fundamental one, the “primary”. Reduction is achieved when the laws of the secondary science are shown to be logical consequences of the theoretical assumptions of the primary science. As the secondary science will contain some terms ‘A’ that do not appear within the primary science, there is a condition of connectability:
(1) “Assumptions of some kind must be introduced which postulate suitable relations between whatever is signified by ‘A’ and traits represented by theoretical terms already present in the primary science.”
These relations can be logical connections between established meanings of expressions, conventions created deliberately as coordinating definitions, or physical hypotheses asserting that the occurrence of the state of affairs signified by a certain theoretical expression ‘B’ in the primary science is a sufficient (or necessary and sufficient) condition for the state of affairs designated by ‘A’. In the case of the thermodynamics of gases, such an assumption would be T = 2E/3k, where E is the mean kinetic energy of the molecules and k the Boltzmann constant.
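This coordinating definition can be made concrete in a small numerical sketch (my own illustration, not from the text): the thermodynamic term “temperature” is linked to the kinetic-theory term “mean kinetic energy per molecule” by T = 2E/3k, and the mapping runs in both directions.

```python
# Sketch of Nagel's coordinating definition T = 2E / (3k) for an ideal gas.
# The function names are mine, chosen for illustration.

K_BOLTZMANN = 1.380649e-23  # J/K (exact value in the 2019 SI)

def temperature_from_mean_kinetic_energy(mean_ke_joules: float) -> float:
    """Thermodynamic temperature given mean kinetic energy per molecule."""
    return 2.0 * mean_ke_joules / (3.0 * K_BOLTZMANN)

def mean_kinetic_energy_from_temperature(temp_kelvin: float) -> float:
    """Inverse direction of the bridge law: E = (3/2) k T."""
    return 1.5 * K_BOLTZMANN * temp_kelvin

# Round trip: the two directions of the coordinating definition agree.
e = mean_kinetic_energy_from_temperature(300.0)
assert abs(temperature_from_mean_kinetic_energy(e) - 300.0) < 1e-9
```

The point of the sketch is only that the connection is a definite, checkable relation between terms of the two sciences, not a vague association.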
This condition alone does not suffice to obtain the laws of the secondary science, hence the condition of derivability:
(2) “With the help of these additional assumptions, all the laws of the secondary science, including those containing the term ‘A’, must be logically derivable from the theoretical premises and their associated coordinating definitions of the primary discipline.”
In the case of weak supervenience, reduction is not feasible because we lack the “suitable relations” between particular types, as required by the condition of connectability. The mental, or other high level events, supervene globally on the physical. Considerations about causal efficacy lead Kim to postulate strong supervenience between types (first condition), with supervenient causal relations which would ensure the second condition.
Nagel insists that the question of whether one science is reducible to another cannot be settled “by inspecting the properties or alleged natures of things, but must be ascertained by investigating the logical consequences of certain explicitly formulated theories (that is, systems of statements). For the conception ignores the crucial point that the natures of things, and in particular of elementary constituents of things, are not accessible to direct inspection”. Now, I should apologize or justify myself for having done just that in the preceding section. But it happens that we are not engaged in examining whether a given, finished, complete theory is reducible to another finished, complete theory; complete psychological or biological theories are not available. Rather, we are laying a heuristic bet on how knowledge could be increased. We have a materialistic bet for the reduction project, and we also have a bet against epiphenomenalism. The latter seems not to be compatible with the former.
The emergence of levels, with new properties not reducible to the properties of their components, is often raised as an argument against reduction. The fact that the whole is more than the sum of its parts is another frequent claim. With regard to emergence, Nagel distinguishes between two issues, namely (3) and (4):
(3) Non-deducible properties. “A thesis about the hierarchical organization of things and processes, and the consequent occurrence of properties at ‘higher’ levels of organization which are not predictable from properties found at ‘lower’ levels.” An example that has been suggested is that the properties of water cannot be drawn from the properties of oxygen and hydrogen. Again, Nagel reminds us that if the property ‘translucent’ does not appear in the theory that describes oxygen and hydrogen, it is impossible to deduce it. The connectability condition fails.
(4) The emergence of novelty. A conception of an “evolutionary cosmology, according to which the simpler properties and forms of organization already in existence make contributions to the ‘creative advance’ of nature by giving birth to more complex and ‘irreducibly novel’ traits and structures … emergent evolution is the thesis that the present variety of things in the universe is the outcome of a progressive development from a primitive stage of the cosmos containing only undifferentiated and isolated elements (such as electrons, protons, and the like), and that the future will continue to bring forth unpredictable novelties.” This would be a matter of empirical historical inquiry. On the other hand, there are no clear criteria for determining whether a property is new. “It might of course be said that such novel types of dependence are not really novel but are only the realizations of potentialities that have always been present in the natures of things; and it might also be said that, with sufficient knowledge of these natures, anyone having the requisite mathematical skills could predict the novelties in advance of their realization.”
Turning to the second objection against reductionism, Nagel considers “organic wholes”. A whole can be said to be “more than the sum of its parts” depending on the order or structure that is to be imposed on these parts. The whole assembled in a particular order and keeping certain relations between parts, is not the whole obtained from any unordered heap. For instance, if we think of a clock, “although the mass of a body is equal to the sum of the masses of its spatial parts, a body also has properties which are not the sums of properties possessed by its parts” (id. p. 389). But this should not imply that the properties of the whole are not analyzable in terms of its elements. “It is therefore plausible to construe the assertion as maintaining that, from the theory of mechanics, coupled with suitable information about the actual arrangements of the parts of the machine, it is possible to deduce statements about the consequent properties and behaviours of the entire system.”
It is to be noted that the key notion, when dealing with issues such as supervenience, connectability condition, emergence, or whole, is precisely the notion of arrangement, structure, organization. When there is no special structure, and the contribution of each part can be separated from the others, it could be said that the whole can be studied by “additive” analysis. This would be the case of the solar system. When the relations between parts modify their individual contributions, the analysis would be “non-additive”. Nagel concludes that “the mere fact that a system is a structure of dynamically interrelated parts does not suffice, by itself, to prove that the laws of such system cannot be reduced to some theory developed initially for certain assumed constituents of the system” (id. p. 397).
It seems that the “suitable relations” between the terms of secondary sciences and physics (and here we assume that physics is the primary science) could be formulated when the ensembles are additive, or uniform, with no particular structure. In cases where interdependence appears (cases described as “emergent” or “organic wholes”) we have the same basic elements (microphysical items, if we keep to a materialistic ontology) plus structure. Is it possible to account for structure from the point of view of the primary science?
Certainly not, if the primary science is understood as an n-body problem. Structure does not appear in its vocabulary. Nagel writes that “Laplace was thus demonstrably in error when he believed that a Divine Intelligence could foretell the future in every detail, given the instantaneous positions and momenta of all material particles as well as the magnitudes and directions of the forces acting between them. At any rate, Laplace was in error if his Divine Intelligence is assumed to draw inferences in accordance with the canons of logic, and is therefore assumed to be incapable of the blunder of asserting a statement as a conclusion of an inference when the statement contains terms not occurring in the premises.” (id. p. 366) Perhaps he (Laplace’s Demon) could do it if he had the “suitable information about the actual arrangements of the parts of the machine”.
In the next section I shall examine what kind of information should be furnished to Laplace’s Demon and try to find out in what sense there could be “something new under the sun”.
2.1 What is above microphysics?
Horgan (1984) posed the question “what information does the demon need, over and above the specification of a P-world’s total microphysical history, in order to complete the task of cosmic hermeneutics for that world?” Here “cosmic hermeneutics” means deriving high-level predicates, sentences in ordinary English. A second question was “What is the metaphysical status of this supplementary information?” Using the Laplace’s Demon paradigm means that a complete microphysical description and unlimited calculating power are available. There, Horgan rejected bridge laws and ontic identities. He proposed “meaning constraints.” As I understand it, this implies a mapping from high-level vocabulary into microphysical states. Three cases were mentioned: the mapping from the higher-level sciences into microphysics, the mapping from an intentional creature’s items (desires, beliefs, language), and the mapping from ordinary language’s platitudes. Horgan remarked that “non-reductive materialism is enormously attractive, given its promise of constituting a general world-view consonant with the knowledge of humans and their environment that is provided by contemporary science. A plausible explication of non-reductive materialism is the supervenience thesis, as supplemented by the claim that the only interpretative constraints at work in cosmic hermeneutics are meaning-constraints.” The problem with this approach, I think, is that the meaning-constraint assignment could be entirely arbitrary. There is a microphysical world, a set of meaning constraints is added, and so high-level predicates are obtained.
Later, Horgan (1993) poses similar questions:
The Standpoint Question: What sorts of facts, over and above physical facts and physical laws, could combine with physical facts and laws to yield materialistically kosher explanations of inter-level supervenience relations, and why would it be kosher to cite such facts in these explanations?
The Target question: What facts specifically need explaining in order to explain a given inter-level supervenience relation, and why would a materialistic explanation of these facts constitute an explanation of that supervenience relation?
The Resource question: Do there exist adequate explanatory resources to provide such explanations?
These questions are no longer about “meaning constraints”, and instead focus on “facts”. I would like to modify the first approach and ask:
(5) “How can a Laplace’s Demon detect the occurrence of high-level entities?”
As a working hypothesis I assume that the universe is an ensemble of particles in space. With this assumption I locate the debate under the strongest possible conditions. I also assume that the Laplace’s Demon (LD) can solve a general n-body problem (we leave quantum uncertainty aside): given the positions, momenta and charges of a set of particles, their interactions define a field in space which determines their evolution. LD can produce a basic report listing positions and momenta of particles for any time t. We are going to inquire about structures and inter-level relations in terms of LD reports which must be worked out from this basic report.
A particular case of our question (5) could be the following: given a particular electron, how can LD ascertain whether this electron is within an atom, a molecule, a crystal lattice or a living cell? How can he know whether the change this electron undergoes over time is a jump between orbitals within an atom, a free movement within a metal, or a rearrangement of orbitals taking part in chemical bonds within the process of protein synthesis through RNA transcription? How can LD come to know whether this electron is within a potassium ion K+ in a water solution, or within a potassium ion K+ taking part in the activation of a neurone which transmits a signal to a muscle which executes the voluntary movement of cracking one’s fingers?
Somehow we are asking LD to group particles into “relevant wholes” instead of producing a report listing all individual particles. So, we are asking LD to write supplementary reports which convey relations between particles or properties of ensembles of particles. To produce these reports about “wholes” we should feed LD with a set of conditions that should be fulfilled. That is, we upgrade LD’s mission so that he can pay attention to mutual relations between particles and inquire whether they remain constant over a certain time. Once wholes are identified, queries about relations between wholes can be posed. Changes occurring to systems without disintegrating them can be considered, and thus processes can be identified. In some cases formulating such conditions would be quite straightforward, whereas in others we have no idea of what those conditions could be. For instance, if a proton is surrounded by other protons within a certain radius, for a certain period of time, LD could infer that this particular proton is part of a nucleus. Similar conditions would hold for molecules, crystal lattices, and so on. Let us suppose that such conditions can be formulated for all high-level items:
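The proton example can be turned into a toy computational sketch (my own construction, not part of the text): given the demon's basic report as sampled trajectories, group into one "whole" every set of particles whose mutual distances stay below a chosen radius at every sampled instant. All names and the two-dimensional setting are illustrative assumptions.

```python
# Toy version of an upgraded Laplace's Demon: detect "wholes" as maximal
# groups of particles whose pairwise distances remain bounded over time.

from itertools import combinations

def detect_wholes(trajectories, radius):
    """trajectories: {particle_id: [(x, y), ...]} sampled at the same instants.
    Returns a list of sets, the connected components of the 'stays close'
    relation, i.e. the demon's report of compound objects."""
    ids = list(trajectories)
    # A pair counts as bonded only if it stays within `radius` at ALL instants.
    bonded = {
        (a, b)
        for a, b in combinations(ids, 2)
        if all(
            ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= radius
            for (xa, ya), (xb, yb) in zip(trajectories[a], trajectories[b])
        )
    }
    # Merge bonded pairs into maximal wholes (union-find with path halving).
    parent = {i: i for i in ids}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in bonded:
        parent[find(a)] = find(b)
    groups = {}
    for i in ids:
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

# Two particles circling each other plus one drifting away: the demon
# should report one compound object and one free particle.
traj = {
    "p1": [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)],
    "p2": [(0.5, 0.0), (0.4, 0.1), (0.5, 0.1)],
    "p3": [(5.0, 0.0), (8.0, 0.0), (12.0, 0.0)],
}
wholes = detect_wholes(traj, radius=1.0)
```

The condition "remain within a radius over a period" is of course far too crude for real nuclei or molecules; the sketch only illustrates that such conditions are statable over the basic report, which is all that hypothesis (6) below requires.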
(6) Primary compound objects can be detected by a Laplace’s Demon using a set of conditions referring to relations between the fundamental entities of microphysics that remain invariant during a certain time, or conditions referring to processes of those entities. New compound objects can be defined using primary compound objects and processes. All objects and processes are described by microphysics or could be related to it through primary, secondary, etc. sets of conditions.
These conditions would ensure the condition of connectability (1), and would play the role of bridge laws accounting for the supervenient causal relations proposed by Kim. They account for structure, and allow LD to look for something he was not designed to grasp. Of course this is, at best, a working hypothesis. Crane and Mellor (1990) asked why a physicalistic ontology should exclude any other possibility, given that: 1) there is no evidence that everything is reducible in principle to microphysics, and 2) causality (and laws) can be formulated in the psychological domain. So, whence the ontological primacy of physics? Again my answer would be that it is a matter of a heuristic bet. And the inquiry about relations between levels seems more promising than postulating a separate ontology. Hypothesis (6) answers Horgan’s standpoint question about inter-level explanations in terms of relations between compounds. The two issues raised by Nagel with regard to emergence, (3) and (4), could be formulated here as: (3′) What “glues” the compounds together in order to keep some relations constant over time? (This is what Agazzi (1988, p. viii) called static causality, an “expression of the patterns, of the links which bond the elements or constituents of things together”.) And (4′) How is the origin of these complex compounds to be explained (causality in the dynamic sense, according to Agazzi)? With regard to the first issue, we find, in the case of simple physical systems, that the condition for stability is that of being in a state of minimal energy, whereas in living systems, structure is sustained by a dissipation of energy. Every special science deals with relations between parts. Cosmology and the theory of the evolution of species deal with the second issue. Our heuristic bet is on these two kinds of inquiries. What I want to discuss now is how this materialistic approach can avoid the “nothing-butness” problem we pointed out before.
2.2 A hierarchy of levels
Are the discourses of special sciences autonomous? Or are they a kind of abstract or approximation to the real account of things, which would be physics, an approximation due to our epistemic limitations? I believe that those supplementary reports about wholes that we can ask the LD to produce, are worthwhile regardless of the epistemic capabilities of the observer (Cots, 1997). Those reports inform about what remains invariant over the movements of particles, very much in the same manner that metric geometry informs about the properties that remain invariant under translations, rotations or inversions.
Medawar (1974) used this approach to draw a very interesting analogy between the hierarchy of geometries as conceived by Felix Klein and the various tiers of natural sciences. In the Erlanger Programme of 1872, Klein, when comparing the results of projective, affine and other geometries, introduced a concept to explain the production of different geometries based on transformations. Given an ensemble of transformations, which may be translations, symmetries, rotations or projections, one inquires which properties (distance, angles, surface area, parallelism, proportions, tangency, adjacency) remain invariant under these transformations. These define which objects can be considered as equivalent. For instance, in metrical geometry, we can consider as equivalent any two objects that can be superposed one over the other. In affine geometry, where transformations enlarge or contract figures, all ellipses can be seen as equivalent. In projective geometry any conic section can be obtained from a circumference by applying the appropriate transformation. (If Klein had come to know fractal geometry, there is no doubt that he could have reinforced his argument. Fractal transformations provide an extraordinary tool to classify very complex forms in an astonishingly simple manner.) The most general approach is that of topological transformations, where only adjacency between points is conserved. Topology allows us to classify surfaces in space according to insideness or outsideness. It does not make sense to speak of circularity or rectangularity. Starting from topology, we can obtain projective geometry by restricting the very general set of transformations considered to a set which leaves invariant more properties than adjacency alone (linearity, the anharmonic ratio, etc.). The same can be said of affine geometry with regard to projective geometry, and so on. Lowering the generality means that new concepts are introduced in geometry.
“This progressive enrichment occurs not in spite of the fact that we are progressively restricting the range of transformations, but precisely because we are doing so.” (Medawar 1974, p. 61).
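Klein's point can be illustrated with a minimal sketch (my construction, not Medawar's): a rotation, belonging to the metric group, preserves distance, while a shear, belonging only to the wider affine group, does not, even though both maps here preserve areas. Restricting the group of transformations from affine to metric is exactly what makes "distance" a meaningful concept.

```python
# Invariants under restricted transformation groups, after Klein's
# Erlanger Programme. The 2D setting and numbers are illustrative.

import math

def apply(matrix, point):
    """Apply a 2x2 linear map to a point in the plane."""
    (a, b), (c, d) = matrix
    x, y = point
    return (a * x + b * y, c * x + d * y)

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

theta = math.pi / 5
rotation = ((math.cos(theta), -math.sin(theta)),
            (math.sin(theta),  math.cos(theta)))   # metric transformation
shear = ((1.0, 1.5), (0.0, 1.0))                   # affine, not metric

p, q = (1.0, 0.0), (0.0, 2.0)
d0 = distance(p, q)
d_rot = distance(apply(rotation, p), apply(rotation, q))
d_shear = distance(apply(shear, p), apply(shear, q))
# d0 == d_rot: distance survives the rotation.
# d0 != d_shear: distance is not an affine invariant, although both
# matrices have determinant 1 and therefore preserve areas.
```

The enrichment Medawar describes appears here concretely: the affine "demon" can speak of areas and parallelism but not of distance; only after restricting to the metric group does distance become one of its concepts.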
All properties that appear in basic levels are inherited by the ones that follow. Similar remarks can be applied to the list of empirical sciences: physics, chemistry, biology, ecology/sociology. “As we go down the line, the sciences become richer and richer in their empirical content and new concepts emerge at each level which simply do not appear in the preceding science. Furthermore, it seems to be arguable that each science is a special case of the one that precedes it. Only a limited class of all the possible interactions between molecules constitutes the subject matter of biology…” (id. p. 62). So, although chemistry is made out of physics and biology out of chemistry, “biology is not ‘just’ physics and chemistry, but a very limited, very special and profoundly interesting part of them.” Medawar concludes with the following remarks: “The entire notion of reducibility arouses a great deal of resentment among people who feel that their proprietary rights in a given science are being usurped, but if the parallel as outlined above were to be accepted, I think it would purge the idea of ‘reducibility’ of the connotation of diminishment or depreciation.” And in the sense of what I have called a materialistic heuristic bet: “experience shows that analytic reduction provides us with an immensely powerful methodological weapon which enables us not merely to interpret the world but also, if need be, to change it.”
The consequence of this argument is that the commitment to a physicalistic ontology does not imply the primacy of physics as an explanatory resource. Special sciences, like the geometries, are not imperfect approximations to which we must resign ourselves for lack of the calculating capability to solve the problem within physics. Owens (1989) makes similar remarks: “if we think about physics, chemistry, biology, psychology as different explanatory levels, it’s true that physics lies at the base of the building. But it is clear also, that as we climb the building, we find more explanatory features which could not be discerned by someone who confined himself to exploring the ground floor.” This is explained by Medawar’s analogy. Those “explanatory features” cannot be discerned because on the ground floor no condition is defined to account for them. My LD, designed to solve the n-body problem, cannot detect high-level properties unless he is upgraded with a set of conditions (6). To follow Medawar’s analogy, we could say that a “Topological Demon”, capable of accounting for all topological properties, would be ignorant of projective geometry unless he were upgraded with a set of definitions referring to the properties which remain invariant in projective geometry.
Perhaps this point of view could be named “nice reductionism”, meaning that high-level types can be analysed through their components, while the corresponding theories do not lose their explanatory autonomy.
2.3 Neutrality of realization, circular causality, novelty
But if an object A is defined as a compound of objects B with some structure AB, which again are defined as compounds of objects C with some structure BC, and so on, we could assume that these relations are transitive. So, why not refer everything to the most fundamental level and forget about the rest? The physicalist could say “OK, I agree that structures and relations cannot be explained from the point of view of the n-body problem. But why must you introduce this number of different levels and hierarchies? Why not limit ourselves to a general science of structures and relations between microphysical entities, a kind of extended chemistry? After all, atoms and molecules are the kind of wholes defined by relations invariant in time.” I believe that the reason for introducing several levels with explanatory autonomy is what Horgan (1994) calls “neutrality of realization” in connection with what Varela (1991) calls “circular causality”.
High-level types (e.g. pain) can be realized by multiple low-level types (neural configurations). Multiple realization has been taken as an argument against reduction: “I deny that mentalistic psychology must be reducible to neurobiology; and in fact I very much doubt whether the former is in fact reducible to the latter. Reductionism will turn out to be false if it is physically possible for intentional mental state-types to be physically realized in a variety of different ways; and it seems very likely that multiple physical realization is indeed physically possible, at least as between different physically possible species of cognizers (for instance, Martians vs. humans).” (Horgan 1994, p. 240). But if multiple realization is only a disjunction between two levels, as in: H (a high-level type) can be l1, l2, l3, … ln (low-level types), then there is no point in objecting to reduction: “If psychological states are multiply realized, that only means that we shall have multiple local reductions of psychology. The multiple realization argument, if it works, shows that a global reduction is not in the offing; however, local reductions are reduction enough, by any reasonable scientific standards and in their philosophical implications” (Kim, 1989, p. 250). On the other hand, it must be noted that the kinetic theory of gases uses precisely the feature that each macroscopic configuration (as described by pressure, temperature, etc.) is compatible with many microscopic configurations.
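The kinetic-theory remark can be made concrete with a toy count. The sketch below is purely illustrative and not part of the original argument: it fixes a “macrostate” as the total energy of four particles with invented discrete energy levels, and counts the “microstates” that realize it.

```python
from itertools import product

# Toy illustration of one macrostate being realized by many microstates:
# four particles, each with energy 0, 1 or 2 (arbitrary invented units).
# The "macroscopic" description keeps only the total energy.
microstates = list(product([0, 1, 2], repeat=4))   # 81 configurations

# Group microstates by the macrostate (total energy) they realize.
realizers = {}
for m in microstates:
    realizers.setdefault(sum(m), []).append(m)

# The single macrostate "total energy = 4" is multiply realized
# by many distinct microscopic configurations.
print(len(realizers[4]))   # → 19
```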
In disjunctive cases, a condition can always be found that must be fulfilled by all the possible multiple realizations, or local reductions can be tried. It does not seem plausible to say that they are random, as is sometimes suggested: “Moreover, for all we now know (and I emphasize that we really do not now know), intentional mental states might turn out to be radically multiply realizable, at the neurobiological level of description, even in humans; even in individual humans; indeed, even in an individual human given the structure of his nervous system at a single moment of his life.” (Horgan 1994, p. 240). I believe that Horgan is right, but I would not say that high-level events (here, mental events) are opaque with regard to analysis. My hypothesis is that analysis is feasible between adjacent layers. For instance, it should be feasible between mental events and certain general properties of neural networks. I shall try to explain this later, and I will illustrate it with the case of telecommunications networks. What I mean to say is that the explanatory autonomy of psychology cannot be defended by saying (i) the mental supervenes on the physical and (ii) it does not matter how it is realized. This would be an ad hoc argument to avoid supervenient causal relations.
The possibility of analysis can be seen as compatible with the feature of neutrality of realization, in the sense introduced by Horgan: “Higher-level theoretical concepts typically are, as I shall put it, strongly realization-neutral. By this I mean that they are neutral both (1) about how they are realized at lower theoretical levels (and ultimately at the level of physics), and (2) about whether or not they are uniquely realized at lower levels (and ultimately at the physics level).” (Horgan 1994, p. 241). I would like to put forward an analogy of neutrality of realization. Let us think of a scale model of a crane made out of a building set such as Meccano. The inventory of Meccano’s pieces plays the role of microphysics. But we could have a functionally identical crane, with the capability of lifting similar loads and performing the same movements, now made out of the pieces belonging to another brand (e.g. Lego). The pieces are different, with completely different ways to attach them. This would mean that we have the same macroscopic world built out of a different microphysical fundamental ontology. Whence the conclusion could be drawn that the microphysical ontology is not as fundamental as it seems (Cots, 1997a). The high-level properties that define the crane as an object are not established by the basic rules of assembling the parts. But this does not permit us to think that a crane can be realized by any low-level configuration. There is a set of conditions which fixes the requirements for an object to be a crane: an arm with rotating capability, a grasping device, etc. Again, the arm of the crane is defined as a rigid solid of a certain size, in terms of the elementary parts of the building set (for one brand or another). Following Medawar’s remarks, we could say that the inventory of parts of the building set, and its rules of assembly, are the fundamental theoretical level from which new levels can be defined.
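The crane analogy can be sketched in code. The sketch below is purely illustrative: the class names, the parts inventories and the `lift`/`is_crane` functions are all invented, but they show how a single set of detection conditions can be satisfied by realizations that share no low-level parts.

```python
# A minimal sketch of "neutrality of realization": the high-level type
# "crane" is fixed by a set of conditions (an interface), while each brand
# realizes it with entirely different low-level parts.

class MeccanoCrane:
    """Realization 1: metal strips joined with nuts and bolts."""
    parts = ["metal strip", "bolt", "nut", "pulley"]

    def lift(self, load_kg):
        return load_kg <= 5          # a functional capability, not a part

class LegoCrane:
    """Realization 2: plastic bricks joined by studs."""
    parts = ["brick", "stud plate", "axle", "winch"]

    def lift(self, load_kg):
        return load_kg <= 5          # the same high-level behaviour

def is_crane(obj):
    """The detection condition an upgraded Laplace's Demon would use:
    it mentions only high-level capabilities, never the inventory of parts."""
    return callable(getattr(obj, "lift", None))

# Both realizations satisfy the same condition, although their
# low-level ontologies have no part in common.
assert is_crane(MeccanoCrane()) and is_crane(LegoCrane())
assert set(MeccanoCrane.parts).isdisjoint(LegoCrane.parts)
```

The design choice mirrors the text: `is_crane` plays the role of the set of conditions (6), and the disjoint parts lists play the role of the two microphysical ontologies.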
The next concept to be considered is that of circular causality. There are compound objects which behave just as elementary parts on another scale. This would be the case of rigid spherical solids. When the distance between bodies is large enough to dismiss the size of the bodies, the problem can be handled as if they were point-particles. Each body is described by its mass and charge, which is no more than the sum of the masses and charges of its parts. There is no structure to account for. But when compound objects are not homogeneous, new behaviours can appear. A simple machine such as a clock could be an example. The most interesting cases are those where the structure and processes of an object are dependent on another one and vice versa. Varela introduces the concept of autopoiesis to describe this kind of causality in living beings. It seems that the reduction of life to chemistry is achieved, “mainly through the discovery of the genetic code and the notion of a cellular programming which is supposed to stand at the base of all development as it (literally) writes the organism as it unfolds in its ontogeny. However, after an initial phase of enchantment with the idea, it has become clear – and the molecular biologists were the first to point this out – that if one takes the notion of a genetic program literally one falls into a strange loop: one has a program that needs its own product in order to be executed. In fact, every step of DNA maintenance and transcription is mediated by proteins, which are precisely what is encoded. To carry on the program it must already have been executed!” (Varela 1991, p. 4). Another case of circular causality is the mutual dependence between (i) the metabolic processes of the cell, which are made possible by the membrane that individuates the cell and establishes a specific interchange with the surroundings, and (ii) the membrane, which is produced by those metabolic processes.
Now, if we want to maintain Kim’s supervenient causal relations, i.e. that macro-events are causally efficacious through micro-events, we have the following: No particular causal efficacy is found between the codons, nucleotide triplets that constitute DNA, and the amino acids encoded by them. We can analyze in terms of components the structural properties of DNA or catalytic proteins, but we cannot trace their causal efficacy to their parts without their high-level structure. If we want to understand what happens to a particular amino acid that is being grasped by tRNA in order to be assembled in the sequence specified by a mRNA segment, we need the whole process. Perhaps this is what made Popper speak of “backwards causation”, which would be the opposite of strict supervenient causation. In The Self and its Brain Popper argues against the point of view that higher levels cannot influence lower ones, quoting the case of proton diffraction by a crystal lattice which would behave as a whole, or the case of a society and its influence over individuals (cf. op. cit. 7).
In the n-body problem, the effects of each part are simply additive. If we want to know what will happen to a charge located at a particular point of space, we only need to sum the interactions at that point to obtain the field, forgetting about all the rest. But once we have begun to look for what remains and what changes, that is, once we have begun to look for invariants and structures, then we can pose the question of what causes structures to change or evolve. And these have to be formulated in terms of structures at the same level. Structures interact with structures. Their properties are more relevant than their realization at lower levels. For instance, if we want to explain why a screw attaches to a nut, we refer to the form of both. The actual realization by a particular chunk of matter is, to a certain extent, irrelevant, as long as it has certain compressibility properties. So, neutrality of realization and circular causality account for a certain, but not absolute, degree of autonomy for higher layers with respect to lower ones.
Let us summarize. We have:
i) a heuristical bet for a materialistic ontology.
ii) an extra curiosity about which relations between particles remain or change. That means reports about microphysics as produced by our LD are not enough. The LD must be upgraded with a set of conditions (6) to account for compound wholes. (We are reductionists.)
iii) Medawar’s analogy: new objects and processes define properties that cannot be predicated of their parts. New and more complex properties form a hierarchy of layers. (We are nice reductionists.)
Now, while (ii) aims to guarantee that everything is analyzable in terms of its components, the possibility of multiple realization, together with circular causality in several layers, makes it impossible to jump directly (in the sense of supervenient causal relations) from layer n to the fundamental layer. The reason is that at level n, we cannot know, for some lower level n-j on which it supervenes, which disjunct of the multiple realization happens to be instantiated. And so we cannot know, either, which process of circular causality can be instantiated. To use the traditional examples, if we think of pain, as a very general type, we know that it must be instantiated in certain organisms. Multiple realization makes it possible for these organisms to be terrestrial or Martian. Pain as a property can be defined in a general way for both (in terms of functional properties of behaviour, subjective qualia, etc.). But different disjuncts will instantiate different biologies, with different circular causality mechanisms. At lower levels this leads to different conditions defining what the basic building blocks of life are in each case.
So when detailed inter-level relations are omitted and the general reduction (from layer n into physics) is attempted, the bifurcations of disjunctions may be so numerous and confusing that all we have is global supervenience. Inter-level relations between adjacent layers can be more or less invariant. Or at least, there are some common properties among all the alternatives that take part in multiple realization disjunctions. This invariance is not found in relations between layer n and physics.
Dennett makes a similar point when he remarks that Martians who could manage to know all physical facts, just as our LD, could not detect the high-level event of a stockbroker ordering 500 shares of General Motors. They could “predict the exact motions of his fingers as he dials the phone and the exact vibrations of his vocal cords as he intones his order. But if the Martians do not see that indefinitely many different patterns of finger motions and vocal cord vibrations – even the motions of indefinitely many different individuals – could have been substituted for the actual particulars without perturbing the subsequent operation of the market, then they have failed to see a real pattern in the world they are observing.” (Dennett 1987, pp. 25-6)
My point is that in order to detect this pattern, Martians or LDs should be able to produce reports about the immediately adjacent lower layer, which would be that of economic agents, markets, etc. Each one of these objects can be detected using, again, conditions that refer to the lower adjacent layer, and so on through a series that would end with human beings and basic community structures. At each step, multiple realization is found, but all possible instances have something in common, which is precisely what is expressed by the detection conditions. However, this kind of invariance is lost between high-level types and physical events, as they belong to very distant layers. Nevertheless, structures or events are always analyzable in terms of the lower adjacent layer. And it should be stressed that the property that allows us to qualify something as a “high-level item” is not the absence of laws, nor unanalyzability, but the property of structural richness and/or mutual causal dependency between structures. The absence of a simple mapping into physics is a side-effect. In the next section, when the case of telecommunications protocols is examined, this will become clearer.
The case put forward by Dennett is quoted by Owens (1989) in his paper Levels of Explanation. He takes physics as the fundamental science, but this primacy “does not entail the hegemony of physical explanation”. Admitting supervenience, the reductionist thesis that “there is nothing of explanatory importance, no laws or nomologically interesting classifications of events which cannot be formulated in the language of physics” would only be true if all the laws of the special sciences instantiated macrophysical laws. For instance, someone who knew all the laws of physics and all the physical facts could predict anything that an economist can explain or predict. In our case, this someone is a Laplace’s Demon, but he can explain economics only if he is upgraded with a convenient set of conditions. Owens considers the case of two economic events S(1) and S(2) instantiating an economic law, and supervening on two sets of physical conditions u and v. Now, the thesis of supervenient causal relations would say that S(1) explains S(2) because u explains v. But this is only possible if the relation ‘causally explains’ is agglomerative (what I called ‘additive’ in the sense that properties can be extended from parts to wholes (2)) and transitive (3). According to Owens, this relation is neither agglomerative nor transitive, hence the explanatory autonomy of special sciences. My conclusions are similar. The absence of the agglomerative property is related to “circular causality” and the absence of transitiveness is related to the absence of invariance in non-adjacent layer relations.
The result is that:
(7) The description of the structures and properties of compound objects and their processes can be ordered in a hierarchy of levels, with possible disjunctions of multiple realization, and circular causality. This may prevent the formulation of supervenient causal relations directly from level n to microphysics. However, they can always be formulated between level n and level n-1. That is, supervenient causal relations are not transitive, for at each level several options can be instantiated.
Here the approach with regard to multiple realizability is different from both Horgan’s and Kim’s approaches. Instead of local reductions allowing a mapping from layer n to the fundamental layer, we have reduction only between adjacent layers. But instead of a global, non-analyzable supervenience, we have inter-layer analysis and explanation. What should we call this? Perhaps a “layered reductionism” (nice layered reductionism). If we had a single layer of complexity, with no multiple realization and no circular causality, it would be true that “there is nothing new under the sun”. This would be the case of a universe constituted by a uniform gas, or a universe constituted by rigid spheres of matter. But when new kinds of macroscopic behaviour appear, with autonomous circular causality, it could be said that there is “something new under the sun”, although we always have the same materialistic ontology and the new structures can be analyzed and described by sets of conditions. Perhaps one way to express this novelty and the explanatory autonomy of special sciences is that “physics does not exhaust all that can be said about matter”. Special sciences could be seen as different discourses, different reports produced by an LD about different degrees of complexity in matter arrangements. They would not be a poor substitute for physics. Instead, they focus on actual invariant properties of structures as expressed by sets of conditions (6) which cannot be accounted for from the point of view of physics. In that sense, their objects are not epiphenomenal, as structures can be said to have a fundamentum in re: they could be detected by an LD upgraded with a convenient set of conditions. Later I shall try to give a sketch of layers, but before that, it seems advisable to study a concrete case, the seven-layer OSI model for telecommunications networks.
I have chosen this case because it exhibits all the features I have introduced, while being much simpler than cases where living or human beings are involved.
I will use this case to illustrate the following:
1. Items at each level are defined by conditions over the lower one.
2. There is a device, the Protocol Analyzer which would play the role of LD and can detect features at each level if it is furnished with those conditions.
3. According to Medawar’s concept, higher levels are defined, or built out of lower levels rather than deduced from them.
4. At each layer we find Horgan’s “neutrality of realization”, multiple realization, and circular causality.
5. There is supervenience and causal closure: there cannot be a difference in higher layers without a difference in lower ones.
What happens when an e-mail is sent? The e-mail application starts, a message with a title and an e-mail address is written, and when the “Send” button is clicked, the sender’s computer tries to connect to its mail server. If the connection is successful, the message is delivered and sent on to the destination mail server. The next time the recipient starts his e-mail application, he connects to his mail server and retrieves the message. Here the network used is the Internet.
What happens when money is withdrawn from an ATM? Some characters are typed on the keyboard, the magnetic band of the debit card is read, and a message is constructed and sent to the bank’s host. The transaction is processed and, if everything is right, a message is returned to the ATM authorising the withdrawal. Probably here we have an X.25 public network.
In both cases, telecommunications networks are used, and between the first computer that sends the message and the last one that receives it, various devices and software applications take part in the whole process. The original data may be converted to better suit the transmission process. Extra headers may be added to describe whether the message is an answer or a request. Another header must be included, describing the destination address, and the path through which it is supposed to be reached. With the aim of detecting transmission errors, some information is added at the beginning and the end of the message.
All this can be quite complex. The transmission may fail due to hardware problems, such as a link in bad condition, or a middle device which does not respond. As different devices may belong to different brands which use different software applications, the transmission may fail if one application is not compatible with another.
It would be very difficult to write a single protocol to handle all the tasks. So, the whole problem has been partitioned into subproblems with specific functions, in the same way that programming is divided into compiling, assembling, linking and loading. At the top there is a running application which has to send a message to another application, such as e-mail. At the bottom there are some physical events, such as an electromagnetic wave in a conductor wire linking sender and receiver. So, starting with the application program, the question is to establish, in a general conceptual way, which functions have to be performed by the communications software in order to cause the proper physical events in the transmitting media. When the destination device is reached, similar steps have to be executed, in the opposite order, to restore the original message, which has to be delivered to the destination application. This suggests the idea of vertical layering: function n is located between tasks n+1 and n-1 in both source and destination devices. Now let us suppose that a new protocol dealing with a specific task is to be written. This protocol has to be compatible with data received from, or delivered by, other protocols supplied by other providers. It would help a lot if all vendors organised their protocols according to the same layering scheme. This would make it possible to define some standards within each layer. When installing a piece of software covering one or several layers, attention should be paid to compatibility with the standards belonging to higher and lower layers.
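The idea of vertical layering can be sketched as a toy model. The layer names below follow the OSI scheme, but the transformations themselves are invented placeholders: each layer is a pair of inverse functions, sending walks down the stack, receiving walks back up, and each function only ever talks to its immediate neighbours.

```python
import base64

# Each layer: (name, encode toward the wire, decode toward the application).
# Ordered top (application side) to bottom (physical side).
LAYERS = [
    ("presentation", lambda d: d.encode("utf-8"), lambda d: d.decode("utf-8")),
    ("transport",    lambda d: b"SEQ1|" + d,      lambda d: d[len(b"SEQ1|"):]),
    ("data-link",    base64.b64encode,            base64.b64decode),
]

def send(message):
    """Walk down the stack: layer n hands its output to layer n-1."""
    data = message
    for _, encode, _ in LAYERS:
        data = encode(data)
    return data                       # what goes "on the wire"

def receive(data):
    """Walk back up the stack: layer n hands its output to layer n+1."""
    for _, _, decode in reversed(LAYERS):
        data = decode(data)
    return data

wire = send("hello")
assert receive(wire) == "hello"       # the round trip restores the message
```

Note the design choice: no layer inspects what another layer did; compatibility between adjacent encode/decode pairs is all that is required, which is exactly the point of the layered schema.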
In 1984, the International Standards Organization (ISO) published the OSI (Open Systems Interconnection) Reference Model (ISO 7498-1), with seven conceptual layers (Comer 1988). Other layering models exist, such as the IP model or IBM’s SNA. But with a little work, the OSI model can also account for the other layering schemes.
Layer 7. APPLICATION. Programs that use the network, such as Xwindows, Telnet, FTP (FTAM, a protocol for File Transfer) or e-mail (X.400 standard).
Layer 6. PRESENTATION. Different kinds of data are produced by application programs. Before being delivered to the network, they should be converted, and sometimes compressed, into standard formats (when coming from the network, data is decompressed and presented to application programs).
Layer 5. SESSION. Applications seldom send a single stream of bits. Communication often consists in a series of messages, starting with a setup, an identification, and ending with a termination message. Protocols controlling sessions between applications are assigned to layer 5. For instance, it is agreed to send a particular initial message; then the system waits for an answer which guarantees that the other interlocutor is ready. Only after this initial exchange does the session begin. Another fixed message ensures that the session ends normally without messages pending.
Layer 4. TRANSPORT. There are programs that check the flow of data between sender and receiver, resending streams of data if there are losses at lower levels. This is particularly necessary when several middle devices take part in the transmission.
Layer 3. NETWORK. There can be different possible paths, across different devices, between sender and receiver. The network layer routes the data. There are different strategies to do this. In one case, the path to access the destination is known from the beginning, just as when we dial a telephone number. Instead, the Internet strategy is analogous to sending a letter. The message is delivered to the network with an address. It goes to the next node (the post office), from where it will be sent to another node. It may happen that the address does not exist, or the path is interrupted. Then a message is returned to the sender (that is when, surfing the web, you get the too frequent message “URL not found”). There can be optimisation strategies depending on the availability or saturation of certain parts of the network. Traffic is re-routed through the optimal path.
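The “letter” strategy can be sketched as hop-by-hop forwarding. The topology, the node names and the `route` function below are invented for illustration: each node consults only its own table of next hops, and the full path emerges (or fails) step by step, never being known in advance at the sender.

```python
# Toy next-hop routing tables: for each node, which neighbour to hand
# a packet to for a given destination.
ROUTING_TABLES = {
    "A": {"D": "B"},        # from A, packets for D go to neighbour B
    "B": {"D": "C"},
    "C": {"D": "D"},        # C is directly attached to D
}

def route(source, destination, max_hops=10):
    """Forward hop by hop; return the path actually taken, or None."""
    path, node = [source], source
    while node != destination:
        table = ROUTING_TABLES.get(node, {})
        if destination not in table or len(path) > max_hops:
            return None     # the "URL not found" case: no known path
        node = table[destination]
        path.append(node)
    return path

assert route("A", "D") == ["A", "B", "C", "D"]
assert route("A", "X") is None   # unreachable destination
```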
Layer 2. DATA LINK. This layer is concerned with synchronisation and controlling data flow between two adjacent nodes (it must be remembered that between sender and destination, data can cross many other devices in the network). Data is received from layer 3 and prepared to be sent. This consists, very often, in partitioning the data into packets of a certain size, called frames. Sometimes frames must have a minimum size, and blank data is added to small blocks to reach the right size. At the other end, the continuous stream of data delivered by layer 1 is converted again into frames, according to certain rules, and the prior message is reconstructed. As data is a continuous stream, the two devices have to agree about when (where) to start looking for sequences of bits. In asynchronous communications, each block of data is delimited by a special sequence that makes it possible to know where it begins and where it ends. In synchronous communications, signals sent by device A allow device B to adjust its adapter to be in bit phase with A (given the square wave, the adapter “looks” at the analogue signal to be converted to bits at the same time intervals that are used by A), and starts “reading” bits at a certain moment. In local area networks (LANs) the most common protocols used for linking data are Ethernet and Token Ring. Another frequent protocol is High-Level Data Link Control (HDLC).
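Framing can be sketched as follows. The frame size, padding byte and delimiters below are invented, and real protocols such as HDLC are far more elaborate; the sketch only shows the idea: a byte stream is cut into fixed-size frames, short frames are padded, and delimiters tell the receiver where each frame begins and ends.

```python
# Invented framing parameters for illustration only.
FRAME_SIZE, PAD = 4, b"\x00"
START, END = b"<", b">"

def to_frames(data):
    """Partition a byte stream into delimited, padded frames."""
    frames = []
    for i in range(0, len(data), FRAME_SIZE):
        chunk = data[i:i + FRAME_SIZE]
        chunk += PAD * (FRAME_SIZE - len(chunk))   # pad to minimum size
        frames.append(START + chunk + END)
    return frames

def from_frames(frames):
    """Reconstruct the prior message (assumes it does not end in PAD bytes)."""
    payload = b"".join(f[len(START):-len(END)] for f in frames)
    return payload.rstrip(PAD)                     # drop the blank padding

msg = b"HELLO WORLD"
assert from_frames(to_frames(msg)) == msg
```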
Layer 1. PHYSICAL. A stream of bits is assembled with frames coming from layer 2, and some physical events are produced in the transmitting media. There are different standards depending on the media (several types of wire, optic fiber) and the protocol. They specify electrical characteristics of voltage and current, maximum length allowed, type of connectors (e.g. the RS232 9-25 pin connector for serial communications), modem features, etc.
Lower layers (1-4) are concerned with the interconnection of computers, ensuring data flow between source and target. They don’t know which data they are transmitting. Upper layers (5-7) are concerned with the dialogue between applications running in computers. They don’t know how their information items are going to be transmitted.
Each layer n performs its functions adding extra headers to the blocks of information that are delivered from layer n+1. So, when layer 1 is reached, the headers from all preceding layers are accumulated. At the other end, those headers are processed in the reverse order, and the original message is reconstructed. The process can be more complicated, with partitions or assemblies of blocks of information. Middle devices between sender and destination may translate one protocol into another within a layer; in that case, operations performed at origin and destination are not mirrored exactly, that is, the protocols used at each level are not the same. This flexibility is precisely the aim of the layered design of telecommunications protocols. Different strategies are used to exchange data between several devices connected to a network. NETBIOS has no hierarchical addressing system. Thus the message goes around all possible physical connections until it reaches the destination. When different networks are connected and complexity increases, more efficient procedures have been developed. TCP/IP has a four-byte addressing system that targets networks, subnetworks and stations within a network (for instance, 22.214.171.124 is the address of Mind‘s server). This makes it possible to optimize the path through which the destination can be reached.
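The accumulation of headers can be sketched as a push/pop discipline. The header strings below are invented; the point is that each layer prefixes its header on the way down, and the destination strips them in exactly the reverse order, which is why compatible protocols in adjacent layers must match.

```python
# Invented headers for the layers that add one in this sketch.
HEADERS = ["L5:", "L4:", "L3:", "L2:"]   # session, transport, network, link

def encapsulate(message):
    """Walking down the stack, each layer prefixes its own header."""
    for h in HEADERS:
        message = h + message
    return message                        # all headers accumulated at layer 1

def decapsulate(data):
    """At the destination, the outermost header comes off first."""
    for h in reversed(HEADERS):
        assert data.startswith(h)         # adjacent protocols must match exactly
        data = data[len(h):]
    return data

wire = encapsulate("order 500 shares")
assert wire == "L2:L3:L4:L5:order 500 shares"
assert decapsulate(wire) == "order 500 shares"
```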
Let us see an example. It could happen that someone in Amsterdam wants to send an e-mail to someone in Barcelona. Let us suppose that he is working with a PC within a Token Ring LAN. This protocol covers layers 1-2. All stations are interconnected sequentially with twisted wire, and the last one again with the first. So, a ring is formed. Each station has an adapter to be attached to the transmitting media. Each adapter has a unique address. In Token Ring LANs, data travels in a unique “token”, which can be seen as a “small truck” that goes around the whole ring. When a station has to send something, it waits until an empty token arrives. Then it fills the token with data and the address of another adapter in the ring. Each station looks at the token as it passes around. If it is addressed to itself, it collects the information. Every time a resource within the network is used, such as a data server or a printer server, something like that is performed. In order to send an e-mail, the message has to reach a mail server that is outside the Token Ring LAN. The e-mail application (layer 7) is started, and a message and an e-mail address identifying the mail server in Barcelona are written. The message may, or may not, be converted by a protocol belonging to layer 6 (Presentation). This message goes from an IP source address to an IP destination address. The Internet has no protocol to handle session control. So, data is taken and converted into packets by the lower layers 4 and 3 (Transport and Network) using the TCP/IP protocol. As data has to travel within a Token Ring LAN, it is encapsulated (i.e. treated as a message from an application at layer 7) in a Token Ring frame, which prepares data to be sent through the LAN, addressed to a device which connects the ring to the outside world.
Data passes through several stations (NETBIOS layer 4, Transport, and layer 3, Network), using the Token Ring protocol (layer 2, data link between adjacent nodes, and layer 1, Physical) until it reaches a special device, a router. TCP/IP packets are de-encapsulated, and the destination address is read. The router can communicate with other routers. It has a table which indicates through which adjacent routers the destination address can be reached. If the destination address is not found in the table, a broadcast message is sent in all possible directions trying to locate the destination. This way, the path to the destination address can be found and added to the table. Let us suppose that the path attaches to a major Internet node through a public X.25 line. The router prepares the packets of data to be delivered to X.25 Network protocols (layer 3). Data travels through the public network, merged with packets coming from other users, until it reaches the router addressed. Here the same procedure (finding the path, converting protocols if necessary) will be performed until the last router is reached. The sender’s mail server contacts the destination mail server in a similar way. This mail server may lie in a LAN controlled by Ethernet protocols. The router will reconstruct the message up to layer 3. Data is delivered to Ethernet protocols. As a station within an Ethernet LAN, the router “listens” until there is no transmission, and then sends a frame addressed to the station that acts as a mail server. Here we do not have a ring with a unique token. It is possible that another station tries to send a frame at the same time as the router. In that case a collision occurs, and the stations receive an error signal. They wait for a random amount of time, and try again.
The message is read by all stations (Ethernet encapsulating TCP/IP) and finally it reaches the mail server, where it is reconstructed until the TCP/IP protocols recover the message and deliver it to the mail application. A minute after the message was sent in Amsterdam, our man in Barcelona may connect to his mail server and recover it, hardly suspecting what has happened meanwhile.
The aim of the last paragraphs was not, of course, to train the reader to manage telecommunications networks, but to give a feeling for a real case of multiple realization. Let us consider the issue of inter-level relations between the message to be sent and the physical events that take place in the transmitting media. Of course we find "neutrality of realization" and multiple realizability. The standards were developed with precisely that aim. New application programs can substitute old ones on the same hardware. The type of LAN can be modified and the applications go on working, if the new protocol at Layers 1-3 is compatible with the pre-existing ones at upper levels. New segments of improved transmitting media can be introduced; apart from copper wire there can be optic fiber and radio transmission, and all the rest can remain the same. In fact, the whole physical network and the intermediate devices could change and everything would go on working, provided compatibility between protocols were preserved. Even with no changes in the hardware and the software, two consecutive messages may use different paths depending on the traffic. Here we could reproduce the same remarks made by Horgan (1994, p. 240) with regard to the multiple realizability of the mental. The e-mail example illustrates the fact that multiple realizability does not rule out strong inter-level relations (bridge-laws). Actually, the whole system would not work without them. Compatible protocols in adjacent layers must match exactly. Now let us consider causality. At Layer 7, we can say that the message sent from A is the cause of the answer returned by the application running at the destination. The causal efficacy depends entirely on the existence of a compatible application running at the destination computer. Inter-level relations can be tracked exactly from one layer to the adjacent layers. That is what protocols do: receive data, convert it and deliver it to the next layer. 
If we know which particular protocol is used at each level, we can track how this message is related to the physical events, e.g. an electromagnetic wave, in the transmitting media, and how these events cause other events in the interface device at the destination computer. Perhaps this would correspond to what Kim calls local reduction, but it can change in so many ways that the sense of reduction as explanation is completely lost. We cannot map the high-level message onto physical events in a general way, without taking into account inter-adjacent-level relations, because the mapping depends on which particular protocol is used at each layer, or on what the saturation status of different segments of the network is at this particular time. Simply, it makes no sense; it contributes nothing to the explanation, as the message could be almost anything at the physical layer. This attempt would give, paradoxically, global supervenience, not strong supervenience.
Of course, we have physical causal closure: there cannot be a change at Layer 7, e.g. modifying a single character in the message, without a change at Layer 1, e.g. a different electromagnetic wave. But we cannot know which difference it is without exploring in detail what happens at each layer. So we have reduction, but only between adjacent layers. The word "reduction" can be misleading if it connotes that higher-level features are included in low-level ones. It seems more appropriate to say that they are built out of low-level features. As was the case in topology, the fundamental layer opens a range of possibilities for higher-level objects to be defined, but it does not predetermine them. Certainly, low-level features have an influence. For instance, new technologies in transmitting media, such as optic fiber, may allow video to be transmitted in real time, and new applications could take advantage of this.
Here the equivalent of our Laplace's Demon would be an oscilloscope looking at the transmitting media. It can give us a report about the electromagnetic wave. But with this information we cannot know whether messages are being transmitted successfully or not. We can see perhaps a square wave, but we would not know whether it is useless garbage, noise, or messages. In order to monitor the network, specialized vendors supply a device called a "Protocol Analyzer". This would be the equivalent of the LD plus his upgrades, the set of conditions (6) furnished to obtain reports about high-level items. A good Protocol Analyzer can implement all the most widespread protocols. We could say that the "ontology" of an oscilloscope consists of electromagnetic waves. If monitoring at Layer 1 is needed, a particular protocol is chosen and the device looks for certain streams of data, frames delimited by specified sequences of bits (e.g. frames in X.25, tokens in Token Ring LANs). The "universe" at Layer 1 (Physical) is made of identifiable frames, or noise. Within this layer, it cannot be ascertained whether the message is correct, or whether it has a valid destination. If the transmission is being carried out according to a certain protocol, and the Protocol Analyzer uses another, it cannot detect anything. At Layer 2 (Data Link), the Protocol Analyzer looks for headers and trailers delimiting frames, plus extra control fields that are added in order to check the consistency of the data transmitted. In its "ontology" there is noise, successful frames and unsuccessful ones. In addition to the frames that contain information originated at higher levels, Layer 2 introduces frames exclusively for controlling the reliability of the link, for instance a frame (SNRM) that must be answered by the destination device with another one (e.g. UA) within a certain time. At Layer 3 (Network), the protocol (e.g. 
X.25 or SNA) looks for packets of information delimited by headers containing control fields and addressing information. This allows packets with different destination addresses to be merged on the same transmitting line. The "ontology" for the Protocol Analyzer at this level could be noise, valid packets and invalid packets. In a similar way, at the Session layer (Layer 5) we would have attempted sessions, sessions initiated, sessions ended abnormally, and completed sessions.
The Protocol Analyzer is a perfect example of the problem posed at the beginning: (5) how can a Laplace's Demon detect the occurrence of high-level entities? Here an oscilloscope which registers electromagnetic signals will be able to detect items belonging to a higher level if it is furnished with suitable protocols. Each protocol embodies the "set of conditions" (6) that define these items. Conditions about voltage and current allow streams of bits to be obtained out of electromagnetic signals. Conditions about certain sequences of bits allow the occurrence of valid frames to be detected.
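What a Protocol Analyzer does at Layer 2 can be sketched as a scan of the raw byte stream for delimited frames. The frame format here (a 0x7E flag byte enclosing each frame, and a one-byte XOR checksum) is an invented simplification; real protocols such as HDLC are far more elaborate:

```python
# A toy Layer 2 analyzer: bytes between pairs of FLAG delimiters are
# candidate frames; a frame whose checksum verifies is "successful",
# otherwise "unsuccessful"; everything outside the flags is noise.
# The format (FLAG byte, XOR checksum) is invented for illustration.

FLAG = 0x7E

def scan(stream: bytes):
    """Classify each delimited chunk as ('frame', payload) or ('bad_frame', payload)."""
    results = []
    chunks = stream.split(bytes([FLAG]))
    # Assuming flags come in opening/closing pairs, every second chunk
    # lies between two flags and is a candidate frame.
    for i, chunk in enumerate(chunks):
        if i % 2 == 1 and chunk:
            payload, checksum = chunk[:-1], chunk[-1]
            xor = 0
            for b in payload:
                xor ^= b
            results.append(("frame" if xor == checksum else "bad_frame", payload))
    return results
```

An oscilloscope sees only the signal; this kind of condition is what turns the signal into an ontology of frames and noise.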
Now, let us go back to the real world.
What are the implications of the “layered reductionism” I have sketched?
i) A commitment to a materialistic ontology and the conviction that matter can be much more complex, interesting and structurally rich than we could guess from the explanations that Physics provides. I shall try to examine what Physics says about matter.
ii) The idea of building structures out of matter, in the sense expressed by Medawar. There are different levels of structures and complexity, each one built over another, with properties that exhibit multiple realization and demand autonomous explanations. This autonomy and multiple realization do not entail that bridge-laws between levels are impossible. There are such laws. The question is then: What conditions have to be formulated to allow Laplace's Demon to detect high-level items? Or, if telecommunications protocols are organised in seven layers, which major layers could be put forward to account for what we find in our known universe?
In the next section, a few remarks about the ontology of a "layered world" will be discussed.
4.1 What does Physics say about matter?
I shall simplify drastically, thus assuming the big risks that simplifications bring with them, in order to make manageable a discussion that could be overwhelmingly difficult. What do we find in the universe according to physics? Before 1932 the answer would have been: space, time, and four kinds of particles: protons, electrons, neutrons and photons. Since then, and starting with the neutrino, the zoo of elementary particles has been enlarged with a bizarre variety of new specimens. New candidates for the most fundamental entities have been put forward, such as quarks and, more recently, superstrings. Let us stay in 1931. Particles have mass, electric charge, spin, decay properties, etc. There are four kinds of fundamental interactions according to those properties: gravitation (attraction between masses), electromagnetic (attraction or repulsion between electric charges), strong (attraction between protons and neutrons within a radius of about 10⁻¹³ cm), and weak (which accounts for the decay of the neutron into a proton and an electron). If a particle is placed at some particular point in space, it will experience the interactions of all the others surrounding it. As this will happen to every particle, we can forget about which particle is placed at that point and think about the interactions as a property of that point. This defines what is called a field at every point in space. What can happen to a particle? Apart from decay processes in atomic nuclei, particles basically change their state of motion due to the action of a field. A system is described by the energy associated with the movement of particles, i.e. the kinetic energy T, and the energy U associated with its position in space in the presence of a field. These properties allow us to write a function, the Hamiltonian, in terms of the positions and momenta of the particles. With the Hamiltonian, the equations of motion can be formulated. 
In classical mechanics, those equations are precisely the tool that our Laplace's Demon uses. Given a system of n particles, their positions and momenta, the Hamiltonian is obtained and the evolution of the system can be calculated backwards or forwards. In certain cases the system reaches a stable state where it does not change. In other cases the system may change periodically, returning to the same state over a certain period of time, as is approximately the case of our solar system. Quantum physics introduces uncertainty but, above all, it introduces structure as a result of quantization and Pauli's exclusion principle. Particles combine in more or less stable systems, atoms, where they can have only certain energy values, or states. The exclusion principle forbids two electrons being in the same state at the same time. The way electrons are distributed across different states of energy in atoms defines the periodic table of elements. It is worth remarking that, in establishing which Hamiltonian describes a system, great attention is paid to properties of symmetry and invariance, in the sense mentioned before with regard to Klein's approach to geometry. For instance, if the field created by a particle depends only on the distance, it will exhibit symmetry under rotations, and spherical coordinates will be chosen to formulate the problem. And in the contemporary physics of elementary particles, "invariance" is probably one of the most important notions.
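Hamilton's equations make the Demon's tool explicit. Given the Hamiltonian H(q, p) = T + U of a system of n particles, the evolution of every coordinate q_i and momentum p_i is fixed by

```latex
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i}, \qquad i = 1, \dots, 3n,
```

and these first-order equations can be integrated backwards or forwards from any given initial state.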
Very few cases can be treated through the n-body problem approach (otherwise called the mechanics of mass-point systems) apart from celestial mechanics in classical physics and simple atoms in quantum mechanics, although the latter provides an explanation of the periodic table of the elements and all observed results of nuclear, atomic and molecular spectra. How can Physics be extended from the atomic and subatomic domain to the macroscopic one? The answer to this question is central to my argument, and it would be the following.
Physics deals mainly with ensembles of particles arranged in a uniform way and, in most cases, it deals with stationary states. This allows us to obtain macroscopic magnitudes as averages over microscopic magnitudes concerning single particles, through the methods of statistical mechanics. Uniform systems lack precisely the kind of "order" or "arrangement" that "organic wholes" which are "more than the sum of their parts" exhibit (Cf. Nagel, supra). Typical systems are gases and condensed matter. Now, in statistical mechanics, when dealing with systems in equilibrium (e.g. the kinetic theory of gases), uniformity is introduced explicitly as the hypothesis of equal a priori probabilities (4). And in condensed matter theory, a regular arrangement in space is assumed, e.g. crystal lattices (5). The goal is to obtain known macroscopic results in electromagnetism, thermal processes, and mechanical properties, as averages of the magnitudes ascribed to individual components of the system. A typical approach would be the deduction of the specific heat (the quantity of heat necessary to produce unit change of temperature in unit mass) from considerations about how those components can move (e.g. vibrations around a point, which may be quantized). And that means deriving the process of disorganized energy (heat) being absorbed by a uniform arrangement of components. Now, the success of statistical mechanics is extraordinary. It is not strange that Nagel chose thermodynamics and specific heat to discuss the reduction of theories. This success is also a great achievement of quantum mechanics. The different levels of energy allowed to electrons, depending on the electronic structure of the atoms that constitute the solids, explain why there are conductors, semiconductors and insulators. And despite this enormous success, we keep on insisting that "physics does not exhaust all that can be said about matter". 
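The move from microscopic to macroscopic magnitudes can be illustrated with a minimal, highly simplified classical sketch: by the equipartition theorem, the mean kinetic energy per particle of an ideal monatomic gas is (3/2) k_B T, so temperature is just an average over per-particle energies.

```python
# Temperature as a macroscopic average over microscopic kinetic energies,
# using the classical equipartition relation <E_kin> = (3/2) k_B T.
# A deliberately simplified sketch of the statistical-mechanics move.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature(kinetic_energies):
    """Average the per-particle kinetic energies (J) and invert equipartition."""
    mean_e = sum(kinetic_energies) / len(kinetic_energies)
    return 2.0 * mean_e / (3.0 * K_B)
```

A single number, the temperature, conveys what is relevant about the whole ensemble, which is exactly the point made above about the macroscopic description not being a poor substitute for the particle-by-particle report.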
The success of this reduction is possible because it is performed between adjacent layers of organization of matter. Difficulties in reduction arise when higher layers of organization are considered.
It is interesting to compare the process of energy exchange when a body is heated (thermodynamics) with the energy processes involved in a garage door activated by infrared rays. In the first case, radiation heats a uniform arrangement of matter in a uniform way. In the second one, the system has its own source of energy, which remains inactive until it is activated by a specific signal, while remaining absolutely indifferent to other kinds of energy exchanges. Now, physics can explain the properties of each part of the system, but it has no theory about arrangements. Non-uniform arrangements of matter and their processes lie outside the scope of physics. Physics deals with simple systems (a few mass points) or general uniform systems (lots of particles in uniform arrangements).
Let us do something fancy, just for a change, and imagine that the different sciences are compared to different manuals provided to an apprentice of the Demiurge in order to help him in his task of building the universe. In that case, Physics would be the first volume, entitled "Building with matter. A primer. Getting started". This should be understood not as a diminishment of physics but as a way of expressing the fact that more complex arrangements of matter can be found (6). The n-body problem, dealing with any number of particles in any arrangement, would be the most general case, as was topology in Medawar's approach. As first extensions of the general case we would have stable structures in atomic domains and uniform arrangements of these structures in macroscopic domains. The set of conditions (6) to be supplied to Laplace's Demon in order to detect those items can be easily conceived. It is worth remarking that the macroscopic approach that describes a huge ensemble of particles with a few variables is not a poor substitute for the elementary report given by the LD, listing all the data for each particle. On the contrary, the theoretical model that describes the system with a few parameters has achieved the goal of conveying what is really relevant about that system.
So, when the apprentice of the Demiurge has finished with the first volume (when the LD has been upgraded to detect an inventory of atoms, molecules, gases and condensed matter), we could say to him: "There are more things that can be built with matter than are dreamt of in your physics" (7). As we said before, the commitment to a materialistic ontology does not imply the monopoly of physics as explanation. And denying that monopoly does not imply that high-level items supervene on low-level ones at random, without bridge-laws. What are the next layers? What kinds of conditions have to be formulated for an LD to be able to detect machines, chemical reactions, life and persons?
4.2 A sketch of layers
Here our job is similar to that of the ISO committee when they studied how to divide the problem of telecommunications protocols. We have to divide the different levels of organization of matter. The discussion will be, inevitably, too general and too speculative, but some kind of sketch must be attempted, in order to offer a more concrete image of what I have called "layered reductionism".
Adjacent Layers: Chemistry and Engineering
The conditions furnished by physics would allow the LD to detect and produce reports about two kinds of items, out of the basic report about particles: i) structures at the atomic level (atoms, molecules), and ii) macroscopic ensembles of these structures in uniform arrangements, i.e. gases, fluids, condensed matter, and their thermal, electromagnetic and mechanical features. Chemistry can be seen as the layer built out of (i), and engineering as the layer built out of (ii). Once the LD can recognize atoms and molecules, it can trace their changes, and it can undertake the task of ascertaining which regularities and invariances are present in processes where several compounds of type (i) are present. Those processes are what we call chemical reactions. Which conditions are to be supplied to detect whether a system, e.g. a part of the universe that contains CaCO3, CaO and CO2, is in chemical equilibrium? Keeping track of individual changes cannot give the answer, for changes keep going on, and the same atom of Ca passes from a CaCO3 molecule to CaO. This condition is easy. The LD should be asked to calculate the concentration of the reactants within the volume of the system. This is an invariant property at this level, whereas the surroundings of particular atoms are not. At this level, we can forget about the fields in space, whether an electron is in the vicinity of a certain nucleus or another, or whether it is struck by other particles. It is peculiar that chemistry deals very often with liquid solutions, where the distance between ions, molecules, or atoms allows the exchange of protons (acid-base reactions) or electrons (redox reactions). In typically physical systems we have condensed matter assemblies where the parts are too tightly arranged to allow changes, or gases where the parts are too separated. The formalism of Physics faces difficulties when uniformity is not present, i.e. in liquids and surface problems (although these cases are also studied). 
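The condition just proposed for the LD can be sketched as a check on concentrations, the invariant at this level, rather than on the fate of individual atoms. The numbers, the tolerance, and the value of K below are illustrative assumptions, not real data:

```python
# A sketch of the equilibrium condition for the LD: compute concentrations
# (invariant at the chemical level) and compare the reaction quotient Q with
# the equilibrium constant K. Tolerance and K values are illustrative.

def concentration(n_molecules: float, volume_litres: float) -> float:
    """Convert a particle count within a volume into mol/L."""
    AVOGADRO = 6.022e23
    return n_molecules / AVOGADRO / volume_litres

def at_equilibrium(q: float, k: float, tol: float = 0.01) -> bool:
    """Equilibrium holds when Q matches K within tolerance, even though
    individual atoms keep passing between CaCO3 and CaO."""
    return abs(q - k) <= tol * k

# For CaCO3(s) <-> CaO(s) + CO2(g), pure solids drop out of Q, so only
# the CO2 concentration matters: Q = [CO2].
co2 = concentration(6.022e21, 10.0)  # 0.01 mol in 10 L -> 1e-3 mol/L
print(at_equilibrium(co2, k=1e-3))   # True when [CO2] matches K
```

The point of the sketch: the test never mentions which Ca atom sits in which molecule, only a magnitude averaged over the whole volume.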
From a chemical point of view the world is described in terms of molecules, their arrangements (gas, solution, solid), their changes (chemical reactions where they intervene as reactants or catalysts), and the heat absorbed or released by the reaction. Chemical systems can be closed or open, depending on the exchange of reactants and energy with the surroundings. Systems in equilibrium can be characterised by a minimum of energy and maximum uniformity (or disorder; cf. entropy and the Second Law of Thermodynamics). Chemistry exhibits some explanatory autonomy but, as an adjacent layer, the explanation in terms of physical compounds and processes is fundamental.
The Engineers' Council for Professional Development, in the United States, defines engineering as "the application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works using them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behaviour under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property." Different macroscopic physical parts, with certain mechanical, thermal and electromagnetic features, can be shaped and combined into specific arrangements in order to obtain a certain effect. In architecture, mechanical properties are used to obtain stability using the minimum weight of structural materials. Rigid solids are used in arrangements which can include axles, rotating parts, levers, gears, etc., to obtain the transmission of force, movement or electric currents. Force, movement or electric current can also be obtained from special arrangements using chemical systems such as combustion or galvanic cells. In a very general way, it could be said that engineering devices transmit and transform energy. Thanks to invariance relations, instead of considering the field in space and the kinetic energy of each particle, we have the kinetic energy of macroscopic bodies, rigid solids, the mass of the whole body (an additive property), the general movement of the body, the temperature as a measure of the average vibrational energy of each particle in the body, and the voltage as the field due to the difference in electron concentration. As adjacent levels, engineering devices are explained through supervenient causal relations, but it cannot be said that they are deduced from physics. Macroscopic terms must be introduced as Nagel's "suitable conditions" (1). Engineering reduces to physics if a description of the arrangement is introduced. 
Those conditions are the ones that have to be formulated in order to upgrade LD to the ability of recognizing clocks, electric motors, mills, bridges, or bicycles.
If systems are studied from a general point of view, as black boxes, there is a novelty at this layer: autonomy. Particles from physics, which at the macroscopic level could be seen as rigid spheres with a certain mass, have a kinetic energy which depends only on their mass and velocity. They can be in a field (e.g. on an inclined plane) or receiving an impact, like billiard balls. The only thing that is needed in order to describe the system is its mass and the external forces. Now, if the system is viewed as a unity, compound devices such as a vehicle powered by an electric motor exhibit a new feature. Knowledge of the mass of the system (the vehicle) and of the external forces is not enough to describe it. In the absence of external forces it can move in any direction. Or the system can have an arrangement of parts such that external influences are compensated in order to remain in a particular state. Cybernetics, as a modern branch of engineering, deals with the automation and control of machines. Systems with feedback loops can not only transform or transmit energy, but can change the way they do it depending on surrounding conditions that are detected through sensors. There is a small quantity of energy involved in detection, which is a sample of the surrounding conditions and is used to control the main energy process (e.g. heaters controlled by thermostats, valves of engines controlled by a sample of the velocity at the output, etc.). Programming allows a more sophisticated control in the case of devices with the capability of performing a variety of actions, such as a textile machine or an assembling robot. The samplings of energy extracted at different parts of the process are substituted by a code that determines what is to be executed. In textile machines the code would be an inserted card which configures the structure of the machine. 
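The thermostat-controlled heater mentioned above can be sketched as a feedback loop in a few lines. All constants here (hysteresis band, heating and cooling rates) are invented for illustration:

```python
# A minimal feedback loop: a small sample of energy (the temperature
# reading) controls the main energy process (the heater). Constants are
# illustrative, not taken from any real device.

def thermostat_step(temp: float, setpoint: float, heater_on: bool,
                    hysteresis: float = 0.5) -> bool:
    """Decide the heater state from the sensed temperature."""
    if temp < setpoint - hysteresis:
        return True
    if temp > setpoint + hysteresis:
        return False
    return heater_on  # inside the dead band, keep the current state

def simulate(start_temp: float, setpoint: float, steps: int) -> float:
    """Run the loop: sense, decide, act; the system holds its own state."""
    temp, heater_on = start_temp, False
    for _ in range(steps):
        heater_on = thermostat_step(temp, setpoint, heater_on)
        temp += 0.8 if heater_on else -0.3  # heating vs ambient loss
    return temp
```

Whatever the external disturbance, the arrangement compensates for it and the temperature settles near the setpoint: that is the autonomy the text attributes to systems viewed as unities.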
The instructions can also be codified as electric signals, which can be transmitted and stored as different states of energy of semiconductors (i.e. bits in the memory of computers). Through signal processing, the control of machines is exerted with a minimum of energy. We no longer work with direct samples of energy but with a signal that only needs a threshold of energy in order to discriminate between two possible states. Thus, the existence of structure can make compound macroscopic objects different from big particles, such as a rigid sphere. Particles and rigid spheres interact with other particles. Here, systems interact with information. In this sense, it would be justified to speak of novelty. Here, if we want the LD to report about machines, a set of conditions concerning the properties of the parts and the structure of the arrangement must be supplied.
Now it seems justified to say that physics doesn't exhaust all that can be said about matter. It is quite clear that the development of technology has influenced philosophy, modifying the view of matter. The most complicated arrangement made out of matter that could be conceived by the contemporaries of Descartes was a mechanical clock. And that was enough to suggest that animals could be complicated machines. Automation, signal processing and computing devices have stimulated a new debate, with landmarks such as N. Wiener's Cybernetics or A. Turing's Computing machinery and intelligence. Recent developments in the cognitive sciences have obviously had a profound impact on the philosophy of mind. All this reflection is about the possibility of matter making up rich and complex systems.
Life as a wave over matter
What kind of conditions should be formulated to upgrade the LD in order to detect life? The answer is as difficult as the task of defining life. The definition can be attempted in terms of physiology, metabolism, biochemical compounds, or genetic mutation. Actual exploring devices looking for extraterrestrial life have to face those definitional difficulties. If the task is undertaken as a search for complex organic compounds, kinds of life different from those on Earth could be missed. Certainly the LD could not accomplish his task simply by keeping track of chemical compounds, as those are exchanged continuously with the surroundings. Lynn Margulis (1995) has coined the expression of life as a "wave over matter", going across it, changing it, and configuring it. Certainly the most complex structures of matter occur in living organisms but, can life be defined just as complexity? The thermodynamic definition of life has to explain how complexity and order are possible if closed systems evolve towards a state of equilibrium and homogeneity (a maximum of entropy). The known answer is that living systems are not closed but open, and that energy is dissipated in order to fuel metabolic processes that build order. So, a special boundary is needed, a boundary which partly isolates the macromolecules of life from the surroundings and partly allows the flow of metabolic products, thanks to the control of concentration gradients. At first sight life would be an attempt to maintain complexity avoiding thermodynamic death, by building over and over again compounds that otherwise would disintegrate. The LD would be asked to look for ensembles of compounds and processes that use matter from the surroundings as fuel and as building blocks to repair themselves. 
(I know this condition is not properly formulated here, but it can be assumed that, with some work, and forgetting about general approaches, the expression for the conditions applying to specific, known prokaryotes could be written). In our known biology that means that proteins are synthesised from RNA transcripts, using DNA as the blueprint.
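The transcription step just mentioned can be sketched minimally: DNA used as a blueprint to produce a complementary RNA copy (T pairs with A, and A with U). Real transcription involves polymerases, promoters, and much more; this is only an illustration of the blueprint idea:

```python
# A minimal sketch of transcription: each base of the DNA template strand
# is replaced by its RNA complement (A-U, T-A, G-C, C-G). This ignores
# polymerases, promoters, strand direction, and everything else.

COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(dna_template: str) -> str:
    """Read the DNA template and emit the complementary RNA string."""
    return "".join(COMPLEMENT[base] for base in dna_template)
```

The same template always yields the same transcript, which is the kind of regularity an upgraded LD would be told to look for.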
The other essential features of life can also be seen as strategies to maintain complexity. The strategy of the "repairing system" is not enough, since errors eventually occur in the transcription and the organism would disintegrate. Obviously, reproduction is the strategy used to obtain a fresh copy of the organism and start the process over again. New conditions should be supplied to the LD for this (self-replicating structures), and thus we could ask for reports about species. As has been pointed out before, there is a certain neutrality of realization. For instance, all proteins involved in life processes are levorotatory. Dextrorotatory proteins are also possible, but they don't appear. We can imagine an alternative biology based on dextrorotatory proteins. If they don't appear, it is probably because the initial complete cycle of DNA coding (using as catalysts precisely the results of this code) was levorotatory. And once this circular causality mechanism began, it had such an enormous advantage that it blocked any other possible alternative.
Again, metabolism and reproduction may not be enough, as living organisms depend entirely on suitable surroundings for obtaining metabolic supplies. The third strategy of life is genetic mutation, as a way of changing the structure and processes that define a species. Posing the question about conditions to define new steps would be endless. But it is clear that analysis in terms of components and structural arrangements would be a suitable tool. Margulis has developed a theory about the emergence of eukaryotes where organelles such as mitochondria and chloroplasts are explained as symbiosis with bacteria absorbed by prokaryote cells. New conditions would be formulated to detect structures such as pluricellular organisms. Otherwise, all the further biology that an LD upgraded to the unicellular level could detect would be "bigger cells" (if that were possible) and colonies. Specialized cells, tissues, and organs would require an analysis similar to that of engineering, although much more complicated. Unicellular organisms depend on the presence of an environment where all nutrients can be found (they must be in contact with the kind of environment that is sometimes called the "organic soup"). The logic of pluricellular organisms creates a suitable environment for each cell.
An account of evolution cannot be made by looking only at individual species. Each species evolves together with the surroundings and other species with which it interacts. So, conditions for detecting ecosystems should also be formulated.
Interacting with information
In simple forms of life, the interface with the environment is just chemical, thermal, and mechanical. What the organism does, or what happens to it, depends on the temperature, the pressures exerted upon it, and the substances that reach it. When a nervous system and sensors are developed, the reactions of the organism can be grounded on signals. The adaptive advantages, such as anticipating the correct response towards a possible prey or danger, are obvious. This is a major qualitative change. Particles from physics interact with each other (or move in a field), exchanging energy. Chemical systems or simple cells perform chemical changes. The environment is no longer described as a gravitational or electromagnetic field, but in terms of the reactions that can occur depending on the properties of the compounds. When systems have their own source of energy, and a kind of programming, like machines in engineering, a new kind of causality appears. A very small amount of energy (e.g. the photons that are involved in visual perception) is responsible for a change (e.g. the movement of an animal) that involves a great amount of energy. We could say that the environment is information-coded, and speak of an "information interface" versus a "chemical interface" or "physical interface". These would be called "first-order" signals. Stored information about the environment is what is sometimes called a "cognitive map". Now our task is to upgrade the LD to the level of a behaviourist psychologist. This would mean collecting all the input energy that reaches the organism, tracing which part of this energy is captured by the sense organs, and registering the output. Of course, the problem is much more complicated than that. One of the main difficulties is to discern what constitutes a stimulus and what constitutes a response. 
Nevertheless, the method of posing the problem in terms of the conditions to be supplied to a device (LD) capable of detecting physical and chemical events in the environment, and physiological events in the organism, would still be useful.
Our next step would involve upgrades to cognitive science notions. How can beliefs and desires be detected by LD? Our knowledge of this field is still too limited to formulate explicit conditions. There is no doubt that the connexionist paradigm has been a major advance in the understanding of brain processes, although models are still too simplified to give a realistic account of the brain. Many features of perception, coordinated motion, memory retrieval, semantics, and problem solving receive a better explanation in connexionist terms than in traditional AI. Rather than opposing the two approaches, it seems plausible that the symbolic level has the subsymbolic one as microstructure: “Parallel distributed processing models offer alternatives to serial models of the microstructure of cognition. They do not deny that there is a macrostructure, just as the study of subatomic particles does not deny the existence of interactions between atoms. What PDP models do is describe the internal structure of the larger units, just as subatomic physics describes the internal structure of the atoms that form the constituents of larger units of chemical structure… In general, from the PDP point of view, the objects referred to in macrostructural models of cognitive processing are seen as approximate descriptions of emergent properties of the microstructure” (McClelland and Rumelhart 1987, p. 13).
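The PDP claim that macro-level objects are “approximate descriptions of emergent properties of the microstructure” can be made concrete with a toy associative network in the style of Hopfield (1982). Everything in the sketch below, the patterns, the network size, and the update rule, is an illustrative assumption of mine, not drawn from the cited models: the stored “memory” is a macro-level property located in no single unit or weight, yet it is recovered reliably from the micro-level dynamics.

```python
import numpy as np

# Two stored patterns over eight bipolar units (illustrative assumptions)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian learning distributes each pattern across every weight
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Let the network settle from a (possibly corrupted) state."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state.astype(int)

probe = patterns[0].copy()
probe[:2] *= -1                # corrupt two of the eight units
recovered = recall(probe)
print(recovered)               # the first stored pattern, restored
```

The macro-level description (“the system remembers pattern 0”) is a stable invariant of the dynamics, while the micro-level description is just a matrix of weights; this is the sense in which the symbolic level could have the subsymbolic one as microstructure.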
For instance, candidates for new emergent macrostructures could follow the steps described by Piaget’s genetic epistemology with regard to child learning (Piaget 1937). Again, the notion of invariance appears as a condition for defining higher-level entities. For instance, the acquisition of permanent objects is achieved through the invariance of perceptual and psychomotor experiences (and it seems feasible to describe these in connexionist terms). Invariances would likewise account for the acquisition of abstract concepts and causality. Invariances in the linguistic behaviour of surrounding people make possible the acquisition of language capabilities. Later, the child will be able to manipulate abstract concepts, build hypotheses, and take into account the implications of different states of affairs. I shall interpret this as a new kind of causality.
Let us suppose that through this kind of layered analysis the job can be done, i.e., that mental events can be described in terms of patterns and processes of neural networks. If dealing with telecommunications problems demands seven layers of analysis, it can be guessed that dealing with mind will demand many more. That means that, although analysis can be done at each layer in terms of the adjacent lower one, direct mapping from the mental to the neurophysiological is not going to be feasible. This would lead to the multiple realizability and neutrality of realization Horgan referred to. And as happened in the application layer with respect to the physical layer in the case of telecommunications, a great deal of autonomy will be found. The brain exhibits a great deal of plasticity. For instance, in cases of Rasmussen’s encephalitis, children who had the right hemisphere of the brain removed have been able to recover, with the left hemisphere only, functions once performed by the right one. But at the same time, it will be true that strong inter-level relations, in Kim’s sense, could be formulated between adjacent layers.
Could conditions for perception and belief events be formulated in terms of distributed representations (McClelland and Rumelhart 1987, chap. 3)? There are hints that the issue will be more complicated, as the debate about mental content suggests. Moreover, it seems that representations and beliefs do not map a fixed external world but are a step in a process of representative learning which aims at constructing regularities (cf. Enaction: alternatives to representations in Varela 1991, p. 250ss).
Would we be able to formulate conditions for an LD to detect consciousness? We want to formulate not only the conditions for ascertaining whether an animal or a human being is in a state of arousal or of sleep; we want LD to produce reports describing persons as well. The problem is the same as the one that was posed by Lewis (1974, quoted in Horgan 1984, p. 31):
Imagine we have undertaken the task of coming to know Karl as a person. We would like to know what he believes, what he desires, what he means, and anything else about him that can be explained in terms of these. And we want to know his beliefs and desires in two different ways. We want to know their content as Karl could express it in his own language, and also as we could express it in our own language …. Imagine that we must start from scratch. At the outset we know nothing about Karl’s beliefs, desires, and meanings. Whatever we may know about persons in general, our knowledge of Karl in particular is limited to our knowledge of him as a physical system. But at least we have plenty of that knowledge; in fact, we have all that we could possibly use. Now, how can we get from that knowledge to the knowledge that we want?
It is impossible to mention here even the major issues related to consciousness. I shall make only two remarks. First, some estimations rate at 10⁹ bit/sec the amount of information that reaches the sensors, whereas there is awareness of only 10 to 10² bit/sec (Silbernagl and Despopoulos 1978, p. 254). This suggests that there is a lot of processing activity in the brain and that there must be some kind of selection procedure. The emergence of subjective experience could be interpreted as a way of selecting relevant information processes through the enlightenment of qualitative experience. Second, there are hints that there can be an epistemological limit with regard to reports about conscious states, as has been pointed out by Gunderson (1970), Nagel (1974), Jackson (1986), Searle (1992), McGinn (1989) and others. Perhaps all that we will achieve will be a list of correspondences between introspective reports about qualia and neurophysiological features. This should not be a tragedy. After all, quantum mechanics finds its way very well with its uncertainty relations. There is a lot to learn even if it were the case that we can only obtain reports from a third-person point of view.
Language capability can also be seen as a tool to manage the huge amount of information that a complex environment supplies. As a system of second-order signals, language is an extremely powerful tool to refer to first-order signals. A few bytes of second-order signals, e.g. “Rome”, stand for a vast number of bytes corresponding to first-order signals. With language we have the capacity to build a kind of theory about the world we live in. Many theories could be quoted with regard to this issue, but they cannot be discussed here.
This capability introduces a new kind of causality or interaction. Physical, chemical and simple biological systems interact, or are in contact with, actual, present systems. First-order signals are about here-and-now events. When information about past events can be retrieved from memory, somehow, there can be an actual effect caused by information originated in the past. Furthermore, stored information can be used to develop new information items. Hypothetical situations can be considered, expectations about the future can be raised, new possibilities can be imagined, whether they are feasible or not. This is really a new kind of interaction, an interaction with things which do not exist! To give an example, let us consider how the Stock Market Crash of 1929 and the Great Depression are to be explained. A month after “Black Monday”, all physical and biological resources on earth were almost exactly the same as they were a month before. The technologies to exploit those resources were also the same. So what caused such an important change in the lives of so many people? The only thing that had changed was the expectations of people. A more or less optimistic theory about the future was replaced by a lack of confidence.
Particles of physics are struck by other particles or influenced by a field; chemical compounds transform themselves depending on other compounds with which they are in contact; living beings grow and die within an ecosystem according to their genetic programming. Human beings, apart from those changes, can experience cognitive changes. The external world, instead of simply striking their bodies, may leave everything unchanged except for tiny adjustments in the weights of their neural networks. And those changes have a major influence. In this sense, McGinn speaks of “conceptual causation” (1991): “Once there were no thinking beings and hence no concepts. In those thoughtless days all the events in the world happened through nonconceptual causation. Then concepts found a home in the head region of certain organisms; thereafter concepts began to exert control over the course of nature. The proportion of effects due to conceptual causation and effects due to causation of other kinds gradually grew. When concepts of a scientific sort found their way into human thoughts conceptual causation really came into its own, technology being the most spectacular upshot.”
At this stage, LD has been upgraded to produce reports about different organisms at the physiological level. He can also produce reports about the units belonging to a neural network, its activity states, and the weights of its interconnections. This would account for the information stored in the system. Now, eliminativism can be understood as considering this kind of report the only valid account of mental events. Instead, what I have been trying to say is that several invariances introducing new layers of organisation have to be considered. Here, the report about neural states and processes is the equivalent of the n-body problem in physics. Different sets of conditions have to be supplied to the LD in order to account for high-level features of mental events. This will introduce different layers with multiple realizability and, as we said before, although strong relations can be found between adjacent layers, direct mapping from high-level features to neurophysiological ones cannot be achieved because it is not invariant enough. So, it may very well turn out that Folk Psychology is wrong, as was the phlogiston theory of combustion. But if that were the case, it would be replaced by another high-level theory looking more similar to old Folk Psychology than to neurophysiology. This assertion does not, of course, undermine the value of neurophysiological research, in the same way that stressing the explanatory autonomy of biology does not deny the fact that biochemistry is essential to its understanding. Real understanding can only be achieved through analysis in terms of lower adjacent layers and synthesis to structures belonging to higher layers. What kind of LD report would correspond to that theory? We could ask for a report about all mental events of all human beings. We would obtain a description of the sequence of mental events belonging to a particular subject (emotions, perceptions, beliefs, desires, actions).
Could we ask yet for further kinds of reports?
Biography, community, culture
I think that there are reasons for a positive answer. Even Paul Churchland can be quoted in that sense (1990, p. 51). If “human-consciousness is not just the intrinsic character of the creature itself, but also the rich matrix of relations it bears to the other humans, practices, and institutions of its embedding culture”, then “a reductionistic account of human consciousness and behaviour, insofar as it is limited to the microscopic activities in an individual’s brain, cannot hope to capture more than a small part of what is explanatorily important.” This would imply that “any adequate neuro-computational account of human consciousness must take into account the manner in which a brain comes to represent, not just the gross features of the physical world, but also the character of the other cognitive creatures with which it interacts, and the details of the social, moral, and political world in which they all live. The brains of social animals, after all, learn to be interactive elements in a community of brains, much to their cognitive advantage.” The kind of approach proposed is that there can be neural patterns for social items in the same way that there are neural patterns that account for the perception of forms. In my view, this fails to recognise structural properties of “community” and “culture”. I think this approach is too biased by the case of geometric form recognition. The insertion of human beings in a community and in a context of cultural information belongs to a very different layer from that of visual perception. Relations between structures (self, other selves, community, cultural objects) should be taken into account instead of looking for patterns only in the neural network of the subject. This approach cannot convey relations between patterns in different brains, or relations between patterns in brains and structures outside. (8)
If LD were asked to produce reports similar to actual biographies, in what sense would this be different from a report about mental-event sequences? We asked before how it could be ascertained whether a particular electron was within a solid or a living organism. Now we may ask to which kind of community a human being belongs, and what role he plays in it. Conditions accounting for the vocabulary of cultural anthropology (Harris 1983) should be formulated. Or, what kind of conditions would make it possible to detect civilizations, and their processes of rise and fall, as depicted, for instance, by Toynbee (1960)? Dennett (1987, pp. 25-6) posed the case of Martians trying to understand the event of a stockbroker ordering 500 shares of General Motors. My approach in terms of LD reports revisits the same issue. If an LD or Martians came to earth, what would be the prerequisites for grasping, for instance, which countries there are, which economic and political systems, which systems of values, moral convictions, aesthetic models, scientific theories? My hypothesis here is that those conditions could be formulated through a careful analysis of invariances in layered structures. But in none of these cases would elementary physics or neurophysiology be enough.
This is not only to point out that there are further layers beyond cognitive psychology. What I want to suggest is that reports about mental events cannot be understood without taking into account these other structures. There would be three major issues: biography, community and culture.
A human being in a particular instant, and the actual mental events occurring to him, cannot be explained without referring to past events and actions, his actual recall of these events, and his actual expectations about future events and actions planned. The fact that the actions of human beings are inserted in a community should be a platitude. And the fact that a great deal of the cognitive mapping of the world is not acquired through abstraction over perceptual information, but as an already elaborated piece of cultural content, should also be a platitude. But it appears that the terms community, culture, or anthropology are notoriously absent from the literature on philosophy of mind. In fact, I would say that valid LD reports about human beings should resemble more what novelists and biographers actually write, like James Boswell’s Life of Johnson, than reports about neural configurations, even if Folk Psychology as we know it were wrong. Again this assertion does not imply that strong inter-level relations cannot be found, but that accounts of relations between parts in structures, and of the special causal relations between them, have to be formulated.
Particles of physics “contact” other particles through interactions and change their state of movement. Chemical compounds “contact” other compounds and transform. Living systems exchange energy, nutrients and information, “contacting” an environment, an ecosystem involving physical things and other living beings; and they do it according to a genetic program. Human “systems”, moreover, are in “contact” with their past and their expectations, in “contact” with a community where they have particular roles, and in “contact” with an ensemble of cultural content. All Popper’s remarks about World 3, its objectivity and independence, could be inserted here. (9)
A very interesting contribution with regard to the way we interact with cultural content is that of Dawkins (1976) and Dennett (1990), introducing the concept of the meme. Memes are items of cultural content, ranging from musical themes, mathematical concepts and household procedures, to social habits or moral prejudices. In the same way that viruses can invade bodies, memes occupy minds, can propagate, and evolve. Again we could try to formulate conditions to allow an LD to produce reports about which memes are flowing through different minds, which ones are going to become extinct, or which ones are growing. Complex systems like machines or living beings were possible because they had their own source of energy, i.e. they had procedures to “load” a kind of battery from the environment. This allowed external programming (genetic programming in living beings). Now what we have is systems with their own source of information, self-programming systems, i.e. systems that have procedures to load a kind of database (mainly by consuming information produced by other similar systems), and to manipulate it to obtain a cognitive map of the environment, themselves, their past, their future expectations, or whatever they can imagine.
I would like to suggest that those remarks could mean a change with regard to the use of computers as metaphors or theoretical resources to understand human beings. There is no doubt that from Turing’s paper to Hopfield’s Neural networks and physical systems with emergent collective computational abilities (1982) (again, high-level features appear to emerge over lower ones) computer models have contributed a lot. But in general, the case that has been posed is that of a single, isolated computer (the human being), whether it happens to be sequential or parallel processing; a computer which receives data through peripheral sensors (human senses contacting the environment), processes information, and performs some actions through other peripheral units. Perhaps further research could be done if, instead of a single computer with its peripheral units, a network of computers (human beings in a community) were to be considered. Cultural contents, or memes, would be equivalent to the data that flows through the network without being totally located in a particular unit. As Popper said, scientific theories, myths, and aesthetic models are not located specifically in particular minds. They occur, partially, in minds; they are also stored in books and films; they are transmitted by parents, teachers, newspapers, TV, etc. The single, isolated computer approach can lead us to understand only “enfants sauvages”. To account for human beings we need a model resembling more what is called a network computer, a computer that obtains a major part of its software through the net. Division of work and social roles could be modelled as computers performing collective functions, such as printer servers, data servers, software servers, net monitoring, etc.
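The networked picture can be given a minimal computational sketch. Every name, number, and rule below is an illustrative assumption of mine, not a claim about how memes actually propagate; the point is only that, at the end of the run, the “cultural pool” is distributed across the network rather than located in any single mind.

```python
import random

# Toy model: memes flowing through a "network" of minimal minds
random.seed(1)
MEMES = ["theory", "tune", "recipe", "proverb"]
agents = [set() for _ in range(20)]          # twenty minimal "minds"
agents[0].update(MEMES[:2])                  # two memes enter the community

for step in range(100):
    a, b = random.sample(range(20), 2)       # two distinct agents meet
    if agents[a]:                            # b imitates one of a's memes
        agents[b].add(random.choice(sorted(agents[a])))
    if agents[b] and random.random() < 0.05:
        agents[b].pop()                      # occasional forgetting

spread = {m: sum(m in mind for mind in agents) for m in MEMES}
print(spread)   # how widely each meme has propagated
```

An LD report at this layer would be something like the final dictionary: which memes are growing and which are becoming extinct, a description that refers to the community as a structure, not to any individual unit.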
How could this layered analysis be applied? It would be a much more complicated task than that of the seven layers we saw with regard to telecommunications. We would not find a simple vertical schema. Let us take again Dennett’s example of ordering 500 shares of General Motors. He remarks that Martians who could manage to know all the physical facts, just like our LD, could not detect the high-level event of a stockbroker ordering 500 shares of General Motors. They could “predict the exact motions of his fingers as he dials the phone and the exact vibrations of his vocal cords as he intones his order. But if the Martians do not see that indefinitely many different patterns of finger motions and vocal cord vibrations, even the motions of indefinitely many different individuals, could have been substituted for the actual particulars without perturbing the subsequent operation of the market, then they have failed to see a real pattern in the world they are observing.” (Dennett 1987, pp. 25-6). Now, stock exchange practice defines what a valid order is. This can be used as a detection condition only if the LD has already been upgraded to detect economic properties or concepts such as goods, property, production, distribution, money, market, etc. It is clear that here some kind of circular causality is found. The physical event of pronouncing the words for ordering shares is only causally efficacious when a structure such as a market is present. The stock exchange could have been implemented in many different ways; that is, we have multiple realizability. Again, conditions for detecting “market systems” could be formulated if the LD is able to detect “human communities” and their ways of surviving in a given environment. Human communities would in turn be structures defined upon conditions about cognitive and biological features of biological species, etc.
If the framework I have just sketched very roughly were feasible, it would offer a unified scientific approach, with all the benefits of reductive analysis and without entailing the nothing-but problem. Special sciences would not be poor substitutes for physics, and their objects would not be epiphenomenal but grounded on invariant properties. If this were feasible, and the condition (6) for detecting high-level entities could be formulated, Nagel’s conditions concerning connectability and derivability ((1), (2)) would be fulfilled. Emergence in the sense of non-deducible properties (3) could be explained. Several serious issues would still remain to be studied. The explanation of emergence in the sense of an “evolutionary cosmology” (4) would be one of them. This should be a matter of scientific inquiry. The problem of understanding “free will”, “agency”, “value”, etc., in short, of building a philosophical anthropology within this framework, would be another major issue, a challenge open to philosophical inquiry.
In summary, we have:
i) A materialistic ontology, in the sense that there is nothing apart from the particles of physics assembled in structures of different complexity.
ii) Structures defined by invariance relations between their parts. Each layer is built out of lower ones, defining new kinds of objects and properties (cf. Medawar, supra). High-level objects and events can exhibit multiple realizability (or neutrality of realization) with regard to parts belonging to the adjacent lower layer.
iii) Conditions can be conceived for finding out whether these invariant relations occur. In principle, then, every “whole” should be analyzable in terms of its parts and their relations. But as multiple realizability can be found at several layers, the invariance that can be found between adjacent layers cannot be found between non-adjacent layers.
iv) The science of physics does not exhaust all that can be said about matter. Different sciences account for structures at different layers of complexity. Features like neutrality of realization and circular causality lead to an explanatory autonomy of the special sciences. It could be said that the different sciences interrogate matter in different ways. The terms that appear in the question are those that belong to this very layer and the adjacent ones. In the sense that macroscopic complex systems behave in a totally different way from the systems the science of physics deals with (they are not like big particles), we may speak of novelty or of emergent properties.
It is to be noted that the expression “physical systems” is somehow misleading, because it can be interpreted in two ways: (i) As a system made out of the elementary particles of physics, electrons, protons, etc. In that sense, a materialistic ontology would say that it is true that everything is a physical system. (ii) As the systems that appear in the books about physics, i.e. systems that can be explained by the formalism of physics: pendulums, celestial mechanics, atoms, gases, condensed matter, etc. In that sense, it cannot be said that everything that exists is a physical system. There are machines, living beings, human beings, cultural content, etc.
In a similar way, the term “reduction”, for instance in “biology can be reduced to physics”, can also be misleading. It can be interpreted as (i) living systems are nothing but ensembles of particles of physics. This has the connotation that the reduced high-level items are appearances and that the ultimate reality is particles. It would seem that all the diversity and richness that life exhibits is illusory, because all that there is amounts to no more than rearrangements of particles in space. Instead, the right interpretation would be (ii) that living systems can be analysed as structures of biochemical compounds performing processes of metabolism, reproduction and genetic evolution; and that biochemical compounds can be understood in terms of chemistry, and chemistry in terms of quantum mechanics. In that sense, “reduction” does not imply a diminishment but, on the contrary, an enlightenment. Life has turned out to be more complex and fascinating than we could have dreamed of before the birth of modern biology. In addition, this analytic approach has brought up new possibilities such as genetic engineering. Nanotechnology would be another result of the analytic approach (the precise assembling of matter at the molecular level, a job that until now was reserved to ribosomes in cells). Perhaps matter has even more surprises and novelties in store for us. That is what I intended to say with the paraphrase “there are more things that can be built with matter than are dreamt of in your physics.”
It could be said that an infinite mind could predict, starting from the fundamental laws of physics, all possible arrangements and structures. As Poincaré (1964) and Ayer (1936) said, an infinite mind would grasp all mathematical truths at once, regarding all theorems as tautologies. But this does not deny that these tautologies could be classified in a system similar to that of Klein. Thus, perhaps the property of novelty depends on the limits of our understanding, but the property of invariance or structural richness does not (Cots 1997).
In order to stress the second interpretation of “reduction”, it could be said that the materialist hypothesis does not aim at “reducing spirit to nothing but matter”. Rather, it tries to understand how matter can “reach spirit”, i.e. have the features that we once ascribed to spirit as a different ontological domain.
Now, I can see Ockham’s razor approaching menacingly to cut away most of the layers I have sketched before. What kind of ontology could be drawn out of what has been said? Can all this discussion be described as simply labelling physics, chemistry, biology, etc. as matter-1, matter-2, matter-n? Is this alternative worse than dualism or physicalism on the ground that simplicity is lost? Metaphysics was once understood as an inquiry about what was ultimately real, opposing reality to appearance. What is real according to this layered schema?
With regard to the loss of simplicity, my answer would be that our universe is not simple, but rich and complex; without taking “complex” as a synonym of “confusing” or lacking order. I would dare to say that dualism and physicalism (in the sense that physical systems are the only ultimate reality) are lazy solutions to the problem of understanding the universe. Dualism would be lazy because it refuses to inquire how the mental is built out of the physical, so that it cannot explain their interdependence. Strict physicalism, if it exists, would be lazy because it refuses to inquire about the invariant relations that define structures. It would be like a Laplace’s Demon solving a general n-body problem without looking for any further features.
A human being can be seen as a physical system weighing 160 pounds, a chemical compound, a living organism with a certain genotype, a cognitive system that processes information, a person, etc. How is the question “what is ultimately real?” to be answered? I would say that all levels are. Let us consider a metaphor. In frequency modulation, a signal is coded over a carrier wave, changing its frequency while its amplitude is kept constant. This carrier wave is also a signal. Now, is the carrier wave the ultimate reality out of the two signals? Well, it is not for those who are listening to the radio. The signal transmitted over the carrier is real also, and it is so because the carrier has certain properties that can be detected by an LD, or a tuner. In fact, as Margulis suggested with regard to life, all the layers that have been sketched could be seen as signals coded again and again over other signals, in other words, over lower adjacent layers. If the lowest is posed as the unique reality, we won’t be able to tune in the higher ones. Dualism would be equivalent to listening to the music coming from the radio, and looking at an oscilloscope monitoring the carrier wave, without being able to relate the two events. So, if the criterion for ontology is obtained only from the constituents of things (the first sense of the expression “physical systems”), we have monism; if the criterion takes into account the structures built out of these constituents, we have pluralism.
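The frequency-modulation metaphor can be sketched numerically. The frequencies and durations below are illustrative assumptions of mine: the carrier’s amplitude stays constant (the “oscilloscope view”), while the message is real and recoverable from the frequency structure (the “tuner view”).

```python
import numpy as np

fs = 10_000                          # sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
f_c, f_m, f_dev = 1000, 50, 200      # carrier, message, frequency deviation
# FM: phase accumulates the carrier plus the integrated message
phase = 2 * np.pi * f_c * t + (f_dev / f_m) * np.sin(2 * np.pi * f_m * t)
wave = np.cos(phase)                 # what an oscilloscope would display

# The amplitude is invariant ...
assert np.abs(wave).max() <= 1.0
# ... but a "tuner" recovers the message from the instantaneous frequency:
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(inst_freq.min(), inst_freq.max())   # swings roughly between 800 and 1200 Hz
```

Both descriptions refer to the same physical wave; which one is “real” depends on which invariant (constant amplitude, or modulated frequency) the detector is built to register.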
It was not my intention to discuss realism/instrumentalism issues, as I assumed at the beginning, as a working hypothesis, that reality was an ensemble of elementary particles in space. Nevertheless, the layered schema that I have proposed can offer an interpretation of instrumentalism as choosing the level of analysis that is most convenient for the problem to be solved, in a similar way as the Protocol Analyzer did with regard to telecommunications. Within each layer, a certain vocabulary or ontology is used depending on the invariant properties or structures that belong to it (very much in the same way that physics uses a certain coordinate system, orthogonal, spherical or cylindrical, depending on symmetry features of the system to be described).
This approach can hardly pretend to be original. In fact, Aristotle’s analysis of reality in terms of matter and form already has all the elements of a multi-layered reality. Matter “is a purely relative term, relative to form” (Physica 194b9). “It is the materials of a thing as opposed to the structure that holds them together, the determinable as opposed to the determinant. And the distinction of matter and form may be drawn at many different levels within the concrete thing. In the realm of art, iron, which is the finished product of the smelter, is matter for the founder. And in the realm of nature, the elements, which are the determinate product of prime matter plus the primary contrarieties (hot and cold, dry and fluid), are matter relatively to their simple compounds, the tissues; these again are matter relatively to the organs, and these are matter relatively to the living body. Prime matter, it is to be observed, never exists apart; the elements are the simplest physical things, and within them the distinction of matter and form can only be made by an abstraction of thought” (Ross 1923, p. 73). In this sense, what we are beginning to obtain, after 400 years of science, are the quantitative laws regarding the form as “the plan of structure considered as informing a particular product of nature or of art.”
Agazzi, E. and Cordero, A. (eds.) 1988: Philosophy and the origin and evolution of the universe. Dordrecht: Kluwer.
Ayala, F.J. and Dobzhansky, T. (eds.) 1974: Studies in the Philosophy of Biology. London: Macmillan.
Ayer, A.J. 1936: Language, Truth and Logic. London: Victor Gollancz.
Bateson, Gregory 1972: Steps to an ecology of mind. New York: Ballantine.
Benacerraf, P. and Putnam, H. (eds.) 1964: Philosophy of Mathematics. Cambridge: Cambridge University Press.
Churchland, P.M. and Churchland, P.S. 1990: “Intertheoretic reduction” in Warner and Szubka 1994, pp. 41-60. Originally published in 1990 in The Neurosciences 2, pp. 249-56.
Comer, E. 1988: Internetworking With TCP/IP. Principles, Protocols, and Architecture. New Jersey: Prentice Hall.
Cots, J. 1997: “Com és possible el nou i l’interessant” in A. Estany and D. Quesada (eds.), Actas II Congreso Sociedad de Lógica, Metodología y Filosofía de la Ciencia en España. Bellaterra: UAB.
Cots, J. 1997a: “Què hi ha després de la llei final” in A. Estany and D. Quesada (eds.), Actas II Congreso Sociedad de Lógica, Metodología y Filosofía de la Ciencia en España. Bellaterra: UAB.
Crane, T. and Mellor, D. H. 1990: “There is no Question of Physicalism”, Mind 99, pp. 185-206.
Davidson, Donald 1970: “Mental Events” in his Essays on Actions & Events. Oxford: Clarendon Press, 1980, pp. 207-25. Originally published in 1970 in Experience and Theory, Laurence Foster and J.W. Swanson (eds.), Boston: University of Massachussetts Press, pp. 79-101.
Davidson, D. and Harman, G. (eds.) 1972: Semantics of Natural Language. Dordrecht: Reidel.
Dawkins, R. 1976: The Selfish Gene. Oxford: Oxford University Press.
Dennett, D.C. 1987: The Intentional Stance. Cambridge, Mass.: MIT.
Dennett, D.C. 1990: “Memes and the exploitation of imagination”, The Journal of Aesthetics and Art Criticism, 48, pp. 127-135.
Gunderson, K. 1970: “Asymmetries and mind-body perplexities”, Minnesota Studies in the Philosophy of Science, 4, pp. 273-309.
Harris, M. 1983: Cultural Anthropology. New York: Harper & Row.
Hopfield, J.J. 1982: “Neural networks and physical systems with emergent collective computational abilities”, Proceedings of the National Academy of Sciences, USA, 79, pp. 2554-2558.
Horgan, T. 1984: “Supervenience and Cosmic Hermeneutics”, Southern Journal of Philosophy Supplement, 22, pp. 19-38.
Horgan, T. 1993: “From Supervenience to Superdupervenience: Meeting the Demands of a Material World”, Mind, 102, pp. 555-86.
Horgan, T. 1994: “Nonreductive materialism” in Warner and Szubka 1994, pp. 236-41.
Jackson, F. 1986: “What Mary Didn’t Know”, Journal of Philosophy, 83, pp. 291-5.
Kim, J. 1989: “The Myth of Nonreductive Materialism” in Warner and Szubka 1994, pp. 242-59. Originally published in Proceedings and Addresses of the American Philosophical Association, 63 (3), pp. 31-47.
Kittel, C. 1970: Introduction to Solid State Physics. New York: John Wiley.
Kripke, S. 1972: “Naming and Necessity” in Davidson and Harman 1972, pp. 253-355.
Margulis, L. and Sagan, D. 1995: What Is Life? New York: Nevraumont.
McClelland, J. and Rumelhart, D. 1987: Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, Mass.: Bradford.
McGinn, Colin 1989: “Can We Solve the Mind-Body Problem?”, Mind, 98, pp. 349-66.
McGinn, Colin 1991: “Conceptual Causation: Some Elementary Reflections”, Mind, 100, pp. 573-86.
Medawar, P. 1974: “A Geometric Model of Reduction and Emergence” in Ayala and Dobzhansky 1974, pp. 57-63.
Nagel, Ernest 1961: The Structure of Science. London: Routledge and Kegan Paul; reprinted in 1974.
Nagel, T. 1974: “What Is It Like to Be a Bat?”, Philosophical Review, 83, pp. 435-50.
Owens, D. 1989: “Levels of Explanation”, Mind, 98, pp. 59-79.
Piaget, J. 1963: La construction du réel chez l’enfant. Neuchâtel: Delachaux & Niestlé.
Popper, K. 1972: Objective Knowledge. An Evolutionary approach. Oxford: Oxford University Press.
Popper, K. and Eccles, J. 1977: The Self and its Brain. Berlin: Springer.
Poincaré, H. 1964: “On the nature of mathematical reasoning” in Benacerraf and Putnam 1964, pp. 394-402.
Putnam, H. 1967: “The nature of mental states” in his Philosophical Papers, 2. Cambridge: Cambridge University Press, 1975.
Ross, David 1923: Aristotle. New York: Methuen; reprinted in 1985.
Searle, J. 1992: The Rediscovery of the Mind. Cambridge, Mass.: Bradford/MIT, pp. 1-26.
Silbernagl, S. and Despopoulos, A. 1978: Taschenatlas der Physiologie. Stuttgart: Georg Thieme.
Tolman, R. 1938: The Principles of Statistical Mechanics, Oxford: Oxford University Press. Reprinted in 1980, New York: Dover.
Toynbee, A. and Somervell, D.C. 1960: A Study of History: Abridgement. Oxford: Oxford University Press.
Varela, F. and Dupuy, J. (eds.) 1991: Understanding Origins: contemporary views on the origin of life, mind, and society. Dordrecht: Kluwer.
Warner, R. and Szubka, T. (eds.) 1994: The Mind-Body Problem. Oxford: Blackwell.
Ham. Do you see yonder cloud that’s almost in shape of a camel?
Pol. By the mass, and ’tis like a camel indeed.
Ham. Methinks it is like a weasel.
Pol. It is backed like a weasel.
Ham. Or like a whale?
Pol. Very like a whale.
Hamlet [III. ii. (400)]
Relation R is agglomerative if and only if, where A bears R to B and C bears R to D, A and C bear R to B and D (Owens 1989, p. 69).
Relation R is transitive if and only if, where A bears R to B and B bears R to C, A bears R to C (Owens 1989, p. 70).
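Owens’s two properties can be made concrete for finite relations. The following is a minimal sketch of my own, not from Owens: aggregates are modelled as frozensets, and the mereological sum of two aggregates is taken to be their set union.

```python
# Illustration (my own, hypothetical): finite relations over "aggregates"
# modelled as frozensets; the sum of two aggregates is their union.

def is_agglomerative(r):
    """R is agglomerative iff A R B and C R D imply (A + C) R (B + D)."""
    return all((a | c, b | d) in r for a, b in r for c, d in r)

def is_transitive(r):
    """R is transitive iff A R B and B R C imply A R C."""
    return all((a, d) in r for a, b in r for c, d in r if b == c)

A, B, C, D = (frozenset({i}) for i in (1, 2, 3, 4))
R = {(A, B), (C, D), (A | C, B | D)}
print(is_agglomerative(R))              # True: the sum pair is in R
print(is_transitive({(A, B), (B, C)}))  # False: (A, C) is missing
```

The second call shows how the two properties come apart: a relation can hold between wholes built from its relata (agglomeration) while failing to chain through intermediate terms (transitivity), which is the pattern the paper attributes to inter-level relations under multiple realizability.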
“We now introduce the hypothesis of equal a priori probabilities for different regions in the phase space that correspond to extensions of the same magnitude. By this we mean that the phase point for a given system is just as likely to be in one region of the phase space as in any other region of the same extent which corresponds equally well with what knowledge we do have as to the condition of the system” (Tolman 1938, p. 60).
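Tolman’s postulate can be given a toy numerical reading. The sketch below is my own illustration, not Tolman’s: phase points are sampled uniformly over a unit square standing in for phase space, and two disjoint regions of equal volume are found to receive (approximately) equal probability.

```python
# Toy illustration (hypothetical) of equal a priori probabilities:
# uniformly sampled phase points fall in any two equal-volume regions
# with approximately equal frequency.
import random

random.seed(0)
N = 100_000
in_r1 = in_r2 = 0
for _ in range(N):
    q, p = random.random(), random.random()  # position, momentum
    if q < 0.1:                  # region 1: a slab of volume 0.1
        in_r1 += 1
    if 0.45 <= p < 0.55:         # region 2: a different slab, volume 0.1
        in_r2 += 1
print(in_r1 / N, in_r2 / N)      # both close to 0.1
```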
The uniformity of the system is sometimes expressed as periodic boundary conditions. Instead of fixing the values of magnitudes at the limits of the system, it is assumed that the same values are repeated over a certain distance throughout the system (cf. Kittel 1970, chap. 6).
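The idea of periodic boundary conditions can be sketched in a few lines. This is a hypothetical illustration (the lattice and the field values are my own, not from Kittel): on a one-dimensional lattice of N sites, site i and site i + N are identified, so the chain closes on itself instead of having fixed edges.

```python
# Hypothetical sketch of periodic boundary conditions on a 1-D lattice:
# values repeat with period N, and the end of the chain wraps to the start.
N = 8
values = [i * i % 5 for i in range(N)]  # an arbitrary field on the lattice

def value_at(i):
    """The field repeats with period N: v(i + N) == v(i)."""
    return values[i % N]

def neighbors(i):
    """Left/right neighbors; site 0 and site N-1 are adjacent."""
    return ((i - 1) % N, (i + 1) % N)

print(value_at(3) == value_at(3 + N))   # True: periodicity
print(neighbors(0))                     # (7, 1): the chain wraps around
```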
To avoid a possible misunderstanding, I want to remark that I have spent years struggling hard with those “simple” systems. This has led me i) to appreciate the creative genius of the great scientists who have contributed to the development of physics and ii) to admire the physical universe, in a sense similar to that in which Kant said that the starry heaven above him filled his mind with ever-increasing wonder.
“There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy” [Hamlet, I. v. 166]
Bateson, in Steps to an Ecology of Mind (1972, p. 154, p. 339, p. 483), insisted on taking into account, from the point of view of cybernetics, relations with context and environment, dealing in a very general way with a variety of cases such as the evolution of species or mental pathologies such as schizophrenia.
Cf. The Self and its Brain, p. 17; Objective Knowledge, chap. IV.