June 2013
Lecture by Victoria N. Alexander, Director, Dactyl Foundation
International Biosemiotics Studies, 13th Annual Gathering, Castiglioncello, Italy.
(numbers refer to slides)
CHANCE. What is it? What’s it got to do with the idea that interpretations are not determined by physical laws?
This question has a long history. But let’s start with Peirce, then I’ll move on to more recent theories, particularly the complexity sciences and Terrence Deacon’s work. At the end, my analysis of chance as cause will take us back to some version of Aristotle’s four causes—with all the Christian influence carefully removed.
5 To Aristotle’s four causes, we also add the notion that there are different kinds of selection, which include selection for prevalent type; selection for formal characteristics; selection for differential reproduction; selection for expected effects. These different kinds of selection introduce biases, which can accumulate and affect outcomes.
I think we can get ourselves into trouble if we assert that interpretations are indeterminate, or say that living systems have a choice of representations, without unpacking what we might mean by that.
6 In “The Doctrine of Necessity Examined,”—as well as in other writings—Peirce seeks to falsify the theory that “minds are part of the physical world in such a sense that the laws of mechanics determine everything that happens according to immutable attractions and repulsions.”
How does he argue this?
7 By “admitting pure spontaneity or life as a character of the universe, acting always and everywhere though restrained within narrow bounds by law, producing infinitesimal departures from law continually, and great ones with infinite infrequency.” Peirce believed free will, like law, emerged from chance.
What is chance or spontaneity to Peirce? In “Necessity Examined,” it is described as a lack of identity between things categorized as virtually the same. Peirce often uses “chance” and “spontaneity” interchangeably to refer to an unlawfulness inherent in particularity.
8 He notes “‘if A, then B,’ means nothing with reference to a single case” (147). He also says, “When we come to atoms, the presumption in favor of a simple law seems very slender. There is room for serious doubt whether the fundamental laws of mechanics hold good for single atoms…” (288). In “Necessity Examined” Peirce reasons that the tendency of finer and finer measurements to yield more and more unpredictable results indicates that the “lawful” regularity of things (e.g. atoms, molecules) is the probabilistic outcome of large sample sizes. One should expect, therefore, that the more numerous the irregular parts contributing to a sum, the better the predictability. Irregularities sum up; they average out. Due to the fundamentally irregular nature of matter and differences of scale, chemistry would be inherently more regular than biology, and the degree of irregularity in biology would be ontological, not a product of measurement error, according to Peirce.
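Peirce’s point that irregularities average out can be given a quick numerical sketch (my illustration, not Peirce’s): each part behaves unpredictably, yet the spread of their aggregate shrinks as the number of parts grows.

```python
import random
import statistics

def spread_of_aggregate(n_parts, samples=3000, seed=2):
    """Standard deviation of the mean of n_parts irregular parts.
    Each part is pure noise, but the aggregate is 'lawful': its
    spread shrinks roughly as 1/sqrt(n_parts)."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.random() for _ in range(n_parts))
        for _ in range(samples)
    ]
    return statistics.stdev(means)

# A single "atom" is highly unpredictable; an aggregate of 100 is
# roughly ten times more regular, as the law of large numbers predicts.
print(spread_of_aggregate(1), spread_of_aggregate(100))
```

On this picture, chemistry’s regularity is simply what biology’s irregularity looks like at a much larger sample size.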
With the discovery of the quantum mechanical world, Peirce’s probabilistic view of causality is somewhat vindicated. The theoretical physicist Lee Smolin credits Peirce with being the first modern thinker to realize that laws evolve.
So if the fundamental nature of reality is probabilistic and if primordial irregularity really does seep through to the macro-world—what then? If the strict mechanistic hypothesis were the only thing standing in the way of the argument for “free will,” does its removal mean that living organisms are capable of making interpretive choices?
9 Or does this merely make interpretation the product of clockwork with a few loose gears? Some popular postmodern readings of Peirce make all actions somewhat indeterminate, leaving our would-be wills ruled by chance rather than by law.
But this is not as Peirce would have it.
10 His chance is “in the form of a spontaneity which is to some degree regular” (310). This is the salient point. It is not the idea of “pure” spontaneity but the idea of a “to some degree regular” spontaneity that is the most insightful part of Peirce’s theory of the origins of self-determination and semiotic freedom.
This is how I summarize Peirce’s theory:
11 If we have irregularity and mechanical disequilibrium—everything in flux—there is a potential for a difference to make a difference. Through relations of similarity and contiguity, interactions of irregular parts can bias the tendencies of the system. As far as I know, Newtonian equations don’t include variables for similarity or contiguity. We shouldn’t expect them to work very well to predict the behavior of systems where such biases exist.
I think Terrence Deacon has presented the clearest elaboration of this idea in his work.
12 Here we have his autocell, aka autogen, thought experiment, in which the similarity of shape and close proximity of molecules result in an autocell falling together. Very roughly, first there is an autocatalytic reaction, the by-products of which tend to crystallize into a tube-like structure, which can close up around loose floating molecules of the kind used in the autocatalytic process. If a tube happens to be built in such a way that it doesn’t break open unless there are lots of the right molecules for the reaction floating around, then this will serve the purpose of maintaining the process. Such a tube will become more prevalent. The autogen thus makes its own luck. No efficient cause needed here. No force. Just constrained probabilities.
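The point about constrained probabilities can be caricatured in a few lines of code. This is my crude toy, not Deacon’s actual model: a tube that only opens when substrate happens to be abundant out-survives one that opens indiscriminately, with no force doing any selecting.

```python
import random

def survival_rate(opens_only_when_rich, trials=10000, seed=1):
    """Toy sketch (not Deacon's model): fraction of moments at which a
    'tube' either stays intact or reopens amid enough substrate for its
    autocatalytic cycle to resume."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        concentration = rng.random()  # fluctuating local substrate level
        if opens_only_when_rich and concentration < 0.8:
            survived += 1             # stays closed, stays intact
        elif concentration > 0.8:
            survived += 1             # opens amid plenty; cycle restarts
    return survived / trials

# The selective tube "makes its own luck"; the indiscriminate one
# usually spills its contents when substrate is scarce.
print(survival_rate(True), survival_rate(False))
```

Nothing pushes the selective tube to prevail; it prevails because the probabilities of its persistence are constrained differently.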
Deacon notes that it is not necessary to assume any “quantum strangeness” to explain the emergence of a self-preserving structure. But he doesn’t go back to cosmological origins, as Peirce does. Whatever the case may be with radical indeterminacy or quantum strangeness,
13 chance is not a-causal. Although it is often seen as the opposite of law,
14 chance and law are two sides of the same coin.
15 The evolution of causality might follow this kind of hierarchy. Material and efficient causality are predictable. With formal causality, if you know the rules, you can sort of predict the outcome; for example, you know roughly the kinds of shapes that a snowflake or a star system can take. Final cause selections aren’t very predictable. It takes an abductive leap to guess how this structure or that condition might turn out to be useful in some circumstances.
Deacon refers to the constraints in self-organizing systems and teleological processes as an “absence” because the process involves leaving some possibilities out to favor others. Based on this idea, he refers to a whole,
16 as a hole. I love a good pun—probably more than I should—but I don’t think a whole is a hole.
17 What Deacon calls an “absence,” I would call a semiotic object. Last year in my talk I argued that the ultimate object of any sign relation is the objective of maintaining that interpretive response. It is not an absence that constrains complex processes but an “emergent habitual pattern,” a whole (with a w), which cannot be precisely quantified as a whole and which doesn’t exist as a static thing. But we know that new holistic constraints have emerged because we see indications of them in the additional limits on the behavior of the parts of the whole. I do think wholes exist; I don’t think they are absent: they just aren’t particulars. They are dynamically stable patterns.
18 When we consider wholes in this way, as emergent habitual patterns, we can address the question of how interpretations can be indeterminate—or rather, perhaps, underdetermined, so as not to confuse what we mean here with something like quantum indeterminacy or a-causal processes.
A habit is an emergent fluid pattern of behavior that never follows quite the same algorithm.
19 Never the same if x then y. The algorithm of an emergent pattern is something like if x-ishness then y-ishness. X and y are general types, not particulars. In a mechanistic view, variables are more precisely defined, I believe, and are based on particulars. General types, as causal factors, emerged with formal cause and final cause phenomena. A habit is a general representation of past experience, in which a response served a function. Functions are general; they must preserve the habit, but they can do so in any way. Getting food, or pursuing any goal, can be accomplished in a number of ways. Habits are not specifically defined.
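To make the contrast concrete, here is a toy sketch (mine, not from the talk) of the difference between a mechanistic “if x then y” and a habit’s “if x-ishness then y-ishness”: the habit fires on anything sufficiently similar to a general type, which is also what makes misreading possible.

```python
def mechanistic_rule(x):
    """An 'if x then y' law: responds only to one exact particular."""
    return "y" if x == "x" else None

def habitual_rule(stimulus, prototype="x", threshold=0.5):
    """An 'if x-ishness then y-ishness' habit: responds to anything
    sufficiently similar to a general type.  The similarity measure
    here (fraction of shared characters) is a crude stand-in for
    whatever resemblance an interpreting system is sensitive to."""
    shared = len(set(stimulus) & set(prototype))
    total = len(set(stimulus) | set(prototype))
    similarity = shared / total if total else 0.0
    return "y-ish response" if similarity >= threshold else None

# The mechanistic rule fires only on the exact particular:
assert mechanistic_rule("x") == "y"
assert mechanistic_rule("x1") is None

# The habit fires on anything x-ish enough -- including a mistaken
# reading, which is what makes misinterpretation (and discovery) possible:
assert habitual_rule("x") == "y-ish response"
assert habitual_rule("xq", threshold=0.4) == "y-ish response"
```

A molecule never “matches loosely”; a habit does nothing else.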
20 They say Generals are always fighting the last war, and so it is with habits too. Habits are determined by the previous exercises of the habit, which reinforced it.
If the system reads the sign right, that response pattern will be reconfirmed. But the reading could be wrong.
21 In living systems, cause and effect are decoupled because the sign-vehicle and the response are decoupled in the sense that what appears to be a sign of an objective may not confirm that response, that habit. Sign reading is based on similarity, not identity. In contrast, the mechanistic laws of physics are never wrong, I suppose. Chemical reactions occur in lawful ways because a molecule doesn’t mistake one molecule for another. Habitual responses can and do mistake things for signs that are not really signs.
22 “A man of genius makes no mistakes. His errors are volitional and are the portals of discovery.” James Joyce. Joyce says volition, free will, is based on the genius of mistake. This is related to the idea that we make our own luck and in so doing we are self-determined. We don’t have an executive making choices. We sort of fall into the right choices by means of our biases.
In what sense does a cell have a choice of behaviors? I do not suppose that a cell possesses alternative self-representations (its internal states) from which it can choose in a response. How would such a choice be executed, without a homunculus executor?
Even when we are using reason to make a decision, I think our neuronal patterns fall into that decision, biased by past thoughts and experiences. Of course, consciousness, self-reflection, and human language all provide additional constraints, but I don’t think a homunculus ever emerges from this. Terrence Deacon argues that a core constraint, a homunculus of sorts, does emerge in the end in the form of consciousness. He is trained in neuroscience, so it’s probably best if you believe him and ignore what I just said.
24 I explain choice differently.
Insofar as a semiotic system can change the probability of fortuitous states existing (internal and external states), then the system has constructed its own ability to be adaptable—to respond and go in directions that may be counter to the prevailing conditions.
A semiotic system, an organism, is self-caused because it makes its own luck.
In conclusion: there are other things I could say about chance that I have left out. I’ll mention one omission that seems most obvious.
25 Any non-linear system, even a simple one like a three-body system—sun, moon, and earth—becomes unpredictable after some period of time, and this is apparently not due to measurement error but to the dynamics. Small perturbations can have effects that are disproportionate to their causes. Organisms depend on many, much more complex, nonlinear interactions. How might this affect interpretation? Do these mechanisms produce deterministic chaos, which is available for adaptability, flexibility?
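The sensitivity in question can be shown with the simplest chaotic system; here is a minimal sketch (using the logistic map rather than a three-body simulation) in which two trajectories starting a billionth apart end up macroscopically different, with no randomness anywhere in the rule.

```python
def logistic_step(x, r=4.0):
    """One step of the logistic map, a minimal deterministic chaotic
    system (r=4.0 is the fully chaotic regime)."""
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

# Two starting points differing by one part in a billion:
a = trajectory(0.123456789)
b = trajectory(0.123456790)
divergence = [abs(x - y) for x, y in zip(a, b)]

# Deterministic throughout, yet the tiny initial difference is
# amplified until the two histories are macroscopically different.
assert divergence[0] < 1e-8
assert max(divergence[30:]) > 0.1
```

The unpredictability lies in the dynamics themselves, not in any noise fed into them.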
26 Thank you for listening and thank you Franco for organizing the gathering.
A final note: the term “chance” is used to describe many different things. I don’t think I have come close to giving a full biosemiotic definition of chance. If we ever get around to putting together a glossary of terms for biosemioticians, I think chance should get several entries and will be as important as the entry defining the sign.