It is desirable to be able to say what causality “is.” The idea is not to have to say, “Here is Event A, and abracadabra, here is Event B.” We want to describe what is happening in-between.
But that is becoming increasingly difficult to do, at least in keeping with traditional notions of causality. That is because science is more and more revealing a world made of random events. Light emanates randomly in every direction, and electrons move randomly, both inside wires and inside atoms. Chemical reactions occur via the random collisions of molecules, and many events inside cells happen through random movements. Even neo-Darwinian evolution proceeds via random encounters. And thermodynamics, in its more technical form, goes by the name “statistical mechanics.”
So where does that leave causality? At least as traditionally construed, causality is the opposite of randomness. To be caused is to be somehow determined by a prior event, but to be random is to be not predetermined in any predictable fashion. So at least at first glance, causality would seem to be left out of science when it is depicting light, electricity, chemistry, molecular biology, evolution, and thermodynamics (to name only a few). All that science can do, it would seem, is to describe the random phenomena with probabilities.
Yet clearly, science does more with its predictions than merely describe random behavior. And a quick glance at the actual equations shows that most of them do not contain probabilities. Science seems to have figured out a way to deal with randomness in a manner that still enables it to make definite, reliable, repeatable (not random!) predictions.
And how it does that is the subject of this book. There exists a second method for dealing with random behavior, besides using probabilities. And this second approach is employed very extensively in science.
So the book will describe this second method, and then it will explore where causality fits in with that.
At first, it might seem like an oxymoron. How can science make repeatable descriptions about random behavior?
But it works. The secret is to realize that random events do not occur in isolation—if they did, they would indeed remain unpredictable. Instead, it is possible to draw back and observe how the random events fit together with other circumstances to make an overall picture. And this overall picture has properties that science can profitably describe and, more to the point, use to make predictions. It is like knowing an election result without having to know how each person voted. Predictions can be made just from seeing the overall picture, while remaining ignorant of the individual particulars (which might be random).
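The election analogy can be made concrete with a small simulation. The following sketch (in Python, offered purely for illustration; the book itself contains no code) shows how votes that are individually random nonetheless add up to an overall result that is stable and predictable:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def single_vote():
    """One voter's choice: individually random, leaning 55% toward candidate A."""
    return 1 if random.random() < 0.55 else 0

# No single vote can be predicted, yet the overall tally is stable:
# with many voters, candidate A's share reliably lands near 0.55.
votes = [single_vote() for _ in range(100_000)]
share_for_a = sum(votes) / len(votes)

print(f"Candidate A's share: {share_for_a:.3f}")
```

The individual calls to `single_vote()` are the "random events"; the share is the "overall picture." Describing the share requires no knowledge of any particular vote.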
Mathematics, with its probabilities, is a pure idealized system where it still works to treat each event as existing in isolation. But science is about the practical world where it is indeed relevant how real physical actions fit together.
So probabilities work well for treating an event all by itself, but for making more exact predictions about an event in the context of its surroundings, science is able to employ these other approaches.
The book describes very specifically how these other approaches are used in science and offers many examples. And it looks at how causality (ostensibly the opposite of randomness) can still fit in with these other approaches. Indeed, the traditional view of causality will start to seem considerably inadequate.
A chapter-by-chapter description of the book is as follows:
The first chapter introduces how science deals with random actions in a way that does not employ probabilities (although science still uses probabilities, too). The chapter describes these non-probabilistic methodologies by comparing them with causal chaining, which is one of the traditional ways of thinking about causality. That comparison brings ideas of causality into the discussion from the very beginning. But in so doing, the chapter makes the case that science does not usually proceed by tracing such chains since, in order to work, the chains require knowing what each individual is doing, which is not possible with random events. Thus these alternative methodologies are alternatives not just to using probabilities but to traditional notions of how to understand causality.
Then the second chapter discusses causality itself more explicitly. The chapter begins by making the case that the arrangement of a situation plays a role in how a motion can occur. And because of that, causality (a type of motion) is likewise dependent on the current arrangement of the setup. But that gives causality different characteristics than in the classical view, where it is just about past events inexorably determining the next and the next regardless of the arrangement of the present circumstances. Three of the ways in which causality is different by being tempered by the current—not just past—circumstances are the following: One is that, by depending on the present arrangement of a situation, causality can come and go as those circumstances themselves change. Two is that, accordingly, causality is more complicated than being an infinite chain of past events determining what happens next. And three is that this dependence on the arrangement of things establishes how events—including random events—can act as per being organized into other events, rather than acting in isolation (as with probabilities). Thus all told, the chapter suggests that causality is “domain-specific,” meaning that it appears only under certain conditions. So science describes the setup of circumstances as part of describing the causality.
Then the third chapter examines how the traditional view of causality (how it is said to be about chains of one event giving rise to the next) amounts to “linear thinking.” It is to see events as happening only in a line of one thing leading to another. But the chapter also examines how our minds are capable of “expansive thinking.” It is like hitting a ball and simultaneously seeing that as happening in a game. In other words, it is possible to see causality simultaneously in two different ways. One way is as a sequence of immediate events, and the second is as it occurs in a larger picture of how things fit together. By understanding one view “in terms of” the other, it is possible to go beyond seeing causality as merely constituting linear thinking. That is important because linear thinking does not match up well—or bode well—with a world full of random events.
And the fourth chapter, a brief one, applies all of the above to quantum mechanics, with its famous probabilities.
Then the fifth chapter takes a different tack from the others and examines causality from the perspective of how it is frequently viewed in biology. That is to define causality in opposition to teleology. In other words, causality can be understood as being about past events influencing the present, as opposed to the future influencing the present in the form of fate or predestination. That is a looser, more easygoing definition of causality. (It is harder to find something in it to quibble with.) But it becomes an issue, for instance, in explaining how body organs have functions. Clearly, they do have functions (the function of the heart is to pump the blood), yet a function is a purpose, and purposes are teleological. So this chapter applies the expansive approach used so far in the book to address that controversy.
And the sixth chapter delves into some of the problems of trying to define causality at all. That includes examining Norton’s argument that it is so “plastic” a notion as to be anything we want, and so it is just a “folk science.” But that is followed by consideration of Ben-Menahem’s answer that causality is a cluster concept. That would mean that causality has many qualities, not all of which have to apply in every instance. (But that can explain why causality can seem so plastic.) The chapter ends with my own suggestion that causality should have its own principle of complementarity, akin to Bohr’s principle of complementarity about light. This principle would state that it is okay to use many definitions of causality as long as we do not mix the elements of each usage, since that can lead to error. Examples of the different usages of a causal statement include using it for making syllogisms; using it as in probability studies; and using it to trace a chain of particular events. All of these usages can be valid as long as elements of each approach are not mixed.
Chapter Seven examines the current surging interest in Pearl’s “causal graphs.” At least in the artificial intelligence community, they are making “a causal revolution” (so a book like this one on causality must take them into consideration). Pearl argues that artificial intelligence is doomed to failure until it incorporates causality into its algorithms. But how? Pearl argues that the way to handle random actions is not just with probabilities but with probabilities infused with the causal notions of manipulation and counterfactuals (notions associated with the philosophers Woodward and Lewis, respectively). He argues against the well-known adage that “correlation is not causation.” This chapter introduces how the graphs work, but it also offers some constructive criticism. I agree with Pearl that probabilities alone are insufficient for achieving knowledge. But I have been suggesting a different remedy, that of the expansive methods used by science.
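To give a flavor of the kind of reasoning Pearl's graphs support, here is a toy sketch (my own illustration in Python, not Pearl's software, with made-up numbers) of a graph in which a confounder Z influences both a treatment X and an outcome Y. Pearl's backdoor adjustment estimates the effect of *setting* X, written P(Y | do(X)), by averaging over Z, and this generally differs from merely observing X:

```python
# Toy causal graph: Z -> X, Z -> Y, and X -> Y (Z confounds X and Y).
# All probabilities below are invented for illustration.

P_z = {0: 0.5, 1: 0.5}                      # P(Z)
P_x1_given_z = {0: 0.2, 1: 0.8}             # P(X=1 | Z)
P_y1_given_xz = {                           # P(Y=1 | X, Z)
    (0, 0): 0.1, (0, 1): 0.5,
    (1, 0): 0.3, (1, 1): 0.7,
}

def p_y1_do_x(x):
    """Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) * P(z)."""
    return sum(P_y1_given_xz[(x, z)] * P_z[z] for z in P_z)

def p_y1_given_x(x):
    """Ordinary conditioning: P(Y=1 | X=x), which the confounder Z biases."""
    def p_x_given_z(z):
        return P_x1_given_z[z] if x == 1 else 1 - P_x1_given_z[z]
    p_x = sum(p_x_given_z(z) * P_z[z] for z in P_z)
    joint = sum(P_y1_given_xz[(x, z)] * p_x_given_z(z) * P_z[z] for z in P_z)
    return joint / p_x

# Intervening and observing give different answers: correlation != causation.
print(p_y1_do_x(1), p_y1_given_x(1))
```

With these numbers, conditioning on X=1 overstates the effect (0.62 versus 0.50 under intervention) because X=1 tends to occur when Z=1, and Z independently raises Y. The graph is what licenses the adjustment.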
And Chapter Eight proposes an alternative to the Chinese room method for recognizing intelligence in a computer. The lesson from the Chinese room thought experiment is that it is not possible, while talking through a door into another room, to tell the difference between what is on the other side being a simulation of intelligence or the real thing. But the same is true even of a human being. We cannot get into someone else’s head to know what is actually happening in there. So what happens if we lower our expectations and, instead of demanding intelligence as our goal, we settle for something that is measurable from the outside? The chapter explores how that might happen.
The ninth chapter serves as a cautionary tale and emphasizes the difference between AI (artificial intelligence) and Big Data (analytics). Although many of the same techniques are used, the latter is a commercialization of the former and not always put to good purposes.
Then the tenth chapter explores the lessons that might be learned from the various false starts that AI has made in trying to emulate intelligence. It is possible to think of AI as trying to “model” various theories of knowledge, and when those attempts prove successful or not, that tends to confirm or disconfirm the theories. So this chapter provides a brief history of those AI attempts, with an eye on what they say for ontology and epistemology. In the end, the chapter reprises how science seems to have its own principle of individuation, as first suggested in Chapter One. The experience of AI tends to confirm that principle.
And finally, the last chapter brings together some major points from my past books and puts them with major points made here, in order to make a more comprehensive story of how to have an expansive view of causality. That includes a consideration of complexity studies and information theory.
Each chapter can be read as a stand-alone paper. The book is written that way so as to emphasize how each chapter has its own unique starting point and yet still comes to the same understanding of causality. Accordingly, all but the shortest chapters have an Introduction and a Conclusion. What they each show is the role played by arrangement in how things move and change. And that includes change that is causal change. Causality can fit into a world that is full of random actions because those random actions can act, not just in isolation as in probabilities, but as per their organization within arrangements.