# INDEPENDENCE: On the Importance of NOT Being Connected

My father used to have a maxim: Never eat ice cream during a sporting event. It will cool off your team.

He was only half joking—call it a superstition—because of course he realized that an observer eating ice cream does not really affect the actions of the players. One event is independent of the others. Everything is not connected. (But he still didn’t eat the ice cream.)

As for me, as a kid growing up, I wasn’t above responding with a maxim of my own. During a sporting event, always eat your favorite forbidden candy. It will rev up your team.

After all, I’d argue facetiously, we are all connected by (say) all living on earth together. And we are all made of the same atoms. Who could tell what hidden connections might thereby be possible?

Ironically, the overall lesson seemed to be to ask what in particular was meant by “everything” when we heard that everything was connected. Could it be that only some things were connected, or only connected in a certain sense?

So, that is my subject now: Do such concerns occur in science? And obviously they do. Finding actions to be unrelated can be as important as recognizing how they are related. For instance, it can be found that close body contact can spread influenza but does not spread cancer. And that has major significance.

Accordingly, I will be discussing independence—not being connected—as it is utilized in science.

Specifically, I’m eventually going to discuss the first law of thermodynamics. It is common to talk about the second law and how it can be restated in at least eight different ways, each with its own counterintuitive meaning. But the first law often gets a pass in that regard, even though it has equally important restatements of its own . . . for instance, about independence.

Let’s take a look.

the pendulum

Let’s start things off with a famous example of independence from early modern physics: the pendulum.

Galileo found that the time required for one complete swing of a pendulum is independent of the mass of the thing being swung. A heavy bob and a light bob on the same length of string take the same amount of time to complete a cycle. And that enabled great scientific progress in the form of clocks using a pendulum to keep regular time.
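We can even check this independence numerically. Here is a small Python sketch (the 5-degree starting swing and the integration step are arbitrary choices of mine) that simulates a pendulum directly from its equation of motion; the mass appears on both sides of Newton’s law and cancels, so it never enters the calculation:

```python
import math

def pendulum_period(length_m, mass_kg, g=9.81, dt=1e-4):
    """Estimate the period of a pendulum by step-by-step simulation.

    Newton's law for a pendulum reads m * L * theta'' = -m * g * sin(theta).
    The mass m cancels from both sides, so mass_kg is never used below --
    Galileo's independence, visible right in the equation.
    """
    theta, omega, t = math.radians(5.0), 0.0, 0.0
    flips, prev_sign = 0, -1  # omega first swings negative
    while flips < 2:  # omega changes sign twice per full period
        omega += -(g / length_m) * math.sin(theta) * dt
        theta += omega * dt
        t += dt
        sign = 1 if omega > 0 else -1
        if sign != prev_sign:
            flips += 1
            prev_sign = sign
    return t
```

Running this with a 0.1 kg bob and a 100 kg bob on the same 1-meter string returns the very same period, close to the textbook small-angle value of about 2 seconds.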

Independence is a legitimate phenomenon of the physical world, and it belongs in scientific thinking (and has been there from the very beginning).

the problem

Now consider the following problem, which is what we want to explain. It has to do with our sense of causality.

If Johnny sneezes on Suzie, she might later conclude that the sneezing “caused” her to catch his cold, and that would be correct. Yet there is nothing deterministic about it. She doesn’t “have to” inexorably catch his cold by being sneezed on. The random spread of infected particles through the air can have other outcomes than transmission of the disease. But still, causality is usually portrayed as a sequence of one event giving inevitable rise to the next in a deterministic manner. So, how do we make sense of this kind of causation that isn’t automatic—it only “might” happen—given that causality is usually described as “inexorable” and “inescapable”?
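The situation can be put in miniature with a few lines of Python. The 30 percent transmission chance below is a number I made up purely for illustration; the point is that any single exposure only might transmit the cold, while the frequency across many exposures is stable and measurable:

```python
import random

def count_transmissions(n_exposures, p_transmit=0.3, seed=0):
    """Monte Carlo sketch: each exposure is a weighted coin flip.

    The sneeze only *might* transmit the cold -- no single outcome
    is determined -- yet across many exposures the transmission rate
    settles reliably near p_transmit.
    """
    rng = random.Random(seed)
    return sum(rng.random() < p_transmit for _ in range(n_exposures))

# One exposure is unpredictable; 100,000 exposures land near 30 percent.
```

No individual outcome is inexorable, yet the macro-scale rate is as regular as anything in science.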

Well, a lot depends on which of the 20+ definitions of causality we use. But I will here address the issue in terms of my own preference, the thermodynamic understanding of causality. I have described some of the other definitions in my earlier essay “Is Causality a Cluster Kind?” That is the suggestion by some philosophers that causality is a cluster of qualities not all of which have to apply in every case. That alleviates the need to explain the vast number of contradictions that result from using any single definition. But I think I’ll stick with a thermodynamic view myself.

other examples of the problem

Let’s consider a few more practical examples of the issue to be explained. A chemist mixes two solutions together and says that the mixing “causes” the molecules to react, which it does. But she also intuitively knows that this means the molecules will go through a period of random mixing where they combine, fall apart, and recombine in new ways until finally a macro-scale overall outcome is achieved. This process includes random behavior, and so it is not about one event leading inexorably to the next and the next to compel an automatic outcome.

And a naturalist might say that evolution occurs by the survival of the fittest, which is true in the bigger macro picture, but the actual interactions occur via random individual encounters where there are winners and losers. The macro-scale result, which is the evolution of novelties, occurs via these random encounters.

And an electronics engineer realizes that the overall flow of electrons in a circuit is following the path of the circuit even as each electron is individually moving randomly. It is just collectively that they have an overall direction to their flow. The same is true of the blood cells in our blood vessels.

Accordingly, it seems wrong to say that in these examples the overall outcome is occurring via one event leading inexorably to the next and the next, in a direct chain of one event giving rise to another. It is not about what “has to” happen because of some prior event making the next one happen. Instead, it is full of random actions.

Still, if we choose to look only at the macro results—if, for instance, we look only at the chemist adding together two solutions and seeing a new color in the result—then we might indeed use the word “cause” to describe why the colors changed. The mixing caused the color change.

Causality is usually said to be about “lines” or “chains” of one event giving rise to what happens next, yet these examples show how these “lines” are interrupted by a confusion of randomness. More than that, in many cases the random actions seem to be required in order to bring about the macro result. That is what I want to discuss here.

I will start with the question: Where did all of this randomness come from?

the first law

A good way of visualizing the issue is with the example of a gas (air) in a container. That the air molecules are indeed moving randomly on their own—not just moving randomly as a result of random collisions with other molecules—is clear from observing a balloon. If the air molecules moved only via a succession of causal impacts, then, in the presence of the earth’s gravity, they would pool at the bottom of the balloon. But instead, what is observed is that the gas fills the entire volume of the container, which can happen because each molecule moves via its own internal energy, which empowers it with random actions. And these random actions fill the balloon, in spite of gravity.

Of course, we can always fudge our shifting definitions—a common complaint about causality—and say that the random actions “caused” the filling of the container. But randomness is usually thought of as the opposite of deterministic causality, a convention that I will hold with here.

So, what we have now is that the molecules, while moving randomly on the micro-scale, still altogether have macro-scale properties in the balloon that we can measure, such as the pressure and temperature of the air. We can even make equations (the gas laws) relating pressure and the temperature. Thus, that the molecules are moving randomly does not prevent us from making such macro-scale equations. It is not necessary to have the micro-level actions be causal in order to get macro-scale measurable results.
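This micro-to-macro bridge can itself be sketched in a few lines of Python. Every molecular velocity below is drawn at random (the molecular mass is roughly that of a nitrogen molecule, and the particle count is an arbitrary choice of mine), yet the pressure computed the micro way, from kinetic theory, lands on the macro ideal gas law:

```python
import random

def gas_pressure_two_ways(n=200_000, temp_K=300.0, volume_m3=1.0,
                          mass_kg=4.65e-26, seed=0):
    """Compute the pressure of an ideal gas two ways.

    Micro route: draw each molecule's x-velocity at random from the
    Maxwell-Boltzmann distribution (a Gaussian with variance kT/m) and
    apply the kinetic-theory result P = (N/V) * m * <vx^2>.
    Macro route: the ideal gas law P = N * k * T / V.
    """
    k = 1.380649e-23                       # Boltzmann constant, J/K
    rng = random.Random(seed)
    sigma = (k * temp_K / mass_kg) ** 0.5  # std. dev. of one velocity component
    mean_vx2 = sum(rng.gauss(0.0, sigma) ** 2 for _ in range(n)) / n
    p_micro = (n / volume_m3) * mass_kg * mean_vx2  # from random micro motion
    p_macro = n * k * temp_K / volume_m3            # from the macro gas law
    return p_micro, p_macro
```

The two numbers agree to within sampling noise: purely random micro behavior, and still a lawful macro-scale equation.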

And we can see how that is starting to answer the question of how sneezing can create random actions on the micro-scale but how that only sometimes makes causal macro-scale illness. We have to look closer at the relationship between micro-scale and macro-scale events instead of assuming that they are causal “all the way down.” We can’t look at the macro world we live in and assume that everything else works the same way.

The first law establishes the existence of this “internal energy” that makes every molecule move randomly. The law is more famously known for stating the conservation of energy, the principle that energy is never made or destroyed but only converted into other forms. More exactly, it states that the total energy never changes, so that when two systems combine, the total energy is the sum of the two energies.

But what I am discussing here is the existence of internal energy. That means that, besides an object having kinetic energy (about motion) and having potential energy (coming from outside of itself, its setup), an object has internal energy (the energy inside of itself that creates its random motion).

And then the first law tells us something extra about this internal energy. The typical analogy is to a hiker moving in the mountains so as to change elevation, and we can imagine how the hiker could take different paths and still get from Point A to Point B. As the hiker goes from one place to the next via different routes, it takes a different amount of labor in each case, and a different amount of time, and he covers a different amount of distance. That is intuitive. But the first law tells us that internal energy is not like that. Changes in internal energy depend only on the initial and final state of affairs, so that in the example all we have to do is subtract the hiker’s initial elevation from the final one to find the change in elevation. We can skip worrying about what route the hiker takes, if all we need to know is the net change in altitude. And likewise, changes in internal energy depend only on the initial and final conditions, regardless of the details of what path the change takes. And that is important because it means that the path can be random, as in heat energy being random. (For a full mathematical derivation, see Dickerson, Molecular Thermodynamics, p. 89.)
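The hiker analogy is easy to put into code. In the Python sketch below (the routes and elevations are invented), the step-by-step changes along any route telescope down to final minus initial, so the path drops out of the total exactly as it does for internal energy:

```python
def net_change(waypoints):
    """Sum the step-by-step changes along a route.

    The sum telescopes: (w1 - w0) + (w2 - w1) + ... = (final - initial),
    so the route itself drops out of the answer.
    """
    return sum(b - a for a, b in zip(waypoints, waypoints[1:]))

# Two invented routes from a 1200 m trailhead to a 2100 m summit:
scenic = [1200, 1500, 1350, 1900, 1750, 2100]  # much up-and-down
direct = [1200, 1700, 2100]                    # straight up
# Both routes give the same net change: 900 m.
```

However many detours the scenic route takes, the total is the same 900 meters, because only the endpoints survive the sum.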

Yet causality, by being about one event giving rise to the next, is clearly about the importance of the path. So the first law is telling us that micro events are not causal. It is telling us that changes in internal energy are “independent” of the path of change.

And if that seems counterintuitive, well, that’s why we have a law to tell us about it.

In other words, it doesn’t make any difference which path the hiker takes, the result is the same, if all we need to know is his altitude change. And so it is with internal energy. That simply is what the first law says. The motion of micro-scale particles (think of heat moving randomly through metal, or of air completely filling a container) is independent of causality. The first law tells us so. The particles randomly rotate, vibrate, and translate from their own internal energy.

If the first law were not so, then we would be able to take Path A from here to there, then reverse back to our original position by taking Path B (with Path B costing less energy), and by repeating that cycle over and over, we could thereby generate unlimited net energy. But the first law is confirmed by the fact that no one has ever been able to make such a perpetual motion machine.

Energy, in other words, is not about the path of connections along the way but about how the final state compares with the initial one. It cannot be about how it got that way, because how it got that way is random.

And just as the finding of independence in a pendulum created scientific progress in the form of clocks, so the first law and the independence of internal energy from the path has made for the development of further equations. Today, equations that depend on the first law are often called “equations of state” because they depend only on the state of initial and final conditions and not on what happens along the way. To use such equations, we frequently designate values with a prime, as in P and P’, to mean the initial and final pressure, skipping the values in-between.
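A concrete instance, sketched in Python with invented numbers, is the combined gas law P·V/T = P′·V′/T′: only initial and final values appear, and nothing about what happened along the way:

```python
def final_pressure(p, v, t, v_prime, t_prime):
    """Combined gas law, P*V/T = P'*V'/T', solved for final pressure P'.

    Temperatures must be in kelvin. Only the initial state (p, v, t)
    and the final state (v_prime, t_prime) appear -- an equation of
    state, indifferent to the path between them.
    """
    return p * v * t_prime / (t * v_prime)

# A gas at 100 kPa in 2.0 L at 300 K, squeezed to 1.0 L at 300 K:
# final_pressure(100, 2.0, 300, 1.0, 300) gives 200 kPa, however the
# squeezing was actually carried out.
```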

When the philosopher David Hume in the 1700s analyzed the maxim that “everything has a cause,” he was not scientifically informed. Equations of state work independently of what caused the change, and they are describing things moving randomly. If everything had a cause—if the path somehow counted—then perpetual motion machines would be possible.

For that matter, even the second law, about entropy, holds that entropy, too, is path-independent, as is enthalpy. All of the major variables of thermodynamics are path-independent. It is the study of changes in heat, after all, and heat is the energy of random motion which gives everything a temperature.

It shows us that, at its most fundamental, everything is random: thermal motion.

But that is just the starting point.

discussion

It doesn’t mean that our sense of causality is necessarily for naught.

What we are left with is, yes, a world full of random actions, but also we have to factor in how those random actions are occurring within structures and other arrangements of energy. That is because energy not only makes things move randomly, but energy also sticks together, to make those arrangements. So, what we have is a world full of random micro actions, but altogether those micro actions, when within certain arrangements, end up creating predictable macro-scale outcomes. And it is these macro actions that we can see and measure. I gave numerous examples of this transition from micro to macro in my earlier essay on randomness.

What it means is that we’re justified in being as inventive as we want in elaborating our theories about macro-scale causality, as long as we find such theories fruitful. We just shouldn’t visualize causality as extending “all the way down” to the micro scale. Nor should we describe it as existing in infinite chains, because micro-level randomness intercedes, as I am about to discuss.

We are presented with the problems that I opened with here, such as being sneezed on. Since the sneeze physically creates a random spreading of the pathogen in the air, the strict causality of the sneeze automatically bringing about the spread of the disease is interrupted. The macro-scale actions—the sequence of one event (sneezing) giving rise to the next (catching the cold)—are seemingly interrupted in their deterministic automaticity by this random behavior connecting them.

But then again, that is exactly what is observed. The sneezing does not automatically spread the disease. Yet when it does, we can say that the sneezing does truly “cause” the transmission.

How do we explain that?

Each person is free to invent one’s own sense of causality, but I look at it this way. What we do when we see causality is mentally connect the dots between the macro events that we can measure, skipping over the random micro events that we only know about indirectly through our science. Causality is thus only a macro-scale phenomenon—about what is observable—but it still works because, in spite of the micro-scale random interruptions, the setup of circumstances enables the outcome to happen. It’s just that we typically ignore the role of the setup in discussing causality, which we shouldn’t do—we traditionally say that the subsequent event depends only on the preceding event making it happen, not on other features such as the present setup of circumstances—and that creates the confusion.

That might seem counterintuitive or just plain too complicated to be correct, but in practice it is actually very intuitive, as the example of being sneezed on illustrates. It is not too complicated for people to realize why being sneezed on only might spread the disease. Another example is how we can say with justification that “lightning caused the fire” even though we know that lightning is itself about random actions creating friction in the air molecules, thereby generating a static charge which builds up randomly until wind randomly moves it to a structure sticking high in the sky, such as a tree, whereupon the charge is discharged. What we do is connect our seeing the lightning with our seeing the fire and skip all the random events occurring in-between at the micro level. We detect causality occurring by looking just at the macro events.

Science shows how such a sense of causality is extremely useful, even if it is not about causality in infinite chains as is traditionally discussed in philosophy. It lets us say that sneezing “causes” transmission even as we realize that it is not a chain of inexorable events.

I realize that that might still seem disappointing to some people, so I’d like to make an analogy between seeing causality and seeing colors.

colors

Our brains, of course, just invent colors. Literally, all that is real are colorless electromagnetic waves, and our brains assign a hue to each frequency of the wave. But seeing colors helps keep us alive, and there is a whole lot of real stuff we can do with colors, such as divide them into primary and secondary colors and theorize how to mix them. We can make art with them, and we can use them for fashion and safety. It’s just that it’s incorrect to think that the hues themselves extend “all the way down” to being intrinsic properties of the electromagnetic waves.

And likewise, we can do a whole lot of powerful magical stuff with our sense of causality at the macro level, including using it to stay alive, without having to contend that it exists all the way down to the micro level. Everything is not connected causally.

work

A possible objection might be to note that the micro particles, besides moving randomly on their own, do indeed also collide with other particles, creating a mechanical component to their actions. But that is still not enough to make the whole thing be deterministic. And in any case, that is handled in thermodynamics by what is called “work.” Without going into it here, we can think of it as a combination of thermal and mechanical energy, as measured by displacement. Yet work is itself thought to be about forces as measured on a macro scale, to make the macro-scale displacement. Work is actually consistent with how I have described causality as a macro-scale phenomenon.

That also explains how we can track subatomic particles in a cloud chamber in spite of their extreme smallness. We are measuring the single micro entity but in macro terms.

other examples

I will close with a few more examples of how the kind of thermodynamic causality I have been describing is more intuitive than we might otherwise think at first.

When we say that a giant meteor colliding with the earth caused the extinction of the dinosaurs, we are connecting the macro events (the meteor striking the earth and the disappearance of the dinosaurs) while skipping over the randomly moving dust clouds and how they occluded the sunlight that would otherwise have warmed the randomly moving air.

And descriptions of physiological processes invariably start with the notion that molecules will be moving randomly, but they also assume that the molecules will not end up random, because they act within structures that fit together in a way that molds the random actions into nonrandom macro-scale events. Ions line up in heart cells via random motions but then discharge to make a beating heart. Kidneys filter wastes as randomly moving molecules either do or do not fit through the pores they encounter.

Chemists routinely use math to describe macro phenomena consisting of countless micro events altogether—it is no hindrance to using math that the micro events are changing randomly—examples include gradients, buffers, and equilibria.

In electronics, transistors acquire their macro-scale utility from being randomly doped.

It’s even intuitive in non-scientific arguments. We can differ as to what we think causes poverty, or what causes crime, and argue if they are related. But we usually do not imagine some object called poverty literally pushing people around. Instead, we see people moving uniquely (akin to moving randomly) but under differing circumstances, creating different net outcomes.

Even artificial intelligence has come around (it used to be about following algorithms like rules). We have probably seen the news of successes such as ChatGPT and might think they represent a successful application of causality and rule-following. But no. Neural nets require the deliberate introduction of randomness in order to work (weights start out randomized, training examples are shuffled, and techniques such as “dropout” randomly silence a fraction of the units, often around 20 percent), because otherwise the net hangs on some meaningless result. The variety is needed in order to try new possibilities. What you’re really doing is changing the boundary conditions, or weights, as things interact randomly. Strict rule-following doesn’t arrive at a workable answer and doesn’t mirror the actual world, at least as modeled in current AI.
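As one concrete instance of deliberately injected randomness, here is a small Python sketch of “dropout,” a standard neural-net training technique (the 20 percent rate is just a common choice, not anyone’s actual production setting): each unit’s output is randomly silenced, and the survivors are rescaled so the average is unchanged:

```python
import random

def dropout(activations, drop_rate=0.2, seed=None):
    """Inverted dropout: randomly zero out a fraction of the units.

    Survivors are scaled by 1 / (1 - drop_rate) so that, on average,
    the layer's output is unchanged even though individual units are
    silenced at random.
    """
    rng = random.Random(seed)
    keep = 1.0 - drop_rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

During training, a different random subset of the network goes silent on every pass, which forces the net to explore many configurations rather than lock onto one brittle pattern.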

The neural nets are emulating how chemical reactions come to find their most stable products, according to Hopfield’s mathematical treatment of neural nets.

conclusion

We do not need to start with causal processes in order to end up with causal processes. People have been twisting their thinking into knots for over a century trying to find a way to make it seem otherwise, especially regarding quantum mechanics. But the better approach is just to go with it. That is what most of science has learned to do. Random actions via internal energy (having a temperature) are fundamental.

To argue that actions must be causal at all levels of scale is just to assume one’s own conclusion that change can only happen via causal actions, when that is, as thermodynamics shows, just a philosophical assumption, not the science.

I tend to feel that this is one of the reasons why there needs to be a philosophy of chemistry, to go with the philosophies of physics and biology. Chemists tend to approach a problem by assuming that things start out random, but that doesn’t mean they’re going to end up that way. Indeed, a situation often needs to start out random if we are going to get the expected results. (Think of air filling a balloon.) It’s sort of the opposite of 1700s theories about science, or of today’s mechanical philosophy, which thinks of itself as patterned after physics (although that is questionable as well).

And that’s not even to get into the many “time-independent equations” that science has found. It’s hard to see how they can be describing causal relationships if they do not even depend on time. They do not have a “before” and an “after,” an initial and a final value. Yet they describe reproducible relationships.

The philosopher Bertrand Russell, fresh from his success at equating formal logic with the logic of math, went on to argue that causality was incompatible with such logic. Thirty years later, he seemed to reverse himself and tried to describe a separate logic of causality, but then admitted he had failed. A thermodynamic understanding of causality—about it being a connection between macro events—might help to explain how he could arrive at his various positions. The “laws of causality” have never been described without instigating massive contradictions.

Photo by Polverini Lian @ pixels.com