A common scientific assumption is that very small differences don’t matter. In Chaos: Making a New Science, James Gleick wrote, “A tiny error in fixing the position of Halley’s Comet in 1910 would only cause a tiny error in predicting its arrival in 1986, and the error would stay small for millions of years to come.”
In 1961, Edward Lorenz was using a computer at MIT to model weather. Twelve equations relating temperature, pressure, and wind speed gave him “toy weather,” expressed as a printed line of numbers. Wanting to extend an earlier sequence, he began in the middle, entering numbers from the printout. The printout showed results rounded to three decimal places, while the machine carried more digits internally, so Lorenz began his new sequence with a tiny discrepancy from the values the computer had used in the original run. The new run’s weather gradually diverged until it bore no resemblance to the original.
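Lorenz’s accident is easy to reproduce. His weather model had twelve equations; as a stand-in, the sketch below uses the three-variable system he published in 1963 (the “Lorenz attractor”), which shows the same sensitivity. The step size and starting values here are illustrative choices, not Lorenz’s.

```python
# Two runs of the Lorenz equations: one from a "full-precision" start,
# one restarted from the same numbers rounded to three decimal places,
# as on Lorenz's printout. The tiny discrepancy grows until the two
# trajectories bear no resemblance to each other.

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # conventional parameters
DT = 0.005  # Euler step, small enough to stay on the attractor

def step(state):
    """Advance the Lorenz system one Euler step."""
    x, y, z = state
    return (x + DT * SIGMA * (y - x),
            y + DT * (x * (RHO - z) - y),
            z + DT * (x * y - BETA * z))

def max_separation(a, b, steps):
    """Run two trajectories side by side; track how far apart they get."""
    worst = 0.0
    for _ in range(steps):
        a, b = step(a), step(b)
        worst = max(worst, sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5)
    return worst

exact   = (1.000127, 1.000456, 1.000789)  # hypothetical full-precision start
rounded = (1.000, 1.000, 1.001)           # the same start, from the printout
print(max_separation(exact, rounded, 8000))
```

The two starts differ by less than a thousandth, yet within a few thousand steps the separation grows to the size of the attractor itself.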
Lorenz’s toy weather was the beginning of what came to be called “chaos theory.” The idea is that complex systems, like weather, the stock market, politics, and individual people, are sensitive to initial conditions.
This is the so-called “butterfly effect,” a phrase Lorenz coined for the idea that something as insignificant as a butterfly flapping its wings on the other side of the globe could eventually change the weather here. Unpredictability is often blamed on the sheer number of variables involved, but some quite simple systems are inherently unpredictable. A simple pendulum is predictable, but add a joint in the middle, making it a double pendulum, and its motion becomes chaotic: two nearly identical releases soon swing along completely different paths.
Chaos theory stirred popular interest when James Gleick published his book in 1987. It has since been incorporated, along with cybernetics and systems theory, into complexity theory. But what good is it? It’s vaguely interesting that something tiny can affect large and distant events, but the world contains so many “butterflies” that the observation is almost banal. Scientists have used chaos to explain Jupiter’s Great Red Spot, but if you’re not a scientist, who cares?
Irene Sanders, author of Strategic Thinking and the New Science: Planning in the Midst of Chaos, Complexity and Change, was an aide to former Georgia Senator Sam Nunn and attended medical school. She is currently a complexity consultant and director of the Washington Center for Complexity and Public Policy.
In 1989, someone gave her a copy of Gleick’s book, and she has made a study of complexity ever since. “Unfortunately, a lot of our policy is still being developed using that linear, cause-effect thinking. We now have a way to understand complex systems and complex issues. ... There were times when world events and national crises very quickly pushed other issues aside, restructured alliances, and rearranged budget priorities. Issues seem to emerge out of nowhere.”
Once we recognize complexity, Sanders says, we know what to look for, how a system is likely to change, and where to put our attention. For instance, she cites contacts within the CIA who say the agency failed to predict the end of apartheid because all its information came from the power structure. American intelligence had no sources among the resistance and didn’t grasp its solidarity and determination.
Sanders speaks of intelligence officers studying Myanmar who were discouraged from speaking with colleagues in other Indochinese countries. “That was astonishing to me because, whether you are looking at cybersecurity or terrorism or what’s going on in one part of the world, it’s really a worldwide system.”
Dealing with complexity and chaos involves studying the big picture, looking at systems and anticipating how they might change. Relationships and influences are important: knowing when to disconnect part of an electrical grid to head off a cascading blackout, for instance, or how to protect air-transit hubs. Computer technology “provides a tool of insight much like the microscope did for biologists and the telescope for astronomers.”