Friday, January 27, 2012

Think Complexity, Part Two

My new book, Think Complexity, will be published by O'Reilly Media in March. For people who can't stand to wait that long, I am publishing excerpts here.  If you really can't wait, you can read the free version at thinkcomplex.com.

In the previous installment I outlined the topics in Think Complexity and contrasted a classical physical model of planetary orbits with an example from complexity science: Schelling's model of racial segregation.  In this installment I outline some of the ways complexity differs from classical science.

Paradigm shift?  

When I describe this book to people, I am often asked if this new kind of science is a paradigm shift.  I don't think so, and here's why.  Thomas Kuhn introduced the term "paradigm shift" in The Structure of Scientific Revolutions in 1962.  It refers to a process in the history of science where the basic assumptions of a field change, or where one theory is replaced by another. He presents as examples the Copernican revolution, the displacement of phlogiston by the oxygen model of combustion, and the emergence of relativity.

The development of complexity science is not the replacement of an older model, but (in my opinion) a gradual shift in the criteria models are judged by, and in the kinds of models that are considered acceptable.  For example, classical models tend to be law-based, expressed in the form of equations, and solved by mathematical derivation.  Models that fall under the umbrella of complexity are often rule-based, expressed as computations, and simulated rather than analyzed.  Not everyone finds these models satisfactory.

For example, in Sync, Steven Strogatz writes about his model of spontaneous synchronization in some species of fireflies.  He presents a simulation that demonstrates the phenomenon, but then writes:
I repeated the simulation dozens of times, for other random initial conditions and for other numbers of oscillators.  Sync every time. [...] The challenge now was to prove it.  Only an ironclad proof would demonstrate, in a way that no computer ever could, that sync was inevitable; and the best kind of proof would clarify why it was inevitable.  
Strogatz is a mathematician, so his enthusiasm for proofs is understandable, but his proof doesn't address what is, to me, the most interesting part of the phenomenon.  In order to prove that "sync was inevitable," Strogatz makes several simplifying assumptions, in particular that each firefly can see all the others.

In my opinion, it is more interesting to explain how an entire valley of fireflies can synchronize despite the fact that they cannot all see each other.  Think Complexity discusses how this kind of global behavior emerges from local interactions. Explanations of these phenomena often use agent-based models, which explore (in ways that would be difficult or impossible with mathematical analysis) the conditions that allow or prevent synchronization.
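To give a flavor of what an agent-based model looks like, here is a minimal sketch of my own; it is not the model from Sync or from the book.  It puts pulse-coupled oscillators on a ring, loosely in the spirit of Mirollo and Strogatz, except that each firefly sees only its nearest neighbors.  The nudge rule and all the parameters are illustrative assumptions, not calibrated to real fireflies.

    import math, cmath, random

    def simulate_fireflies(n=100, k=2, nudge=0.05, steps=20000, dt=0.01, seed=1):
        """Ring of n fireflies; each sees only k neighbors on each side.

        Every firefly's phase advances at a constant rate; when the phase
        reaches 1, the firefly flashes and resets to 0.  A flash nudges the
        phases of visible neighbors forward, which tends to pull the group
        toward a common rhythm.
        """
        random.seed(seed)
        phase = [random.random() for _ in range(n)]
        for _ in range(steps):
            phase = [p + dt for p in phase]          # free-running advance
            flashers = [i for i, p in enumerate(phase) if p >= 1]
            flashed = set(flashers)
            for i in flashers:
                phase[i] -= 1                        # flash and reset
            for i in flashers:                       # local coupling only
                for d in range(1, k + 1):
                    for j in ((i - d) % n, (i + d) % n):
                        if j not in flashed:
                            phase[j] = min(phase[j] * (1 + nudge), 1.0)
        # Kuramoto order parameter: values near 1 mean the phases are aligned
        return abs(sum(cmath.exp(2j * math.pi * p) for p in phase)) / n

    print(simulate_fireflies())  # values near 1 indicate synchronization

The point is not this particular rule but the workflow: you encode local behavior directly, run it, and measure the global outcome, rather than deriving it.  Whether and how quickly the group locks up depends on the parameters, which is exactly the kind of question these models let you explore.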

I am a computer scientist, so my enthusiasm for computational models is probably no surprise.  I don't mean to say that Strogatz is wrong, but rather that people disagree about what questions to ask and what tools to use to answer them.  These decisions are based on value judgments, so there is no reason to expect agreement.  Nevertheless, there is rough consensus among scientists about which models are considered good science, and which others are fringe science, pseudoscience, or not science at all.

I claim, and this is a central thesis of the book, that the criteria this consensus is based on change over time, and that the emergence of complexity science reflects a gradual shift in these criteria.

The axes of scientific models

I have described classical models as based on physical laws, expressed in the form of equations, and solved by mathematical analysis; conversely, models of complex systems are often based on simple rules and implemented as computations.  We can think of this trend as a shift over time along two axes:

Equation-based → simulation-based
Analysis → computation
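
To make the contrast concrete, here is a toy example of my own (not from the book).  The system dx/dt = -kx can be solved by derivation, giving x(t) = x0 e^(-kt), or it can be stepped forward numerically; only the second approach carries over to systems with no closed-form solution.

    import math

    def decay_analytic(x0, k, t):
        """Closed-form solution of dx/dt = -k x, found by derivation."""
        return x0 * math.exp(-k * t)

    def decay_simulated(x0, k, t, dt=1e-4):
        """Euler simulation of the same system; no derivation required."""
        x = x0
        for _ in range(int(t / dt)):
            x += -k * x * dt
        return x

    print(decay_analytic(1.0, 0.5, 2.0))   # 0.36787...
    print(decay_simulated(1.0, 0.5, 2.0))  # close to the analytic value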

The new kind of science is different in several other ways:

Continuous → discrete: Classical models tend to be based on continuous mathematics, like calculus; models of complex systems are often based on discrete mathematics, including graphs and cellular automata (there is a small example after this list).

Linear → non-linear: Classical models are often linear, or use linear approximations to non-linear systems; complexity science is more friendly to non-linear models.  One example is chaos theory, which explores the dynamics of systems of non-linear equations.

Deterministic → stochastic: Classical models are usually deterministic, which may reflect underlying philosophical determinism; complex models often feature randomness.

Abstract → detailed: In classical models, planets are point masses, planes are frictionless, and cows are spherical (see http://en.wikipedia.org/wiki/Spherical_cow).  Simplifications like these are often necessary for analysis, but computational models can be more realistic.

One, two → many: In celestial mechanics, the two-body problem can be solved analytically; the three-body problem cannot.  Where classical models are often limited to small numbers of interacting elements, complexity science works with larger complexes (which is where the name comes from).

Homogeneous → composite: In classical models, the elements tend to be interchangeable; complex models more often include heterogeneity.
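
As promised above, here is a sketch of the discrete, rule-based end of these axes: a one-dimensional cellular automaton.  The book covers cellular automata in more depth; this particular implementation and its parameters are mine.  Rule 30 is deterministic and about as simple as a model gets, yet it produces famously irregular behavior.

    def step(cells, rule=30):
        """Apply an elementary cellular automaton rule to one row of cells.

        Each new cell is looked up from the rule number using its old
        neighborhood (left, center, right) as a 3-bit index; the row
        wraps around at the edges.
        """
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 +
                          cells[(i + 1) % n])) & 1
                for i in range(n)]

    width = 64
    row = [0] * width
    row[width // 2] = 1          # single live cell in the middle
    for _ in range(20):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)

Everything here is discrete: cells take one of two values, and time advances in whole steps.  There is no equation to solve; the model is the computation.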

These are generalizations, so we should not take them too seriously. And I don't mean to deprecate classical science.  A more complicated model is not necessarily better; in fact, it is usually worse.

Also, I don't mean to say that these changes are abrupt or complete. Rather, there is a gradual migration in the frontier of what is considered acceptable, respectable work.  Some tools that used to be regarded with suspicion are now common, and some models that were widely accepted are now subject to more scrutiny.  For example, when Appel and Haken proved the four-color theorem in 1976, they used a computer to enumerate 1,936 special cases that were, in some sense, lemmas of their proof.  At the time, many mathematicians did not consider the theorem truly proved.  Now computer-assisted proofs are common and generally (but not universally) accepted.

Conversely, a substantial body of economic analysis is based on a model of human behavior called "Economic man," or, with tongue in cheek, Homo economicus.  Research based on this model was highly regarded for several decades, especially if it involved mathematical virtuosity.  More recently, this model is treated with more skepticism, and models that include imperfect information and bounded rationality are hot topics.

At this point I have laid out a lot of ideas for one article, and explained them very briefly.  Think Complexity gets into these topics in more detail, but I will stop here for now.  Next time I will talk about related shifts in engineering and (a little farther afield) in ways of thinking.

1 comment:

  1. I think we lack the historical perspective to judge whether it is a revolution or a paradigm shift. In terms of years it is a long period; it is short only through the lens of history.

    For me, Aumann's Interactive Epistemology papers are very appealing. The probability paper describes the empiricist side, the logic paper the "simulations". It's nice to see that a big thinker sees no contradiction between the two approaches.

    (Links: www.ma.huji.ac.il/raumann/pdf/Interactive%20epistemology1.pdf and www.ma.huji.ac.il/raumann/pdf/Interactive%20epistemology2.pdf)
