Tuesday, February 14, 2012

Think Complexity: Part Four

My new book, Think Complexity, will be published by O'Reilly Media in March. For people who can't stand to wait that long, I am publishing excerpts here.  If you really can't wait, you can read the free version at thinkcomplex.com.

And we need a blurb.  Think Complexity goes to press soon and we have a space on the back cover for a couple of endorsements.  If you like the book and have something quotable to say about it, let me know.  Thanks!

In Part One I outlined the topics in Think Complexity and contrasted a classical physical model of planetary orbits with an example from complexity science: Schelling's model of racial segregation.

In Part Two I outlined some of the ways complexity differs from classical science.  In Part Three I described differences in the ways complex models are used, and their effects on engineering and (of all things) epistemology.

In this installment, I pull together discussions from two chapters: the Watts-Strogatz model of small world graphs, and the Barabasi-Albert model of scale-free networks.  But it all starts with Stanley Milgram.


Stanley Milgram

Stanley Milgram was an American social psychologist who conducted two of the most famous experiments in social science: the Milgram experiment, which studied people's obedience to authority (http://en.wikipedia.org/wiki/Milgram_experiment), and the Small World Experiment (http://en.wikipedia.org/wiki/Small_world_phenomenon), which studied the structure of social networks.

In the Small World Experiment, Milgram sent a package to several randomly-chosen people in Wichita, Kansas, with instructions asking them to forward an enclosed letter to a target person, identified by name and occupation, in Sharon, Massachusetts (which is the town near Boston where I grew up).  The subjects were told that they could mail the letter directly to the target person only if they knew him personally; otherwise they were instructed to send it, and the same instructions, to a relative or friend they thought would be more likely to know the target person.

Many of the letters were never delivered, but of the ones that were it turned out that the average path length---the number of times the letters were forwarded---was about six.  This result was taken to confirm previous observations (and speculations) that the typical distance between any two people in a social network is about ``six degrees of separation.''

This conclusion is surprising because most people expect social networks to be localized---people tend to live near their friends---and in a graph with local connections, path lengths tend to increase in proportion to geographical distance.  For example, most of my friends live nearby, so I would guess that the average distance between nodes in a social network is about 50 miles.  Wichita is about 1600 miles from Boston, so if Milgram's letters traversed typical links in the social network, they should have taken 32 hops, not six.

Watts and Strogatz

In 1998 Duncan Watts and Steven Strogatz published a paper in  Nature, ``Collective dynamics of 'small-world' networks,'' that proposed an explanation for the small world phenomenon.  You can download it from http://www.nature.com/nature/journal/v393/n6684/abs/393440a0.html.

Watts and Strogatz started with two kinds of graph that were well understood: random graphs and regular graphs.  They looked at two properties of these graphs, clustering and path length.

Clustering is a measure of the ``cliquishness'' of the graph. In a graph, a clique is a subset of nodes that are all connected to each other; in a social network, a clique is a set of friends who all know each other.  Watts and Strogatz defined a clustering coefficient that quantifies the likelihood that two nodes that are connected to the same node are also connected to each other.
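As a sketch of this idea (not the book's code), the clustering coefficient can be computed directly from an adjacency representation; the dict-of-sets graph format here is an assumption for illustration:

```python
from itertools import combinations

def clustering_coefficient(graph):
    """Average local clustering: for each node, the fraction of pairs
    of its neighbors that are also connected to each other.

    `graph` maps each node to the set of its neighbors (an undirected
    graph stored symmetrically)."""
    coeffs = []
    for node, neighbors in graph.items():
        k = len(neighbors)
        if k < 2:
            continue  # clustering is undefined for degree < 2
        possible = k * (k - 1) / 2
        actual = sum(1 for u, v in combinations(neighbors, 2)
                     if v in graph[u])
        coeffs.append(actual / possible)
    return sum(coeffs) / len(coeffs)

# A triangle (nodes 0, 1, 2) with a pendant node 3 attached to node 2.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

In this example, nodes 0 and 1 have clustering 1, node 2 has clustering 1/3, and node 3 is skipped, so the average is 7/9.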

Path length is a measure of the average distance between two nodes, which corresponds to the degrees of separation in a social network.
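Average path length can be computed the same way; this sketch uses breadth-first search over the same dict-of-sets representation (again, my assumed format, not code from the book):

```python
from collections import deque

def average_path_length(graph):
    """Average shortest-path distance over all pairs of nodes,
    computed with breadth-first search.  `graph` maps each node to a
    set of neighbors and is assumed to be connected."""
    def bfs_distances(start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in dist:
                    dist[neighbor] = dist[node] + 1
                    queue.append(neighbor)
        return dist

    total = pairs = 0
    nodes = list(graph)
    for i, u in enumerate(nodes):
        dist = bfs_distances(u)
        for v in nodes[i + 1:]:
            total += dist[v]
            pairs += 1
    return total / pairs

# A path graph 0-1-2-3: the six pairwise distances are 1,2,3,1,2,1.
g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```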

Their initial result is what you might expect: regular graphs have high clustering and long path lengths; random graphs of the same size tend to have low clustering and short path lengths.  So neither of these is a good model of social networks, which seem to combine high clustering with short path lengths.

Their goal was to create a generative model of a social network.  A generative model tries to explain a phenomenon by modeling the process that builds or leads to the phenomenon.  In this case Watts and Strogatz proposed a process for building small-world graphs:

  1. Start with a regular graph with n nodes and degree k.  Watts and Strogatz start with a ring lattice, which is a kind of regular graph.  You could replicate their experiment or try instead a graph that is regular but not a ring lattice.
  2. Choose a subset of the edges in the graph and ``rewire'' them by replacing them with random edges.  Again, you could replicate the procedure described in the paper or experiment with alternatives.   The proportion of edges that are rewired is a parameter, p, that controls how random the graph is.  With p=0, the graph is regular; with p=1 it is random.
Watts and Strogatz found that small values of p yield graphs with high clustering, like a regular graph, and low path lengths, like a random graph.
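The two steps above can be sketched in Python.  This is a simplified version of the procedure, not the exact code from the paper or the book, and the dict-of-sets graph representation is my assumption:

```python
import random

def ring_lattice(n, k):
    """Regular ring lattice: n nodes, each connected to its k nearest
    neighbors (k/2 on each side; k should be even)."""
    graph = {v: set() for v in range(n)}
    for v in range(n):
        for offset in range(1, k // 2 + 1):
            w = (v + offset) % n
            graph[v].add(w)
            graph[w].add(v)
    return graph

def rewire(graph, p):
    """With probability p, replace each edge (u, v) with an edge from
    u to a randomly-chosen node, avoiding self-loops and duplicates."""
    nodes = list(graph)
    edges = {(u, v) for u in graph for v in graph[u] if u < v}
    for u, v in edges:
        if random.random() < p:
            choices = [w for w in nodes if w != u and w not in graph[u]]
            if not choices:
                continue
            w = random.choice(choices)
            graph[u].remove(v)
            graph[v].remove(u)
            graph[u].add(w)
            graph[w].add(u)
    return graph

# p=0 leaves the lattice regular; p=1 makes it essentially random.
g = rewire(ring_lattice(20, 4), p=0.1)
```

Rewiring preserves the number of edges, so the total degree of the graph is unchanged; only the wiring pattern becomes more random as p grows.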


Barabasi and Albert

In 1999 Barabasi and Albert published a paper in Science, ``Emergence of Scaling in Random Networks,'' that characterizes the structure (also called ``topology'') of several real-world networks, including graphs that represent the interconnectivity of movie actors, world-wide web (WWW) pages, and elements in the electrical power grid in the western United States.  You can download the paper from http://www.sciencemag.org/content/286/5439/509.

They measure the degree (number of connections) of each node and compute P(k), the probability that a vertex has degree k; then they plot P(k) versus k on a log-log scale.  The tail of the plot fits a straight line, so they conclude that it obeys a power law; that is, as k gets large, P(k) is asymptotic to k^(-γ), where γ is a parameter that determines the rate of decay.
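One way to check for this kind of behavior is to compute the empirical degree distribution and estimate the slope of log P(k) versus log k by least squares.  The helper names here are mine, not from the paper:

```python
import math
from collections import Counter

def degree_pmf(graph):
    """Empirical P(k): the fraction of nodes with degree k.
    `graph` maps each node to a set of neighbors."""
    degrees = [len(neighbors) for neighbors in graph.values()]
    counts = Counter(degrees)
    n = len(degrees)
    return {k: c / n for k, c in sorted(counts.items())}

def loglog_slope(pmf):
    """Least-squares slope of log P(k) versus log k.  For a power law
    P(k) ~ k**(-gamma), the slope estimates -gamma (normalization
    only shifts the intercept, not the slope)."""
    xs = [math.log(k) for k in pmf]
    ys = [math.log(p) for p in pmf.values()]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# An exact power law with gamma = 2 recovers a slope of -2.
pmf = {k: k ** -2.0 for k in (1, 2, 4, 8, 16)}
```

In practice you would plot the points (for example with matplotlib's `loglog`) and fit only the tail, since real distributions rarely follow a power law at small k.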

They also propose a model that generates random graphs with the same property.  The essential features of the model, which distinguish it from the Watts-Strogatz model, are:

Growth: Instead of starting with a fixed number of vertices, Barabasi and Albert start with a small graph and add vertices gradually.

Preferential attachment: When a new edge is created, it is more likely to connect to a vertex that already has a large number of edges.  This ``rich get richer'' effect is characteristic of the growth patterns of some real-world networks.
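A minimal sketch of these two mechanisms might look like this (my simplification, not the authors' code; the starting configuration and sampling details are assumptions):

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a graph to n nodes: start with a complete graph on m+1
    nodes, then add nodes one at a time, each connecting to m distinct
    existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    graph = {v: set() for v in range(m + 1)}
    for u in range(m + 1):
        for v in range(u + 1, m + 1):
            graph[u].add(v)
            graph[v].add(u)
    # Each node appears in `endpoints` once per incident edge, so a
    # uniform draw from this list implements preferential attachment.
    endpoints = [v for v in graph for _ in graph[v]]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        graph[new] = set()
        for t in targets:
            graph[new].add(t)
            graph[t].add(new)
            endpoints += [new, t]
    return graph

g = barabasi_albert(50, 2, seed=1)
```

The repeated-endpoints list is a standard trick for degree-proportional sampling: a node of degree 10 appears ten times, so it is ten times as likely to be chosen as a node of degree 1.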

Finally, they show that graphs generated by this model have a distribution of degrees that obeys a power law.  Graphs that have this property are sometimes called scale-free networks; see http://en.wikipedia.org/wiki/Scale-free_network. That name can be confusing because it is the distribution of degrees that is scale-free, not the network.

In order to maximize confusion, distributions that obey a power law are sometimes called scaling distributions because they are invariant under a change of scale.  That means that if you change the units the quantities are expressed in, the slope parameter, γ, doesn't change.  You can read http://en.wikipedia.org/wiki/Power_law for the details, but it is not important for what we are doing here.

Explanatory models


We started the discussion of networks with Milgram's Small World Experiment, which shows that path lengths in social networks are surprisingly small; hence, ``six degrees of separation''.  When we see something surprising, it is natural to ask ``Why?'' but sometimes it's not clear what kind of answer we are looking for.

One kind of answer is an explanatory model.  The logical structure of an explanatory model is:

1. In a system, S, we see something observable, O, that warrants explanation.
2. We construct a model, M, that is analogous to the system; that is, there is a correspondence between the elements of the model and the elements of the system.
3. By simulation or mathematical derivation, we show that the model exhibits a behavior, B, that is analogous to O.
4. We conclude that S exhibits O because S is similar to M, M exhibits B, and B is similar to O.

At its core, this is an argument by analogy, which says that if two things are similar in some ways, they are likely to be similar in other ways.  Argument by analogy can be useful, and explanatory models can be satisfying, but they do not constitute a proof in the mathematical sense of the word.

Remember that all models leave out, or ``abstract away,'' details that we think are unimportant.  For any system there are many possible models that include or ignore different features. And there might be models that exhibit different behaviors, B, B' and B'', that are similar to O in different ways. In that case, which model explains O?

The small world phenomenon is an example: the Watts-Strogatz (WS) model and the Barabasi-Albert (BA) model both exhibit small world behavior, but they offer different explanations:

1. The WS model suggests that social networks are ``small'' because they include both strongly-connected clusters and ``weak ties'' that connect clusters.
2. The BA model suggests that social networks are small because they include nodes with high degree that act as hubs, and that hubs grow, over time, due to preferential attachment.

As is often the case in young areas of science, the problem is not that we have no explanations, but that we have too many.

Questions

[Think Complexity can be used as a textbook, so it includes exercises and topics for class discussion.  Here are some ideas for discussion and further reading.]

Are these explanations compatible; that is, can they both be right? Which do you find more satisfying as an explanation, and why?  Is there data you could collect, or an experiment you could perform, that would provide evidence in favor of one model over the other?

Choosing among competing models is the topic of Thomas Kuhn's essay, ``Objectivity, Value Judgment, and Theory Choice,'' which is available online in PDF.  What criteria does Kuhn propose for choosing among competing models? Do these criteria influence your opinion about the WS and BA models? Are there other criteria you think should be considered?

    13 comments:

    1. Hi Allen. We're still using your Think Python book here at Skyline HS in Colorado. I was thrilled when I noticed you doing a book on complexity using Python. Last year I had a group of students do a case study on Melanie Mitchell's "Robby the Robot" genetic algorithms work. We mess around with a lot of the facets of complexity science that you are covering here, and I'll be buying a few copies of this and Think Stats to have around home and the lab.

      Take care!
      Richard

      1. Hi Richard. Your genetic algorithm project might make a good case study for the next edition. See the call for submissions here: http://greenteapress.com/complexity/case_studies/

        "I invite readers to submit additional case studies. Reports that meet the criteria will be published in an online supplement to this book, and the best of them will be included in future print editions."

        Regards,
        Allen

    2. Hi Allen,

We have some Python code that presents an easy way for people to program "Robby", and also code that lets you enter the "dna" for a particular bot and test it. Haven't found a good home for this code, so we'll look into wrapping it up for a case study in your book. Luckily, all students involved on the case study were Freshmen, so adding this to their future letters of recommendation is a major plus. :-)

      1. That sounds great. I am looking forward to seeing these case studies!

    3. Hi Allen,

I'll see if we can get this done. The students will be presenting the research at this year's SIGCSE conference. :-) They are your typical high-achievers, and therefore have a lot on their plates. I have them all in my APCS class, though, so I will keep on them about this!

      1. Congrats to you and your students! They must have done some interesting work. Looking forward to hearing more.

      2. Allen,

        We're working on this and currently working on the original C code along with our code that allows one to test their ability to program in a strategy to compete against the GA strategies. Any preference on GUI code? We can use John Zelle's graphics module, or just plain Tkinter. Or Pyglet.

        We'll also be including links to our site which will include an HTML5 Canvas version, and maybe a Java and Processing version.

        Haven't done much with WX Python or PyQT, but we could consider those.

      3. I use Tkinter, and a wrapper I wrote, called Gui, that comes with Swampy. But any of the common Python GUI toolkits would be fine.

4. Okay, thanks. We're looking into how Python with Numpy will handle the entire study, not just the parts we wrote, but we will have a Python version of the "try it yourself" extension that we wrote, allowing users to compete with the GA solutions found in the C code trials.

        Thanks for all your contributions to CS Education, Common Sense, Logic, and life in general. :-)

      5. We are actually STILL working on this, and I have a student doing work this semester to finally wrap all of this up and send it to you. The original group of students are now Seniors and this is their farewell gift giving back to our CS program.

    4. Hi Allen,

      First, let me say an enormous thank you for this book. It really allows you to engage with this very interesting subject in a way that could otherwise require years of study in CS, math, and physics. As someone who came to this subject with little background in any of those, this book has allowed me to explore this topic which I've become very interested in despite my lack of prior knowledge. For your contributions to that and other similar endeavors, I thank you.

One question though: are there any plans to release solutions to the exercises? It has been my experience that when you are starting out in a new subject area, in particular computer programming, you can have at a problem for days, but in the end, sometimes you just don't know what you don't know. When that happens, it is very useful to be able to look at a solution; not as a cheat to deprive yourself of building and breaking your own models, but as a glimpse of enlightenment so you continue forward on that journey.

      Again, many thanks for creating this book and making it so widely available!

      Benjamin

      1. Hi Benjamin,

        I'm glad you are enjoying the book, and thanks for this note.

        Some of the exercises have links to solutions. There are some other solutions in this repository:

        https://code.google.com/p/complexity/source/browse/#svn%2Ftrunk%2Fcode

        but they are not well organized. Next time I teach my class, I will enlist the students to generate more solutions and make them available in a friendlier form.

      2. woot! \0/ Awesome. This is great. Thanks for sharing!
