Wednesday, July 28, 2010

A new kind of economics


This is the first of the pieces I've written on the back of a workshop that I attended at the end of June on agent-based modelling of the economy. It appears (in edited form) in the August issue of Prospect. I have also written on this for the Economist - will post that shortly. And I am writing on the more general issue of large-scale simulation of the economy and other social systems for New Scientist.

***************************************************

Critics of conventional economic theory have never had it so good. The credit crunch has left the theory embraced by most of the economic community a sitting duck. Using the equations of the most orthodox theoretical framework – so-called dynamic stochastic general equilibrium (DSGE) models – Federal Reserve governor Frederic Mishkin forecast in the summer of 2007 that the banking problems triggered by stagnation of the US housing market would be a minor blip. The story that unfolded subsequently, culminating in September 2008 in the near-collapse of the global financial market, seemed to represent the kind of falsification that would bury any theory in the natural sciences.

But it has not done so here, and probably will not. How come? The Nobel laureate Robert Lucas, who advocated the replacement of Keynesian economic models with DSGE models in the 1970s, has explained why: this theory is explicitly not designed to handle crashes, so of course it will not predict them. That’s not a shortcoming of the models, Lucas says, but a reflection of the stark reality that crashes are inherently unpredictable by this or any other theory. They are aberrations, lacunae in the laws of economics.

You can see his point. Retrospective claims to have foreseen the crisis amount to little more than valid but generalized concerns about the perils of prosperity propped up by easy credit, or of complex financial instruments whose risks are opaque even to those using them. No one forecast the timing, direction or severity of the crash – and how could they, given that the debts directly tied up in ‘toxic’ sub-prime mortgage defaults were relatively minor?

But this pessimistic position is under challenge. For Lucas is wrong; there are models of financial markets that do generate crashes. Fluctuations ranging from the quotidian to the catastrophic are an intrinsic feature of some models that dispense with the simplifying premises of DSGE and instead try to construct market behaviour from the bottom up. They create computer simulations of large numbers of ‘agents’ – individuals who trade with one another according to specified decision-making rules, while responding to each other’s decisions. These so-called agent-based models (ABMs) take advantage of the capacity of modern computers to simulate complex interactions between vast numbers of agents. The approach has already been used successfully to understand and predict traffic flow and pedestrian movements – here the agents (vehicles or people) are programmed to move to their destination at a preferred speed unless they must slow down or veer to avoid a collision – as well as to improve models of contagion in disease epidemics.
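To make that concrete, the traffic example corresponds to a famous minimal ABM, the Nagel–Schreckenberg cellular automaton. Here is a sketch of it in Python – my own illustration, not a model from the workshop – in which each agent accelerates towards a preferred speed, brakes to avoid the car ahead, and occasionally dawdles at random:

import random

ROAD, CARS, VMAX, P_DAWDLE, STEPS = 100, 30, 5, 0.3, 200

positions = random.sample(range(ROAD), CARS)   # unique cells on a ring road
speed = {x: 0 for x in positions}

for _ in range(STEPS):
    order = sorted(speed)
    # gap to the next car ahead (the road wraps around)
    gaps = {x: (order[(i + 1) % CARS] - x) % ROAD for i, x in enumerate(order)}
    new = {}
    for x in order:
        v = min(speed[x] + 1, VMAX)        # accelerate towards preferred speed
        v = min(v, gaps[x] - 1)            # brake: never reach the car in front
        if v > 0 and random.random() < P_DAWDLE:
            v -= 1                         # random dawdling (driver imperfection)
        new[(x + v) % ROAD] = v
    speed = new

print('mean speed:', sum(speed.values()) / CARS)

No individual driver ever decides to cause a jam, yet at this traffic density stop-and-go waves emerge spontaneously from these three rules – precisely the kind of bottom-up collective behaviour that equilibrium models exclude by construction.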

A handful of economists, along with interlopers from the natural sciences, believe that agent-based models offer the best hope of understanding the economy in all its messy glory, rather than just the decorous aspects addressed by conventional theories. At a workshop in Virginia in June, I heard how ABMs might help us learn the lessons of the credit crunch, anticipate and guard against the next one, and perhaps even offer a working model of the entire economic system.

Some aspects of ABMs are so obviously an improvement on conventional economic theories that outsiders may find it bizarre that they remain marginalized. Agents, like real traders, can behave in diverse ways. They can learn from experience. They are affected by each other’s actions, potentially leading to the herd behaviour that undoubtedly afflicts markets. ABMs, unlike DSGE models, can include institutions such as banks (a worrying omission, you might imagine, in models of financial markets). Some of these factors can be incorporated into orthodox theories, but not easily or transparently, and often they are not.

What upsets traditional economists most, however, is that ABMs are ‘non-equilibrium’ models: they need never settle into a steady state in which prices adjust until markets ‘clear’, with supply perfectly matched to demand. Conventional economic thinking has, more or less since Adam Smith, assumed the reality of this platonic ideal, ruffled only by external ‘shocks’ such as political events and policies. In its most simplistic form, the perfect market demands laissez-faire free trade, and is hindered only by regulation and intervention.

Even though orthodox theorists acknowledge that ‘market imperfections’ cause deviations from this ideal (‘market failures’), that very terminology gives the game away. In ABMs, ‘imperfections’ and ‘failures’ are generally natural, emergent features of the models’ more realistic ingredients. This implies a totally different view of how the economy operates. For example, feedbacks such as herd-like trading behaviour can create bubbles, in which commodity prices soar on a wave of optimism, and crashes, when panic sweeps across the trading floor. It seems clear that such amplifying processes turned a downturn in the US housing market into a freezing of credit throughout the entire banking system.

What made the Virginia meeting, sponsored by the US National Science Foundation, unusual is that it was relatively heedless of these battle lines between conventional and alternative thinkers. Committed agent-based modellers mixed with researchers from the Federal Reserve, the Bank of England and the Rand Corporation, specialists in housing markets and policy advisers. The goal was both to unravel the lessons of the credit crunch and to discuss the feasibility of making immense ABMs with genuine predictive capability. That would be a formidable enterprise, requiring the collaboration of many different experts and probably costing tens of millions of dollars. Even with the resources, it would probably take at least five years to have a model up and running.

Once that would have seemed a lot to gamble. Now, with a bill from the crisis running to trillions (and the threat of more to come), to refuse this investment would border on the irresponsible. Could such a model predict the next crisis, though? That’s the wrong question. The aim – and there is surely no president, chancellor, or lending or investment bank CEO who does not now crave this – would be to identify where the systemic vulnerabilities lie, what regulations might mitigate them (and which would do the opposite), and whether early-warning systems could spot danger signs. We’ve done it for climate change. Does anyone now doubt that economic meltdown poses comparable risks and costs?

Monday, July 26, 2010

Darwin vs D'Arcy: a false dichotomy?


I’ve just been directed towards P. Z. Myers’ Pharyngula blog in which, during the course of a dissection of Fodor and Piattelli-Palmarini’s book What Darwin Got Wrong, Myers has the following to say about D’Arcy Thompson and On Growth and Form:

D’Arcy Wentworth Thompson was wrong.

Elegantly wrong, but still wrong. He just never grasped how much of genetics explained the mathematical beauty of biology, and it's a real shame — if he were alive today, I'm sure he'd be busily applying network theory to genetic interactions.

[Sorry, must stop you there. Not even Fodor and Piattelli-Palmarini called their book Darwin Was Wrong. I suspect they wanted to, but could not justify it even to themselves. D’Arcy Thompson’s book is over 1000 pages long. Is it all wrong? Simple answer: of course it is not. Take a look at this, for example. I know; this is simply rhetoric. It’s just that I still believe it matters to find the right words, rather than sound bites.]

Let's consider that Fibonacci sequence much beloved by poseurs. It's beautiful, it is so simple, it appears over and over again in nature, surely it must reflect some intrinsic, fundamentally mathematical ideal inherent in the universe, some wonderful cosmic law — it appears in the spiral of a nautilus shell as well as the distribution of seeds in the head of a sunflower, so it must be magic. Nope. In biology, it’s all genes and cellular interactions, explained perfectly well by the reductionism [Mary] Midgley deplores [in her review of F&P-P].

The Fibonacci sequence (1, 1, 2, 3, 5, 8…each term generated by summing the previous two terms) has long had this kind of semi-mystical aura about it. It's related to the Golden Ratio, phi, of 1.6180339887… because, as you divide each term by the previous term, the ratio tends towards the Golden Ratio as you carry the sequence out farther and farther. It also provides a neat way to generate logarithmic spirals, as we see in sunflowers and nautiluses. And that's where the genes sneak in.
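[Me again, briefly. That convergence, at least, is easy to check for yourself – a few throwaway lines of Python, my aside and not part of Myers’ post:

a, b = 1, 1
for _ in range(40):
    a, b = b, a + b        # each term is the sum of the previous two
print(b / a)               # 1.618033988749895
print((1 + 5 ** 0.5) / 2)  # the Golden Ratio phi, for comparison

Forty terms in, the ratio of successive terms agrees with phi to the limits of floating-point precision. Now back to Myers.]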

Start with a single square on a piece of graph paper. Working counterclockwise in this example, draw a second square with sides of the same length next to it. Then a third square with the same dimensions on one side as the previous two squares. Then a fourth next to the previous squares…you get the idea. You can do this until you fill up the whole sheet of paper. Now look at the lengths of each side of the squares in the series — it's the Fibonacci sequence, no surprise at all there.

You can also connect the corners with a smooth curve, and what emerges is a very pretty spiral — like a nautilus shell.

It's magic! Or, it's mathematics, which sometimes seems like magic! But it's also simple biology. I look at the whirling squares with the eyes of a developmental biologist, and what do I see? A simple sequential pattern of induction. A patch of cells uses molecules to signal an adjacent patch of cells to differentiate into a structure, and then together they induce a larger adjacent patch, and together they induce an even larger patch…the pattern is a consequence of a mathematical property of a series expressed on a 2-dimensional sheet, but the actual explanation for why it recurs in nature is because it's what happens when patches of cells recruit adjacent cells in a temporal sequence. Abstract math won't tell you the details of how it happens; for that, you need to ask what are the signaling molecules and what are the responding genes in the sunflower or the mollusc. That's where Thompson and these new wankers of the pluralist wedge fail — they stop at the cool pictures and the mathematical formulae and regard the mechanics of implementation as non-essential details, when it's precisely those molecular details that generate the emergent property that dazzles them…

There is nothing in this concept that vitiates our modern understanding of evolutionary theory, the whole program of studying changes in genes and their propagation through populations. That's the mechanism of evolutionary change. What evo-devo does is add another dimension to the issue: how does a mutation in one gene generate a ripple of alterations in the pattern of expression of other genes? How does a change in a sequence of DNA get translated into a change in form and physiology?

Those are interesting and important questions, and of course they have consequences on evolutionary outcomes…but they don't argue against genetics, population genetics, speciation theory, mutation, selection, drift, or the whole danged edifice of modern evolutionary biology. To argue otherwise is like claiming the prettiness of a flower is evidence against the existence of a root.

OK (hello, me again), I think I’d go along with just about all of this, apart from a suspicion that there is probably a better term for ‘wankers of the pluralist wedge’. Indeed, it is precisely how this self-organization is initiated at the biomolecular/cellular level that I have explored, both in phyllotaxis and in developmental biology generally, in my book Shapes (OUP, 2009) (alright, but I’m just saying). Yet there seems to be a big oversight here. Myers seems to be implying that, because genetic signals are involved, phyllotactic patterns are adaptive. I’m not aware that there is any evidence for that. In fact, quite the contrary: it seems that spiral Fibonacci phyllotaxis is the generic pattern for any meristem budding process that operates by some reaction-diffusion scheme, or indeed by any more general process in which the pattern elements experience an effective mutual repulsion in this cylindrical geometry (see here). So apparently, in phyllotaxis at least, the patterns and shapes are not a product of natural selection. Possessing leaves is surely adaptive, but there seems to be little choice in where they go if they are to be initiated by diffusing hormones. In his review of F&P-P, Michael Ruse puts it this way: ‘The order of a plant’s leaves may be fixed, but how those leaves stand up or lie down is selection-driven all of the way.’
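If you want a feel for how little ‘choice’ there is, here is a toy simulation in the spirit of Douady and Couder’s models of phyllotaxis – my own Python sketch, with parameters chosen purely for illustration, not anyone’s published code. Each new primordium appears on an inner ring at the angle where the ‘repulsion’ from all existing primordia is weakest, and everything then drifts slowly outward:

import math

G = 1.05     # radial growth per time step: the one control parameter (my choice)
N = 100      # number of primordia to generate
GRID = 720   # candidate angles, 0.5-degree resolution

prim = []    # (radius, angle) of existing primordia
thetas = []

for n in range(N):
    prim = [(r * G, t) for r, t in prim]   # older primordia drift outward
    best, best_e = 0.0, float('inf')
    for k in range(GRID):
        th = 2 * math.pi * k / GRID
        # inverse-square repulsion from every existing primordium
        e = sum(1.0 / (r * r + 1 - 2 * r * math.cos(th - t)) for r, t in prim)
        if e < best_e:
            best, best_e = th, e
    prim.append((1.0, best))
    thetas.append(best)

div = [math.degrees((b - a) % (2 * math.pi)) for a, b in zip(thetas, thetas[1:])]
print([round(d, 1) for d in div[-8:]])  # divergence angles between successive primordia

Nothing in the rules mentions Fibonacci or the golden angle, yet for slow enough growth the divergence angle settles near 137.5 degrees (or its mirror image, 222.5); crank G up and the system locks instead into the 180-degree distichous pattern, as Douady and Couder showed. The spiral is the generic outcome of repulsion in this geometry, not something selection has to build.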

So sure, there is absolutely nothing in this picture that challenges Neodarwinism. And sure, we should say so. But it does imply that, in the case of plants, an important aspect of shape determination may lie beyond the reach of natural selection to do much about. And this surely suggests that, since the same processes of morphogen diffusion operate in animal development, there might equally be aspects of that process too that have little to do with natural selection. Myers alludes to the case of spiralling mollusc shells: well yes, here too it appears that the basic logarithmic-spiral shape is going to be enforced by the simple maths of self-similar growth, and all evolution can do is fine-tune the contours of that spiral. That, indeed, is what Myers has said, though he appears to think he has not: the pattern is an inevitable consequence of the maths of the growth process. So no, it’s not magic. But it’s not in itself adaptive either. And correct me if I’m wrong, but I believe that was basically D’Arcy Thompson’s point (which is not to deny that he was unreasonably suspicious of adaptive explanations).

One of the points that F&P-P make is that insufficient effort has been devoted to asking how far these constraints operate. I agree with that much, and the blasé way in which Myers implies self-organization is just enslaved by natural selection perhaps explains why this is so. Let me say this clearly (because, my God, you have to do that with all these fellows): of course canny Neodarwinists accept that not every aspect of growth and form is adaptive (and by the way, I’m this kind of Neodarwinist too). But it seems quite possible that even rather significant morphological features such as phyllotactic patterns may be included in these non-adaptive traits – and that is less commonly recognized. Ian Stewart argued the same point in Life’s Other Secret. Anyone wishing to argue that such constraints undermine the case for natural selection happening at all is of course talking utter nonsense (and not even Fodor and Piattelli-Palmarini go that far). But it’s an interesting issue, and I’m made uncomfortable when people, through understandable fear of creationism’s malign distortions, want to insist that it’s not.

Thursday, July 22, 2010

The Disappearing Spoon


I have a review of Sam Kean’s book The Disappearing Spoon in the latest issue of Nature. I am posting the pre-edited version here mostly because a change made to the text after I’d seen the proofs has inverted my meaning in the published version in an important way, rendering it most confusing. Such things happen. But this is what it was meant to say.

I really didn’t want to be too hard on this book, and I hope I wasn’t – it does have genuine merits, and I feel sure Kean will write some more good stuff. But it did sometimes make me grind my teeth.

********************************************************************* 

The Disappearing Spoon

Sam Kean
Little, Brown & Co, New York.
400 pages
$24.99

Can there be a more pointless enterprise in scientific taxonomy than redesigning the Periodic Table? What is it that inspires these spirals, pretzels, pyramids and hyper-cubes? They hint at a suspicion that we have not yet fully cracked the geometry of the elements, that there is some hidden understanding to be teased out from these baroque juxtapositions of nature’s ‘building blocks’. It is probably the same impulse that motivated grand unified theories and supersymmetry – a determination to find cryptic order and simplicity, albeit here inappropriately directed towards contingency.

To call the Periodic Table contingent might elicit howls of protest, for the allowed configurations of electrons around nuclei are surely a deterministic consequence of quantum mechanics. But the logic of these arrangements is in the end tortuous, with the electron-shell occupancy (2, 8, 18…) subdivided and interleaved. The delicate balance of electron-electron interactions creates untidy anomalies such as non-sequential sub-shell filling and the postponed incursions of the d and f subshells, making the Periodic Table especially unwieldy in two dimensions. And relativistic effects – the distortion of electron energies by their tremendous speeds in heavy atoms – create oddities such as mercury’s low melting point and gold’s yellow lustre. All can be explained, but not elegantly.
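The tortuousness is easy to demonstrate. The standard rule of thumb for the filling sequence – the Madelung or n+l rule – orders subshells by n+l, with ties broken by smaller n. A few lines of Python (my illustration) generate the interleaved sequence; and even this is only a rule of thumb, flouted by chromium, copper and a good fraction of the d and f blocks:

letters = 'spdf'
# subshells (n, l) available up to n = 7
shells = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
# Madelung rule: fill in order of n + l, ties broken by smaller n
order = sorted(shells, key=lambda nl: (nl[0] + nl[1], nl[0]))
print(' '.join(f'{n}{letters[l]}' for n, l in order[:19]))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p

The 4s-before-3d and 6s-before-4f inversions are precisely the postponed incursions that make the Table so awkward to lay out on a flat page.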

There is thus little to venerate aesthetically in the Periodic Table, a messy family tree whose charm stems more from its quirks than its orderliness. No one doubts its mnemonic utility, but new-fangled configurations of the elements will not improve that function more than infinitesimally. It seems perverse that we continue to regard the Table as an object of beauty, rather than as just the piecemeal way things turned out at this level in the hierarchy of matter.

More pertinently, it seems odd still to regard it as the intellectual framework of chemistry. Sam Kean’s The Disappearing Spoon implicitly accepts that notion, although he is more interested in presenting it as a cast of characters, a way of telling stories about ‘all of the wonderful and artful and ugly aspects of human beings and how we interact with the physical world.’ Those stories are here unashamedly as much about physics as chemistry, for exploring the nether reaches of the Periodic Table has depended on nuclear physics and particle accelerators. With molecules featuring only occasionally as receptacles into which atoms of specific elements are fitted like stones in jewellery, The Disappearing Spoon is not the survey of chemistry it might at first seem.

So what, you might say – except that by making the Periodic Table the organizational emblem of his book, Kean ends up with a similarly piecemeal construction, an arrangement of facts about the behaviours and histories of the elements rather than a thesis about our conception of the material world. It is an attractive collection of tales, but lacks a moral: resolutely from the ‘there’s a thing’ school of science writing, it is best taken in small, energizing bites rather than digested in one sitting. This makes for enjoyable snacking, and I defy anyone not to learn something – in my case, for example, the story (treated with appropriate caution) of Scott of the Antarctic’s misadventure with tin solder, allegedly converted by the extreme cold into a brittle allotrope. The more familiar tale of the disintegrating buttons of Napoleon’s troops in the fateful Russian campaign, alluded to here, furnished the title of Penny Le Couteur and Jay Burreson’s portmanteau of ‘molecules that changed history’, Napoleon’s Buttons (Tarcher/Putnam, 2003), another example of this genre – and indeed most of Kean’s stories have been told before.

It should be said, moreover, that when the reader learns something, it is at what we might call a particular cognitive level – namely, the level at which Kelvin considers Rutherford to be ‘full of crap’ and at which William Crookes’ dalliance with spiritualism enabled ‘135 years of New Age-y BS’. There’s a fine line between accessible informality and ahistorical sloppiness, between the wryness of hindsight and smirks at the conventions (and sartorial norms) of the past. And although Kean’s writing has the virtues of energy and pace, one hopes that his cultural horizons might come to extend beyond the United States: rarely have I felt so constantly reminded of an author’s nationality, whether by Cold War partisanship or references to Mentos and Life Savers.

More serious is the Whiggish strain that turns retrospective errors into irredeemable gaffes rather than the normal business of science. Emilio Segrè certainly slipped up when he failed to spot the first transuranic element, neptunium, and Linus Pauling’s inside-out model of DNA was worse than a poor guess, ignoring the implausibility of the closely packed anionic phosphate groups. But scientists routinely perpetrate such mistakes, and it is more illuminating to put them in context than to present them as pratfalls.

The Disappearing Spoon is a first book, and its flaws detract only slightly from the promise its author exhibits. His next will doubtless give a more telling indication of what he can do.

Wednesday, July 21, 2010

Why music is good for you


Here’s my latest Muse article for Nature News. I hope it does not sound in any way critical of the peg paper (reference 3), which is a very nice read.

*********************************************************************

A survey of the cognitive benefits of music makes a valid case for its educational importance. But that's not the best reason to teach all children music.

Remember the Mozart effect? Thanks to a suggestion in 1993 that listening to Mozart makes you cleverer, there has been a flood of compilation CDs filled with classical tunes that will allegedly boost your baby’s brain power.

Yet there’s no evidence for this claim, and indeed the original ‘Mozart effect’ paper [1] did not make it. It reported a slight, short-term performance enhancement in some spatial tasks when preceded by listening to Mozart as opposed to sitting in silence. Some follow-up studies replicated the effect, others did not. None found it specific to Mozart; one study showed that pop music could have the same effect on schoolchildren [2]. It seems this curious but marginal effect stems from the cognitive benefits of any enjoyable auditory stimulus, which need not even be musical.

The original claim doubtless had such inordinate impact because it plays to a long-standing suspicion that music makes you smarter. And as neuroscientists Nina Kraus and Bharath Chandrasekaran of Northwestern University in Illinois point out in a review in Nature Reviews Neuroscience [3], there is good evidence that music training reshapes the brain in ways that convey broader cognitive benefits. It can, they say, lead to ‘changes throughout the auditory system that prime musicians for listening challenges beyond music processing’ – such as interpreting language.

This is no surprise. Many sorts of mental training and learning alter the brain, just as physical training alters the body, and learning-related structural differences between the brains of musicians and non-musicians are well established [4]. Moreover, both neurological and psychological tests show that music processing draws on cognitive resources that are not music-specific, such as pitch processing, memory and pattern recognition [5] – so cultivating these mental functions through music would naturally be expected to have a wider payoff. The interactions are two-way: the pitch sensitivity imbued by tonal languages such as Mandarin Chinese, for example, enhances the ability to name a musical note just from hearing it (called absolute pitch) [6].

We can hardly be surprised, meanwhile, that music lessons improve children’s IQ [7], given that these will nourish general faculties such as memory, coordination and attentiveness. Kraus and Chandrasekaran now point out that, thanks to the brain’s plasticity (ability to ‘rewire’ itself), musical training sharpens our sensitivity to pitch, timing and timbre, and as a result our capacity to discern emotional intonation in speech, to learn our native and foreign languages, and to identify statistical regularities in abstract sound stimuli.

Yet all these benefits of music education have done rather little to alter a common perception that music is an optional extra to be offered (beyond tokenistic exposure) only if children have the time and inclination. Ethnomusicologist John Blacking put it more damningly: we insist that musicality is a rare gift, so that music is to be created by a tiny minority for the passive consumption of the majority [8]. Having spent years among African cultures that recognized no such distinctions, Blacking was appalled at the way this elitism labelled most people ‘unmusical’.

Kraus and Chandrasekaran rightly argue that the marginalization of music training in schools ‘should be reassessed’ in light of the benefits it may offer by ‘improving learning skills and listening ability’. But it will be a sad day when the only way to persuade educationalists to embrace music is via its side-effects on cognition and intelligence. We should be especially wary of that argument in this age of cost-benefit analyses, targets and utilitarian impact assessments. Music should indeed be celebrated (and studied) as a gymnasium for the mind; but ultimately its value lies with the way it enriches, socializes and humanizes us qua music.

And while it in no way detracts from the validity of calling for music to be essential in education, it is significant that musical training, like any other pleasure, has its hazards when taken to excess. I was recently privileged to discuss with the pianist Leon Fleisher his traumatic but fascinating struggle with focal dystonia, a condition that results in localized loss of muscle control. Fleisher’s dazzling career as a concert pianist was almost ended in the early 1960s when he found that two fingers of his right hand insisted on curling up. After several decades of teaching and one-handed playing, Fleisher regained the use of both hands through a regime of deep massage and injections of botox to relax the muscles. But he says his condition is still present, and he must constantly battle against it.

Focal dystonia is not a muscular problem (like cramp) but a neural one: over-training disrupts the feedback between muscles and brain, expanding the representation of the hand in the sensory cortex until the neural correlates of the fingers blur. It is the dark side of neural plasticity, and not so uncommon – an estimated one in a hundred professional musicians suffer from it, though some do so in secrecy, fearful of admitting to the debilitating problem.

We would be hugely impoverished without virtuosi such as Fleisher. But his plight serves as a reminder that hot-housing has its dangers, not only for the performers but (as Blacking suggests) for the rest of us. Give us fine music, but rough music too.

References

1. Rauscher, F. H., Shaw, G. L. & Ky, K. N. Nature 365, 611 (1993).
2. Schellenberg, E. G. & Hallam, S. Ann. N. Y. Acad. Sci. 1060, 202-209 (2005).
3. Kraus, N. & Chandrasekaran, B. Nat. Rev. Neurosci. 11, 599-605 (2010).
4. Gaser, C. & Schlaug, G. J. Neurosci. 23, 9240-9245 (2003).
5. Patel, A. D. Music, Language, and the Brain (Oxford University Press, New York, 2008).
6. Deutsch, D., Henthorn, T., Marvin, E. & Xu, H.-S. J. Acoust. Soc. Am. 119, 719-722 (2006).
7. Schellenberg, E. G. J. Educ. Psychol. 98, 457-468 (2006).
8. Blacking, J. How Musical Is Man? (Faber & Faber, London, 1976).

Monday, July 19, 2010

Organic nightmares


How do you make and use a Grignard reagent? This isn’t a question that has generally kept me awake at night. But last night it gave me nightmares. As a rule my ‘exam anxiety’ dreams, three decades after the event, feature maths: I find, days before the exam, that I have done none of the coursework or required reading, and am clueless about all of the mathematical methods on which I’m about to be grilled. Now it seems I may be about to transfer my disturbance to organic synthesis. Last night I was even at the stage of sitting in the exam hall waiting to be told to open the test paper, when I realised that I could recall not one of the countless details of reagents, methods and strategies involved in the Grignard reaction in particular (yes, alkylmagnesium, I know that much) or the aldol reaction, Claisen rearrangement, Friedel-Crafts acylation and all the rest. Now, I know this is nothing shameful even for someone who writes about chemistry for a living – as I say, it was three decades ago that I learnt this stuff, and if I want to know the details now then I can look them up, right? And my memory of the Diels-Alder reaction, about which I’ve written relatively recently, remains sufficiently fleshed-out to reassure me that the decay constant of my mind is at least still measured in years rather than days. All the same, it is sobering bordering on scary to realise that (i) I did once have to memorize all this stuff, and (ii) I did so. Organic synthesis can achieve tremendous elegance, and is not devoid of general principles; but this dream reminds me that it is nonetheless perhaps the closest chemistry comes to becoming a list of bald, unforgiving facts.

Oh God, and the truly scary thing is that I just looked up Grignard reagents on Wikipedia, and… and there was a carbonyl group lurking in my dream too. I think I’d have been more reassured to know that I was just improvising than that a fragment of that grim scheme has stayed lodged in my cortex.

Wednesday, July 14, 2010

Who should pay for the police?


I have a Muse piece on Nature News about a forthcoming paper in Nature on cooperation and punishment in game theory, by Karl Sigmund and colleagues. It’s quite closely related to recent work by Dirk Helbing, also discussed briefly below. There are many interesting aspects to Dirk’s papers, which I can’t touch on here – not least, the fact that the outcomes of these games can be dependent on the spatial configuration of the players. Here is the pre-edited article.

***********************************************************************

The punishment of anti-social behaviour seems necessary for a stable society. But how should it be policed, and how severe should it be? Game theory offers some answers.

The fundamental axis of political thought in democratic nations could be said to refer to the ‘size’ of government. How much or how little should the state interfere in our lives? At one end of the axis sits political philosopher Thomas Hobbes, whose state is so authoritarian – an absolute monarchy – that it barely qualifies as a democracy at all once the ruler is elected. At the other extreme we have Peter Kropotkin, the Russian revolutionary anarchist who argued in Mutual Aid (1902) that people can organize themselves harmoniously without any government at all.

At least, that’s one view. What’s curious is that both extremes of this spectrum can be viewed as either politically right- or left-wing. Hobbes’ domineering state could equally be Stalin’s, while the armed, vigilante world of extreme US libertarianism (and Glenn Beck) looks more like the brutal ‘State of Nature’ that Hobbes feared – everyone for themselves – than Kropotkin’s cosy commune.

But which works best? I’m prepared to guess that most Nature readers, being benign moderates, will cluster around the middle ground defined by John Stuart Mill, who argued that government is needed to maintain social stability, but should intrude only to the extent of preventing individuals from harming others. Laws and police forces, in this view, exist to ensure that you don’t pillage and murder, not to ensure that you have moral thoughts.

If only it were that simple. The trouble is that ‘harming others’ is a slippery concept, illustrated most profoundly by the problem of the ‘commons’. If you drop litter, if you don’t pay your taxes, if you tip your sewage into the river, it’s hard to pinpoint how, or whom, your actions ‘harm’, if anyone – but if we all do it, society suffers. So laws and penal codes must not only prevent or punish obvious crimes like murder, but also discourage free-riders who cheat on the mechanisms that promote social order.

How much to punish, though, and how to implement it? If you steal, should you temporarily lose your liberty, or permanently lose your hand? And what works best in promoting cooperative behaviour: the peer pressure of social ostracism, or the state pressure of police arrest?

Experiments in behavioural economics, in particular ‘public goods games’ where participants seek to maximize their rewards through competition or cooperation, have shown that people care about punishment to an ‘irrational’ degree [1]. Say, for example, players are asked to put some of their money into a collective pot, which will then be multiplied and divided among the players. The more you all put in, the better the payoff. But if one person doesn’t contribute, they still get the reward – so there’s a temptation to free-ride.

If players are allowed to fine free-riders, but at a cost to themselves, they will generally do it even if they make a loss: they care more about fairness than profit. Now, however, the problem is that there’s a second-order temptation to free-ride: you contribute to the pot but leave others to shoulder the cost of sanctioning the cheaters who don’t. There’s an infinite regress of opportunities to free-ride, which can eventually undermine cooperation.
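The arithmetic of that regress is easy to lay out in a toy one-round version of the game (the parameter values here are my own, purely for illustration – not those of the experiments in ref. [1]):

def payoffs(contributes, punishes, r=1.6, pot=10, fine=3, cost=1):
    """One round of a public-goods game with peer punishment."""
    n = len(contributes)
    common = r * pot * sum(contributes) / n      # everyone's share of the pot
    n_punishers = sum(punishes)
    n_defectors = n - sum(contributes)
    pay = []
    for i in range(n):
        p = common - (pot if contributes[i] else 0)
        if not contributes[i]:
            p -= fine * n_punishers              # fined once by each punisher
        if punishes[i]:
            p -= cost * n_defectors              # punishing costs the punisher too
        pay.append(p)
    return pay

# two punishing cooperators, one non-punishing cooperator, one defector
print(payoffs([True, True, True, False], [True, True, False, False]))
# -> [1.0, 1.0, 2.0, 6.0]

The defector still comes off best, and the cooperator who leaves punishment to others (payoff 2) beats the punishers (payoff 1): the second-order free-rider problem in miniature.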

But what if the players can share the cost of punishment by contributing to a pool in advance – equivalent, say, to paying for a police force and penal service? This decreases the overall profits – it costs society – because the ‘punishment pool’ is wasted if no one actually cheats. Yet in a new paper in Nature [2], game theorist Karl Sigmund of the University of Vienna and his colleagues show in a computer model that pool punishment can nevertheless evolve as the preferred option over peer punishment as a way of policing the game and promoting cooperation: a preference, you might say, for a state police force as opposed to vigilante justice. This arrangement is, however, self-organized à la Kropotkin, not imposed from the top down à la Hobbes: pool punishment simply emerges as the most successful (that is, the most stable) strategy.
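Pool punishment changes the accounting: the fee is paid up front, cheats or no cheats. A variant of the same toy game makes the trade-off visible (again with my own illustrative numbers, not the actual model of ref. [2]):

def pool_payoffs(contributes, pool_payers, r=1.6, pot=10, fee=2, fine=4):
    """One round with a pre-paid punishment pool instead of peer punishment."""
    n = len(contributes)
    common = r * pot * sum(contributes) / n
    n_payers = sum(pool_payers)
    pay = []
    for i in range(n):
        p = common - (pot if contributes[i] else 0)
        if pool_payers[i]:
            p -= fee                   # paid whether or not anyone cheats
        if not contributes[i]:
            p -= fine * n_payers       # the pool fines each defector
        pay.append(p)
    return pay

# with universal cooperation the pool fee is pure overhead: the price of a police force
print(pool_payoffs([True] * 4, [True, True, False, False]))  # -> [4.0, 4.0, 6.0, 6.0]

The surprise of Sigmund’s simulations is that this standing overhead can nonetheless be the winning, stable strategy.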

Of course, we know that what often distinguishes these things in real life is that state-sponsored policing is more moderate and less arbitrary or emotion-led than vigilante retribution. That highlights another axis of political opinion: are extreme punishments more effective at suppressing defection than less severe ones? A related modelling study of public-goods games by Dirk Helbing of ETH Zürich and his coworkers, soon to be published in the New Journal of Physics [3] and elaborated in another recent paper [4], suggests that the level of cooperation may depend on the strength of punishment in subtle, non-intuitive ways. For example, above a critical punishment (fine) threshold, cooperators who punish can gain strength by sticking together, eventually crowding out both defectors and non-punishing cooperators (second-order free riders). But if punishment is carried out not by cooperators but by other defectors, too high a fine is counterproductive and reduces cooperation. Cooperation can also be created by an ‘unholy alliance’ of cooperators and defectors who both punish.

Why would defectors punish other defectors? This behaviour sounds bizarre, but is well documented experimentally [5], and familiar in real life: there are both hypocritical ‘punishing defectors’ (think of TV evangelists whose condemnation of sexual misdemeanours ignores their own) and ‘sincere’ ones, who deplore certain types of cheating while practising others.

One of the most important lessons of these game-theory models in recent years is that the outcomes are not necessarily permanent or absolute. What most people (perhaps even Glenn Beck) want is a society in which people cooperate. But different strategies for promoting this have different vulnerabilities to an invasion of defectors. And strategies evolve: prolonged cooperation might erode a belief in the need for (costly) policing, opening the way for a defector take-over. Which is perhaps to say that public policy should be informed but not determined by computer models. As Stephen Jay Gould said, ‘There are no shortcuts to moral insight’ [6].

References 
[1] Fehr, E. & Gächter, S. Am. Econ. Rev. 90, 980-994 (2000).
[2] Sigmund, K., De Silva, H., Traulsen, A. & Hauert, C. Nature doi:10.1038/nature09203 (2010).
[3] Helbing, D., Szolnoki, A., Perc, M. & Szabó, G. New J. Phys. (in press); see http://arxiv.org/abs/1007.0431 (2010).
[4] Helbing, D., Szolnoki, A., Perc, M. & Szabó, G. PLoS Comput. Biol. 6(4), e1000758 (2010).
[5] Shinada, M., Yamagishi, T. & Omura, Y. Evol. Hum. Behav. 25, 379-393 (2004).
[6] Gould, S. J. Natural History 106 (6), 12-21 (1997).

The music of chemistry


My latest Crucible column for Chemistry World (July) is mostly non-technical enough to put up here. This is the pre-edited version.

************************************************************************

The English composer Edward Elgar is said to have boasted that the first of his Pomp and Circumstance Marches had a tune that would ‘knock ‘em flat’. But on another occasion he inadvertently found a way to achieve that effect rather too literally. Elgar was an enthusiastic amateur chemist, and fitted up his home in Hereford with a laboratory which he called The Ark. His friend, the conductor and composer William Henry Reed, tells how Elgar delighted in making a ‘phosphoric concoction’ which would explode spontaneously when dry – possibly Armstrong’s mixture, red phosphorus and potassium chlorate, used in toy cap guns. One day, Reed says, Elgar made a batch of the stuff but then musical inspiration struck. He put the mixture into a metal basin and dumped it in the water butt before returning to the house.

‘Just as he was getting on famously,’ wrote Reed, ‘writing in horn and trumpet parts, and mapping out wood-wind, a sudden and unexpected crash, as of all the percussion in all the orchestras on earth, shook the room… The water-butt had blown up: the hoops were rent: the staves flew in all directions; and the liberated water went down the drive in a solid wall. Silence reigned for a few seconds. Then all the dogs in Herefordshire gave tongue.’

Schoolboy pranks were not, however, the limit of Elgar’s contribution to chemistry. He took his hobby seriously enough to invent a device for synthesizing hydrogen sulphide, which was patented and briefly manufactured as the Elgar Sulphuretted Hydrogen Apparatus. Elgar’s godson claimed that the device was ‘in regular use in Herefordshire, Worcestershire and elsewhere for many years’.

Elgar is one of a small, select band of individuals who made recognized contributions to both chemistry and music [1,2] (although chemists who are also musicians are legion). Georges Urbain, best known as the discoverer of the element lutetium, was also a noted pianist and composer. Eighteenth-century musician and composer George Berg conducted extensive experiments in the chemistry of glass-making. But the most famous representative of the genre is Aleksandr Borodin, whose name is still familiar to chemists and musicians alike. As one of the Five, the group of Russian composers that included Mussorgsky and Rimsky-Korsakov, Borodin created a musical idiom every bit as characteristically Russian as Elgar’s was English.

As historian of chemistry Michael Gordin says of Borodin, ‘it is the fascination of this hybrid figure that has drawn a great deal of attention to the man, mostly focusing on whether there was some sort of “conflict” between his music and his science’ [3]. There is good reason to suspect that the conflict was felt by Borodin himself, who seems to have stood accused by both chemists and musicians of spending too long on ‘the other side’. ‘You waste too much time thinking about music’, his professor Nikolai Zinin told him. ‘A man cannot serve two masters.’ Meanwhile, Borodin complained in a letter that ‘Our musicians never stop abusing me. They say I never do anything, and won’t drop my idiotic activities, that is to say, my work at the laboratory.’

Rimsky-Korsakov portrayed his friend as literally rushing between his two passions, trying to keep both balls in the air. ‘When I went to see him’, he wrote, ‘I would often find him at work in the laboratory next door to his flat. When he had finished what he was doing, he would come back with me to his flat, and we would play together or talk. Right in the middle he would jump up and rush back into the laboratory to make sure nothing had burnt or boiled over, all the while making the corridor echo with incredible sequences of successive ninths or sevenths’ [4].

Such anecdotes titillate our curiosity not just about whether a person can ‘serve two [intellectual] masters’ but whether each might fertilize or inhibit the other. Yet there is little evidence that scientific knowledge does much more for artists (or vice versa) than supply novel sources of metaphor and plot: the science literacy evident in, say, the novels of Vladimir Nabokov (a lepidopterist) or Thomas Pynchon (an engineer) is a rare joy to the scientist, but one imagines they would have been great writers in any event. And the ‘science’ in Goethe’s works might have been better omitted.

The enduring appeal of these questions has in Gordin’s view unduly elevated Borodin’s chemical reputation. He has been credited with discovering the so-called Hunsdiecker reaction, the decarboxylation of silver salts of carboxylic acids with bromine (sometimes called the Borodin reaction) [5], and most importantly with the aldol reaction, the conversion of an aldehyde into a β-hydroxy aldehyde, which forms a new carbon-carbon bond. The latter is often presented as Borodin’s discovery which was ‘stolen’ by the French chemist Charles-Adolphe Wurtz, whereas Gordin shows that in fact Wurtz got there first and that Borodin conceded as much.

Borodin’s priority claim was inflated, Gordin says, because of the desire to cast him as polymathic musician-chemist. ‘His chemistry is at best historically interesting’, says Gordin, ‘but not outstandingly so.’ Perhaps this determination to make Borodin ‘special’ in the end does more harm than good to the notion that ordinary scientists can be interested in more than just science.

1. L. May, Bull. Hist. Chem. 33, 35-43 (2008).
2. S. Alvarez, New J. Chem. 32, 571-580 (2008).
3. M. D. Gordin, J. Chem. Educ. 83, 561-565 (2006).
4. Quoted in A. Bradbury, Aldeburgh Festival programme booklet 2010, p.30-33.
5. See E. J. Behrman, J. Chem. Educ. 83, 1138 (2006).