Friday, April 30, 2010

A supercomputing crystal ball

Here's a little piece I've just written for Nature's news blog The Great Beyond.


The good news is that your future can be predicted. The bad news is that it’ll cost a billion euros. That, at least, is what a team of scientists led by Dirk Helbing of the ETH in Switzerland believes. And as they point out, a billion euros is small fare compared with the bill for the current financial crisis – which might conceivably have been anticipated with the massive social-science simulations they want to establish.

This might seem the least auspicious moment to start placing faith in economic modelling, but Helbing’s team proposes to transform the way it is done. They will abandon the discredited and doctrinaire old models in favour of ones built from the bottom up, which harness the latest understanding of how people behave and act collectively rather than reducing the economic world to caricature for the sake of mathematical convenience.

And it is not just about the economy, stupid. The FuturICT ‘knowledge accelerator’, the proposal for which has just been submitted to the European Commission’s Flagship Initiatives scheme (which seeks to fund visionary research), would address a wide range of environmental, technological and social issues using supercomputer simulations developed by an interdisciplinary team. The overarching aim is to provide systematic, rational and evidence-based guidance to governmental and international policy-making, free from the ideological biases and wishful thinking typical of current strategies.

Helbing’s confidence in such an approach has been bolstered by his and others’ success in modelling social phenomena ranging from traffic flow in cities to the dynamics of industrial production. Modern computer power makes it possible to simulate such systems using ‘agent-based models’ that look for large-scale patterns and regularities emerging from the interaction of large numbers of individual agents.
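To give a flavour of what ‘patterns emerging from the interaction of individual agents’ means in practice, here is a toy traffic simulation in the spirit of the classic Nagel-Schreckenberg model. It is nothing like the scale or sophistication of the models Helbing has in mind, and every number in it is an arbitrary illustrative choice, but it shows the essential trick: each simulated driver follows a few simple local rules, and traffic jams appear spontaneously even though no rule anywhere mentions a jam.

```python
# Toy agent-based traffic model in the Nagel-Schreckenberg spirit. Offered only
# as an illustration of emergence from simple agent rules; it is NOT one of the
# FuturICT models, and all parameters below are arbitrary illustrative choices.
import random

ROAD_LENGTH = 100    # cells on a circular road
N_CARS = 30          # number of driver 'agents'
V_MAX = 5            # speed limit (cells per time step)
P_DAWDLE = 0.3       # chance a driver randomly slows down
STEPS = 50

# Each agent is [position, speed]; start them evenly spaced and stationary.
cars = [[i * (ROAD_LENGTH // N_CARS), 0] for i in range(N_CARS)]

for step in range(STEPS):
    cars.sort(key=lambda c: c[0])
    # Distance to the car ahead, measured around the ring.
    gaps = [(cars[(i + 1) % N_CARS][0] - cars[i][0]) % ROAD_LENGTH
            for i in range(N_CARS)]
    for i, (pos, v) in enumerate(cars):
        v = min(v + 1, V_MAX)            # try to accelerate
        v = min(v, gaps[i] - 1)          # brake so as not to hit the car ahead
        if v > 0 and random.random() < P_DAWDLE:
            v -= 1                       # random human hesitation
        cars[i] = [(pos + v) % ROAD_LENGTH, v]
    # Jams (clusters of stationary cars) emerge with no 'jam' rule anywhere.
    print(step, sum(1 for _, v in cars if v == 0), 'stopped cars')
```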

The FuturICT proposal includes the establishment of ‘Crisis Observatories’ that might identify impending problems such as financial crashes, wars and social unrest, disease epidemics, and environmental crises. It would draw on expertise in fields ranging from engineering, law, anthropology and geosciences to physics and mathematics. Crisis Observatories could be operational by 2016, the FuturICT team says, and by 2022 the programme would incorporate a Living Earth Simulator that couples human social and political activity to the dynamics of the natural planet.

Sceptics may dismiss the idea as a hubristic folly that exaggerates our ability to understand the world we have created. But when we compare the price tag to the money we devote to getting a few humans outside our atmosphere, it could be a far greater folly not to give the idea a chance.

Monday, April 26, 2010

Big quantum


Here’s a little piece I wrote for Prospect, who deemed in the end that it was too hard for their readers. But I am sure it is not, dear blogspotter, too hard for you.


If you think quantum physics is hard to understand, you’re probably confusing understanding with intuition. Don’t assume, as you fret over the notion that a quantum object can be in two places at once, that you’re simply too dumb to get your mind around it. Nobody can, not even the biggest brains in physics. The difference between quantum physicists and the rest of us is that they’ve elected to just accept the weirdness and get on with the maths – as physicist David Mermin puts it, to ‘shut up and calculate.’

But this pragmatic view is losing its appeal. Physicists are no longer satisfied with quantum theory’s supreme ability to predict how stuff behaves at very small scales, and are following the lead of its original architects, such as Bohr, Heisenberg and Einstein, in demanding to know what it means. As Lucien Hardy and Robert Spekkens of the high-powered Perimeter Institute in Canada wrote recently, ‘quantum theory is very mysterious and counterintuitive and surprising and it seems to defy us to understand it. And so we take up the challenge.’

This is something of an act of faith, because it isn’t obvious that our minds, having evolved in a world of classical physics where objects have well-defined positions and velocities, can ever truly conceptualize the quantum world where, apparently, they do not. That difference, however, is part of the problem. If the microscopic world is quantum, why doesn’t everything behave that way? Where, once we reach the human scale, has the weirdness gone?

Physicists talk blithely about this happening in a ‘quantum-to-classical transition’, which they generally locate somewhere between the size of large molecules and of living cells – between perhaps a billionth and a millionth of a metre (a nanometre and a micrometre). We can observe subatomic particles obeying quantum rules – that was first done in 1927, when electrons were seen acting like interfering waves – but we can’t detect quantumness in objects big enough to see with the naked eye.

Erwin Schrödinger tried to force this issue by placing the microcosm and the macrocosm in direct contact. In his famous thought experiment, the fate of a hypothetical cat depended on the decay of a radioactive atom, dictated by quantum theory. Because quantum objects can be in a ‘superposition’ of two different states at once, this seemed to imply that the cat could be both alive and dead. Or at least, it could until we looked, for the ‘Copenhagen’ interpretation of quantum theory proposed by Bohr and Heisenberg insists that superpositions are too delicate to survive observation: when we look, they collapse into one state or the other.

The consensus is now that the cross-over from quantum to classical rules involves a process called decoherence, in which delicate quantum states get blurred by interacting with their teeming, noisy environment. An act of measurement using human-scale instruments therefore induces decoherence. According to one view, decoherence imprints a restricted amount of information about the state of the quantum object on its environment, such as the dials of our measuring instruments; the rest is lost forever. Physicist Wojciech Zurek thinks that the properties we measure this way are just those that can most reliably imprint ‘copies’ of the relevant information about the system under inspection. What we measure, then, are the ‘fittest’ states – which is why Zurek calls the idea quantum Darwinism. It has the rather remarkable corollary that the imprinted copies can be ‘used up’, so that repeated measurements will eventually stop giving the same result: measurement changes the outcome.
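For the numerically inclined, here is the barest possible sketch of what decoherence does to a quantum state. It is the textbook ‘dephasing’ picture of a single two-state system, not a model of Zurek’s quantum Darwinism or of any real experiment, and the decay rate is an arbitrary number; the point is simply that interaction with the environment wipes out the off-diagonal terms that encode the superposition, while leaving the ordinary probabilities untouched.

```python
# Minimal illustration of decoherence (pure dephasing) for a single two-state
# quantum system. The dephasing rate GAMMA is an arbitrary illustrative value.
import numpy as np

# Equal superposition (|0> + |1>)/sqrt(2), written as a density matrix.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

GAMMA = 1.0          # dephasing rate, arbitrary units
for time in [0.0, 0.5, 1.0, 2.0, 5.0]:
    decay = np.exp(-GAMMA * time)
    rho_t = rho.copy()
    rho_t[0, 1] *= decay   # the off-diagonal terms carry the 'quantumness'...
    rho_t[1, 0] *= decay   # ...and are what the environment destroys
    print(f"t = {time:3.1f}  coherence = {abs(rho_t[0, 1]):.3f}  "
          f"populations = {rho_t[0, 0]:.2f}, {rho_t[1, 1]:.2f}")
# The populations (diagonal entries) survive; the superposition character does
# not, which is why large, warm objects end up looking classical.
```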

These are more than just esoteric speculations. Impending practical applications, for example quantum cryptography for encoding optical data securely, or super-fast quantum computers that perform vast numbers of calculations in parallel, depend on preserving quantum superpositions by avoiding decoherence. That’s one reason for the current excitement about experiments that probe the contested ‘middle ground’ between the unambiguously quantum and classical worlds, at scales of tens of nanometres.

Andrew Cleland and coworkers at the University of California at Santa Barbara have now achieved a long-sought goal in this arena: to place a manufactured mechanical device, big enough to see sharply in the electron microscope, in a quantum superposition of states. They made a ‘nanomechanical resonator’ – a strip of metal and ceramic almost a micrometre thick and about 30 micrometres long, fixed at one end like the reed of a harmonica – and cooled it to within 25 thousandths of a degree of absolute zero. The strip is small enough that, when cold, its vibrations follow quantum rules, which means that they can only have particular frequencies and energies (heat would otherwise wash out this discreteness). The researchers used a superconducting electrical circuit to induce vibrations, and they report in Nature that they could put the strip into a superposition of two states – in effect, as if it were both vibrating and not vibrating at the same time.
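A back-of-envelope sum shows why such ferocious refrigeration is needed. The vibrations behave quantum-mechanically only once the average number of randomly excited vibration quanta, given by the Bose-Einstein formula n = 1/(exp(hf/kT) - 1), falls well below one. The few-gigahertz resonance frequency assumed below is my own illustrative guess rather than a number quoted above, but it is the right ballpark for a strip of these dimensions:

```python
# Mean number of thermal vibration quanta in a resonator at temperature T.
# The ~6 GHz frequency is an assumed, illustrative value (not quoted in the
# piece above); the 25 mK figure is the temperature reached in the experiment.
import math

h = 6.626e-34        # Planck's constant, J s
k = 1.381e-23        # Boltzmann's constant, J/K
f = 6.0e9            # assumed resonance frequency, Hz

for T in [300.0, 4.0, 0.1, 0.025]:          # room temperature down to 25 mK
    n = 1.0 / (math.exp(h * f / (k * T)) - 1.0)
    print(f"T = {T:7.3f} K   mean thermal quanta = {n:.3g}")
# At room temperature the strip carries roughly a thousand quanta of random
# thermal vibration; at 25 mK the average drops to around 10^-5, so the quantum
# ground state and single-quantum superpositions become accessible.
```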

Sadly, these vibrations are too small for us to truly ‘see’ what an object looks like when it is both moving and not moving. But even more dramatic incursions of quantum oddness might soon be in store. Last year a team of European scientists outlined a proposal to create a real Schrödinger’s cat, substituting an organism small enough to stand on the verge of the quantum world: a virus. They suggested that a single virus suspended by laser beams could be put into a superposition of moving and stationary states. Conceivably, they said, this could even be done with tiny, legged animals called tardigrades or ‘water bears’, a few tenths of a millimetre long. If some way could be devised to link the organism’s motion to its biological behaviour, what then would it do while simultaneously moving and still? Nobody really knows.

Wednesday, April 21, 2010

Peter's patterns

I have a little piece on the BBC Focus site about the work of sculptor Peter Randall-Page, with whom I had the pleasure of discussing pattern formation and much else at Yorkshire Sculpture Park last month. I will put an extended version of this piece on my web site shortly (under ‘Patterns’), in which there are lots more stunning pictures of Peter’s work and natural patterns.

Friday, April 09, 2010

The right formula


Message to a heedless world: Please remember that the O in the formula H2O is a capital O meaning oxygen, not a zero meaning zero. Water is composed of hydrogen and oxygen, not hydrogen and nothing.

Heedless world replies: Get a life, man.

Heedless world continues (after some thought): How do you know the difference anyway?

Me: Zeros are narrower.

Heedless world: This is truly sad.

Tuesday, April 06, 2010

An uncertainty principle for economists?


Here’s the pre-edited version of my latest Muse for Nature News. The paper I discuss here is very long but also very ambitious, and well worth a read.
**********************************************************************
Bad risk management contributed to the current financial crisis. Two economists believe the situation could be improved by gaining a deeper understanding of what is not known.

Donald Rumsfeld is an unlikely prophet of risk analysis, but that may be how posterity will anoint him. His remark about ‘unknown unknowns’ was derided at the time as a piece of meaningless obfuscation, but more careful reflection suggests he had a point. It is one thing to recognize the gaps and uncertainties in our knowledge of a situation, another to acknowledge that entirely unforeseen circumstances might utterly change the picture. (Whether you subscribe to Rumsfeld’s view that the challenges in managing post-invasion Iraq were unforeseeable is another matter.)

Contemporary economics can’t handle the unknown unknowns – or more precisely, it confuses them with known unknowns. Financial speculation is risky by definition, yet the danger is not that the risks exist, but that the highly developed calculus of risk in economic theory – some of which has won Nobel prizes – gives the impression that they are under control.

The reasons for the current financial crisis have been picked over endlessly, but one common view is that it involved a failure in risk management. It is the models for handling risk that Nobel laureate economist Joseph Stiglitz seemed to have in mind when he remarked in 2008 that ‘Many of the problems our economy faces are the result of the use of misguided models. Unfortunately, too many [economic policy-makers] took the overly simplistic models of courses in the principles of economics (which typically assume perfect information) and assumed they could use them as a basis for economic policy’ [1].

Facing up to these failures could prompt the bleak conclusion that we know nothing. That’s the position taken by Nassim Nicholas Taleb in his influential book The Black Swan [2], which argues that big disruptions in the economy can never be foreseen, and yet are not anything like as rare as conventional theory would have us believe.

But in a preprint on arXiv, Andrew Lo and Mark Mueller of MIT’s Sloan School of Management offer another view [3]. They say that what we need is a proper taxonomy of risk – not unlike, as it turns out, Rumsfeld’s infamous classification. In this way, they say, we can unite risk assessment in economics with the way uncertainties are handled in the natural sciences.

The current approach to uncertainty in economics, say Lo and Mueller, suffers from physics envy. ‘The quantitative aspirations of economists and financial analysts have for many years been based on the belief that it should be possible to build models of economic systems – and financial markets in particular – that are as predictive as those in physics,’ they point out.

Much of the foundational work in modern economics took its lead explicitly from physics. One of its principal architects, Paul Samuelson, has admitted that his seminal book Foundations of Economic Analysis [4] was inspired by the work of mathematical physicist Edwin Bidwell Wilson, a protégé of the pioneer of statistical physics Willard Gibbs.

Physicists were by then used to handling the uncertainties of thermal noise and Brownian motion, which create a Gaussian or normal distribution of fluctuations. The theory of Brownian random walks was in fact first developed by the mathematician Louis Bachelier in 1900 to describe fluctuations in economic prices.

Economists have known since the 1960s that these fluctuations don’t in fact fit a Gaussian distribution at all, but are ‘fat-tailed’, with a greater proportion of large-amplitude excursions. But many standard theories have failed to accommodate this, most notably the celebrated Black-Scholes formula used to calculate options pricing, which is mathematically equivalent to the ‘heat equation’ in physics.
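To see what those fat tails mean in practice, consider how often a daily price move of five standard deviations should turn up. The little comparison below uses a Student-t distribution with three degrees of freedom as a stand-in for fat-tailed market returns; that particular choice is a conventional illustration of mine, not anything taken from Lo and Mueller’s paper.

```python
# How rare is a 5-sigma daily move under a Gaussian versus a fat-tailed model?
# The Student-t with 3 degrees of freedom is an illustrative stand-in for
# fat-tailed returns, rescaled so that both distributions have unit variance.
import math
from scipy.stats import norm, t

SIGMAS = 5.0
DF = 3                                   # small df means heavy tails
scale = math.sqrt(DF / (DF - 2))         # rescale the t to unit variance

p_gauss = norm.sf(SIGMAS)                # P(move > 5 sigma), Gaussian
p_fat = t.sf(SIGMAS * scale, DF)         # same probability, fat-tailed

DAYS_PER_YEAR = 250
print(f"Gaussian:   a 5-sigma day about once every "
      f"{1/(p_gauss*DAYS_PER_YEAR):,.0f} years")
print(f"Fat-tailed: about once every {1/(p_fat*DAYS_PER_YEAR):,.1f} years")
# The Gaussian predicts such a move roughly once in 14,000 years of trading;
# the fat-tailed alternative roughly once every couple of years, which is far
# closer to what markets have historically delivered.
```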

But incorrect statistical handling of economic fluctuations is a minor issue compared with the failure of practitioners to distinguish fluctuations that are in principle modellable from those that are more qualitative – to distinguish, as Lo and Mueller put it, trading decisions (which need maths) from business decisions (which need experience and intuition).

The conventional view of economic fluctuations – that they are due to ‘external’ shocks to the market, delivered for example by political events and decisions – has some truth in it. And these external factors can’t be meaningfully factored into the equations as yet. As the authors say, from July to October 2008, in the face of increasingly negative prospects for the financial industry, the US Securities and Exchange Commission intervened to impose restrictions on certain companies in the financial services sector. ‘This unanticipated reaction by the government’, say Lo and Mueller, ‘is an example of irreducible uncertainty that cannot be modeled quantitatively, yet has substantial impact on the risks and rewards of quantitative strategies.’

They propose a five-tiered categorization of uncertainty, from the complete certainty of Newtonian mechanics, through noisy systems and those that we are forced to describe statistically because of incomplete knowledge about deterministic processes (as in coin tossing), to ‘irreducible uncertainty’, which they describe as ‘a state of total ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter.’

The authors think that this is more than just an enumeration of categories, because it provides a framework for how to think about uncertainties. ‘It is possible to “believe” a model at one level of the hierarchy but not at another’, they say. And they sketch out ideas for handling some of the more challenging unknowns, as for example when qualitatively different models may apply to the data at different times.

‘By acknowledging that financial challenges cannot always be resolved with more sophisticated mathematics, and incorporating fear and greed into models and risk-management protocols explicitly rather than assuming them away’, Lo and Mueller say, ‘we believe that the financial models of the future will be considerably more successful, even if less mathematically elegant and tractable.’

They call for more support of post-graduate economic training to create a cadre of better informed practitioners, more alert to the limitations of the models. That would help; but if we want to eliminate the ruinous false confidence engendered by the clever, physics-aping maths of economic theory, why not make it standard practice to teach everyone who studies economics at any level that these models of risk and uncertainty apply only to specific and highly restricted varieties of it?

References
1. Stiglitz, J. New Statesman, 16 October 2008.
2. Taleb, N. N. The Black Swan (Allen Lane, London, 2007).
3. Lo, A. W. & Mueller, M. T. Preprint http://www.arxiv.org/abs/1003.2688.
4. Samuelson, P. A. Foundations of Economic Analysis (Harvard University Press, Cambridge, 1947).

Thursday, April 01, 2010

Bursting the genomics bubble


Here’s the pre-edited version of a Muse that’s just gone up on Nature News. There’s a bunch of interesting Human Genome Project-related stuff on the Nature site to mark the 10th anniversary of the first draft of the genome (see here and here and here, as well as comments from Francis Collins and Craig Venter). Some is celebratory, some more thoughtful. Collins considers his predictions to have been vindicated – with the exception that ‘The consequences for clinical medicine have thus far been modest’. Now, did you get the sense at the time that it was precisely the potential for advancing clinical medicine that was the HGP’s main selling point? Venter is more realistic, saying ‘Phenotypes — the next hurdle — present a much greater challenge than genotypes because of the complexity of human biological and clinical information. The experiments that will change medicine, revealing the relationship between human genetic variation and biological outcomes such as physiology and disease, will require the complete genomes of tens of thousands of humans together with comprehensive digitized phenotype data.’ Hmm… not quite what the message was at the time, although in fairness Craig was not really one of those responsible for it.

*********************************************************************
The Human Genome Project attracted investment beyond what a rational analysis would have predicted. There are pros and cons to that.

If you were a venture capitalist who had invested in the sequencing of the human genome, what would you now have to show for it? For scientists, the database of the Human Genome Project (HGP) may eventually serve as the foundation of tomorrow’s medicine, in which drugs will be tailored personally to your own genomic constitution. But for a return on the bucks you invested in this grand scheme, you want medical innovations here and now, not decades down the line. Ten years after the project’s first draft was unveiled, there’s not much sign of them.

A team of researchers in Switzerland now argue in a new preprint [1] that the HGP was an example of a ‘social bubble’, analogous to the notorious economic bubbles in which investment far outstrips any rational cost-benefit analysis of the likely returns. Monika Gisler, Didier Sornette and Ryan Woodard of ETH in Zürich say that ‘enthusiastic supporters of the HGP weaved a network of reinforcing feedbacks that led to a widespread endorsement and extraordinary commitment by those involved in the project.’

Some scientists have already suggested that the benefits of the HGP were over-hyped [2]. Even advocates now admit that the benefits for medicine may be a long time coming, and will require further advances in understanding, not just the patience to sort through all the data.

This stands in contrast to some of the claims made while the HGP was underway between 1990 and 2003. In 1999 the International Human Genome Sequencing Consortium (IHGSC) leader Francis Collins claimed that the understanding gained by the sequencing effort would ‘eventually allow clinicians to subclassify diseases and adapt therapies to the individual patient’ [3]. That might happen one day, but we’re still missing fundamental understanding of how even diseases with a known heritable risk are related to the makeup of our genomes [4]. Collins’ portrait of a patient who, in 2010, is prescribed ‘a prophylactic drug regimen based on the knowledge of [his] personal genetic data’ is not yet on the horizon. And going from knowledge of the gene to a viable therapy has proved immensely challenging even for a single-gene disease as thoroughly characterized as cystic fibrosis [5]. Collins’ claim, shortly after the unveiling of the first draft of the human genome in June 2000, that ‘new gene-based ‘designer drugs’ will be introduced to the market for diabetes mellitus, hypertension, mental illness and many other conditions’ [6] no longer seems a foregone conclusion, let alone a straightforward extension of the knowledge of all 25,000 or so genes in the human genome.

This does not, in the analysis of Gisler and colleagues, mean that the HGP was money poorly spent. Some of the benefits are already tangible, such as much faster and cheaper sequencing techniques; others may follow eventually. The researchers are more interested in the issue of how, if the HGP was such a long-term investment, it came to be funded at all. Their answer invokes the notion of bubbles borrowed from the economic literature, which Sornette has previously suggested [7] as a driver of other technical innovations such as the mid-nineteenth-century railway boom and the explosive growth of information technology at the end of the twentieth century. In economics, bubbles seem to be an expression of what John Maynard Keynes called ‘animal spirits’, whereby the instability stems from ‘the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectations’ [8]. In economics such bubbles can end in disastrous speculation and financial ruin, but in technology they can be useful, creating long-lasting innovations and infrastructures that would have been deemed too risky a venture under the cold glare of reason’s spotlight.

For this reason, Gisler and colleagues say, it is well worth understanding how such bubbles occur, for this might show governments how to catalyse long-term thinking that is typically (and increasingly) absent from their own investment strategies and those of the private sector. In the case of the HGP, the researchers argue, the controversial competition between the public IHGSC project and the private enterprise conducted by the biotech firm Celera Genomics worked to the advantage of both, creating a sense of anticipation and hope that expanded the ‘social bubble’, as well as ultimately reducing the cost of the research by engaging market mechanisms.

To that extent, the ‘exuberant innovation’ that social bubbles can engender seems a good thing. But it’s possible that the HGP will never really deliver economically or medically on such massive investment. Worse, the hype might have incubated a harmful rash of genetic determinism. As Gisler and colleagues point out, other ‘omics’ programmes are underway, including an expensively funded NIH initiative to develop high-throughput techniques for solving protein structures. Before animal spirits transform this into the next ‘revolution in medicine’, it might be wise to ask whether the HGP has something to tell us about the wisdom of collecting huge quantities of stamps before we know anything about them.

References
1. Gisler, M., Sornette, D. & Woodard, R. Preprint http://www.arxiv.org/abs/1003.2882.
2. Roberts, L. et al., Science 291, 1195-1200 (2001).
3. Collins, F. S. New England J. Med. 28, 28-37 (1999).
4. Dermitzakis, E. T. & Clark, A. G. Science 326, 239-240 (2009).
5. Pearson, H. Nature 460, 164-169 (2009).
6. Collins, F. S. & McKusick, V. A. J. Am. Med. Assoc. 285, 540-544 (2001).
7. Sornette, D. Socio-econ. Rev. 6, 27-38 (2008).
8. Keynes, J. M., The General Theory of Employment, Interest and Money (Macmillan, London, 1936).

The Times does The Music Instinct


There are some extracts from The Music Instinct in the Eureka science supplement of the Times today, although oddly they don’t yet seem to have put it online. It’s amongst a real mash-up of stuff about the ‘science of music’, which is all kind of fun, though it’s slightly weird to find my words crash-landed there. The editors did a pretty good job, however, of plucking out bits of text and getting them into a fairly self-contained form, when they were generally part of a much longer exposition.

I notice in Eureka that Brian May, bless him, doesn’t believe in global warming. “Most of my most knowledgeable scientist friends don’t believe that global warming exists”, he says. Come on Brian, name them. Have you been chatting to the wrong Patrick Moore? (Actually, I’m not too sure if chatting to the other one would help very much.)