Saturday, January 26, 2013

Stormy weather ahead

Next up, a kind of book review for Prospect. In my experience as a footballer playing on an outdoor pitch through the winter, the three-day forecasts are actually not that bad at all.

________________________________________________________________

Isn't it strange how we like to regard weather forecasting as a uniquely incompetent science – as though this subject of vital economic and social importance can attract only the most inept researchers, armed with bungling, bogus theories?

That joke, however, is wearing thin. With Britain’s, and probably the world’s, weather becoming more variable and prone to extremes, an inaccurate forecast risks more than a soggy garden party, potentially leaving us unprepared for life-threatening floods or ruined harvests.

Perhaps this new need to take forecasting seriously will eventually win it the respect it deserves. Part of the reason we love to harp on about Michael Fish’s disastrously misplaced reassurance over the Great Storm of 1987 is that there has been no comparable failure since. As meteorologists and applied mathematicians Ian Roulstone and John Norbury point out in their account of the maths of weather prediction, Invisible in the Storm (Princeton University Press, 2013), the five-day forecast is, at least in Western Europe, now more reliable than the three-day forecast was when the Great Storm raged. There has been a steady improvement in accuracy over this period and, popular wisdom to the contrary, prediction has long been far superior to simply assuming that tomorrow’s weather will be the same as today’s.

Weather forecasting is hard, but not in the way that fundamental physics is hard. It’s not that the ideas are so abstruse, but that the basic equations are extremely tough to solve, and that lurking within them is a barrier to prediction that must defeat even the most profound mind. Weather is intrinsically unknowable more than about two weeks ahead, because it is an example of a chaotic system, in which imperceptible differences between two initial states can blossom into grossly different eventual outcomes. Indeed, it was the work of the American meteorologist Edward Lorenz in the 1960s, using a set of highly simplified equations to model patterns of atmospheric convection, that first alerted the scientific community to the notion of chaos: the inevitable divergence of all but identical initial states as they evolve over time.
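Lorenz’s simplified convection model makes the point vividly, and it is small enough to simulate in a few lines. Here is a minimal sketch (my own illustration, using Lorenz’s classic parameter values; the step size and the tiny initial offset are chosen arbitrarily) that integrates two trajectories starting a whisker apart and watches them drift apart:

```python
# Lorenz's 1963 convection model: three coupled equations whose solutions
# diverge rapidly from nearly identical starting points.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Advance the system one step with simple forward-Euler integration."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)        # one initial state of the 'atmosphere'
b = (1.0, 1.0, 1.000001)   # an imperceptibly different one

for step in range(1, 4001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"time {step * 0.01:5.1f}: separation {gap:.6f}")
```

By time 10 the two ‘forecasts’ typically still agree closely; well before time 40 they bear no relation to one another.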

It’s not obvious that weather should be susceptible to mathematical analysis in the first place. Winds and rains and blazing heat seem prone to caprice, and it’s no wonder they were long considered a matter of divine providence. Only in the nineteenth century, flushed with confidence that the world is a Newtonian mechanism, did anyone dare imagine weather prediction could be a science. In the 1850s Louis-Napoléon demanded to know why, if his celebrated astronomer Urbain Le Verrier could mathematically predict the existence of the planet Neptune, he and his peers couldn’t anticipate the storms destroying his ships. Le Verrier, as well as the Beagle’s captain Robert FitzRoy, understood that charts of barometric air pressure offered a rough-and-ready way of predicting storms and temperatures, but those methods were qualitative, subjective and deeply untrustworthy.

And so weather prediction lapsed back into disrepute, until the Norwegian physicist Vilhelm Bjerknes (‘Bee-yerk-ness’) insisted that it is “a problem in mechanics and physics”. Bjerknes asserted that it requires ‘only’ an accurate picture of the state of the atmosphere now, coupled to knowledge of the laws by which one state evolves into another. Although almost a tautology, that formulation makes the problem rational, and Bjerknes’s ‘Bergen school’ of meteorology pioneered the development of weather forecasting in the face of considerable scepticism.

The problem, however, had already been identified by the French mathematician Henri Poincaré in 1903: “it may happen that small differences in the initial conditions produce very great ones in the final phenomena.” Then, he wrote, “prediction becomes impossible.” This was an intimation of the phenomenon now called chaos, and it unravelled the clockwork Newtonian universe of perfect predictability. Lorenz supplied the famous intuitive image: the butterfly effect, the flap of a butterfly’s wings in Brazil that unleashes a tornado in Texas.

Nonetheless, it is Newton’s laws of motion that underpin meteorology. Leonhard Euler applied them to moving fluids by imagining the mutual interactions of little fluid ‘parcels’, a kind of deformable particle that avoids having to start from the imponderable motions of the individual atoms and molecules. Euler thus showed that fluid flow could be described by just four equations.
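In modern vector notation (a standard textbook form, not the notation of the book under review), Euler’s equations for an inviscid fluid of density $\rho$, velocity $\mathbf{u}$ and pressure $p$, subject to gravity $\mathbf{g}$, read:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u} = -\frac{1}{\rho}\nabla p + \mathbf{g}, \qquad \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0.$$

The first is three equations in disguise – one for each component of the flow – and the second, expressing conservation of mass, makes the fourth.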

Yet solving these equations for the entire atmosphere was utterly impossible. So Bjerknes formulated the approach now central to weather modelling: divide the atmosphere into pixels and compute the relevant quantities – air temperature, pressure, humidity and flow speed – in each pixel. That vision was pursued in the 1920s by the ingenious British mathematician Lewis Fry Richardson, who proposed solving the equations pixel by pixel using ‘computers’ – not electronic devices but, as the word was then understood, human calculators. Sixty-four thousand individuals, he estimated (optimistically), would suffice to produce a global weather forecast. The Second World War highlighted the importance of forecasting for military operations, not least the D-Day crossing, and it was no surprise that forecasting was among the first applications envisaged for the electronic computers, such as the University of Pennsylvania’s ENIAC, whose development the war stimulated.
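The essence of that pixel-by-pixel scheme survives in every modern model. A toy illustration (my own construction, not Richardson’s actual procedure): transport a one-dimensional ‘temperature’ field downwind across a row of grid cells with a simple finite-difference update.

```python
# Toy 'pixel by pixel' forecast: carry a 1-D temperature field downwind
# using an upwind finite-difference scheme on a periodic row of cells.
# Grid size, wind speed and time step are invented for illustration;
# real models solve far richer equations on 3-D grids.

N = 100                        # number of grid cells ('pixels')
dx, dt, wind = 1.0, 0.5, 1.0   # grid spacing, time step, wind speed
c = wind * dt / dx             # Courant number; must stay below 1 for stability

# Initial state: a warm patch in the middle of an otherwise 15-degree domain.
temp = [25.0 if 40 <= i < 50 else 15.0 for i in range(N)]

for step in range(60):
    # Each cell is updated from its own value and its upwind neighbour's.
    temp = [temp[i] - c * (temp[i] - temp[i - 1]) for i in range(N)]

print("warmest cell after the forecast:", max(range(N), key=lambda i: temp[i]))
```

After 60 half-unit time steps the warm patch has drifted about 30 cells downwind – and, as Richardson would have recognized, the crude scheme also smears it out along the way.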

But number-crunching alone would not get you far on those primitive devices, and the reality of weather forecasting long depended on the armoury of heuristic concepts familiar from television weather maps today, devised largely by the Bergen school: isobars of pressure, highs and lows, warm and cold fronts, cyclones and atmospheric waves, a menagerie of concepts for diagnosing weather much as a doctor diagnoses from medical symptoms.

In attempting to translate the highly specialized and abstract terminology of contemporary meteorology – potential vorticity, potential temperature and so on – into prose, Roulstone and Norbury have set themselves an insurmountable challenge. These mathematical concepts can’t be expressed precisely without the equations, with the result that the book is far too specialized for general readers, even with the hardest maths cordoned off into ‘tech boxes’. It is a testament to the ferocity of the problem that some of the most inventive mathematicians, including Richardson, Lorenz, John von Neumann and Jule Charney (an unsung giant of meteorological science), have been drawn to it.

But one of the great strengths of the book is the way it picks apart the challenge of making predictions about a chaotic system, showing what improvements we might yet hope for and what factors confound them. For example, forecasting is not always equally hard: the atmosphere is sometimes ‘better behaved’ than at others. This is evident from the way prediction is now done: by running a whole suite (ensemble) of models that allow for uncertainties in the initial conditions, and serving up the results as probabilities. Sometimes the various simulations give similar results over the next several days; at other times they diverge hopelessly after just a day or so, because the atmosphere is in a particularly volatile state.
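In miniature, the ensemble idea looks something like the sketch below (again using the Lorenz system as a stand-in for a full weather model; the perturbation size, ensemble size and the ‘event’ being forecast are all invented for illustration):

```python
import random

# Ensemble forecasting in miniature: run many copies of a model from
# slightly perturbed initial states and report the outcome as a probability.

def lorenz_step(x, y, z, dt=0.01):
    """One forward-Euler step of the Lorenz convection model."""
    return (x + dt * 10.0 * (y - x),
            y + dt * (x * (28.0 - z) - y),
            z + dt * (x * y - (8.0 / 3.0) * z))

random.seed(1)
# Fifty members, each nudged by a tiny random error in the first variable.
members = [(1.0 + random.gauss(0.0, 0.001), 1.0, 1.0) for _ in range(50)]

for _ in range(2000):                    # integrate every member forward
    members = [lorenz_step(*m) for m in members]

event = sum(1 for x, _, _ in members if x > 0)   # an arbitrary 'stormy' outcome
print(f"forecast probability of the event: {event / len(members):.0%}")
```

When the model state is in a placid region all fifty members agree, and the probability comes out near 0 or 100 percent; in a volatile one they scatter, and the forecast can honestly say no more than, say, ‘60 percent’.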

Roulstone and Norbury point out that the very idea of a forecast is ambiguous. If it correctly predicts rain two days hence, but gets the exact location, time or intensity a little wrong, how good is that? It depends, of course, on what you need to know – on whether you are, say, a farmer, a sports day organizer, or an insurer. Some floods and thunderstorms, let alone tornadoes, are highly localized: below the pixel size of most weather simulations, yet potentially catastrophic.

The inexorable improvement in forecasting skill is partly a consequence of greater computing power, which allows more details of atmospheric circulation and atmosphere-land-sea interactions to be included and the pixels to become smaller. But the gains also depend on having enough data about the current state of the atmosphere to feed into the model. It’s all very well having a fine-grained grid for your computer models, but at present we have less than 1 percent of the data needed to specify fully the initial state of all those pixels. The rest has to come from ‘data assimilation’, which essentially means filling in the gaps with numbers calculated by earlier computer simulations. Within the window of predictability – perhaps out to ten days or so – we can still expect forecasts to get better, but this will require more sensors and satellites as well as more bits and bytes.
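At its simplest, assimilation means blending the model’s prior guess (the ‘background’) with whatever observations do exist, each weighted by its estimated error. Here is a one-variable caricature (the numbers are invented; operational schemes such as optimal interpolation or Kalman filtering do the equivalent for millions of variables at once):

```python
# One-variable caricature of data assimilation: merge a model forecast
# with a sparse observation, weighting each by its error variance.

background, background_var = 12.0, 4.0     # model's prior guess (deg C) and error
observation, observation_var = 14.5, 1.0   # sensor reading and its error

# The gain says how far to trust the observation over the background.
gain = background_var / (background_var + observation_var)
analysis = background + gain * (observation - background)

print(f"analysis value used to start the next forecast: {analysis:.1f} C")  # 14.0 C
```

This ‘analysis’ then initializes the next forecast cycle, so yesterday’s simulation quite literally fills today’s data gaps.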

If we can’t predict weather beyond a fortnight, how can we hope to forecast future climate change – especially when the longer timescales necessitate a drastic reduction in spatial resolution? But the climate sceptic’s sneer that the fallibility of weather forecasting renders climate modelling otiose is deeply misconceived. Climate is ‘average weather’, and as such it has different determinants, such as the balance of heat entering and leaving the atmosphere, the large-scale patterns of ocean flow, and the extent of ice sheets and vegetation cover. Nonetheless, short-term weather can affect longer-term climate, not least in the matter of cloud formation, which remains one of the greatest challenges for climate prediction. Conversely, climate change will surely alter the weather; there’s a strong possibility that it already has. Forecasters are therefore now shooting at a moving target. They may yet have to brave more ‘Michael Fish moments’, but if we use those to discredit them, we do so at our peril.
