Monday, March 25, 2013

I'm not a neuroscientist, but I know what I like

Here’s my latest Muse comment for Nature news. I recommend also taking a look at this nice piece by Steven Poole on neuropseudoscience.

__________________________________________________________________

Can brain scans ever tell us why we like art?

No one with even a passing interest in scientific trends will have failed to notice that the brain is the next big thing. That’s been said for at least a decade, but now it’s getting serious, for example with the recent European award of €1 bn to the Human Brain Project to “build a new infrastructure for future neuroscience” and a $1 bn US initiative endorsed by President Obama. Having failed to ‘find ourselves’ in our genome, we’re going looking in the grey matter.

It’s a reasonable objective, but only if we have a clear idea of what we hope and expect to find. Some neuroscientists have grand visions, such as that adduced by Semir Zeki of University College London: “It is only by understanding the neural laws that dictate human activity in all spheres – in law, morality, religion and even economics and politics, no less than in art – that we can ever hope to achieve a more proper understanding of the nature of man [sic].”

Zeki heads the UCL Institute of Neuroesthetics – one of many fields that attaches ‘neuro’ to some human trait with the implication that the techniques of neuroscience, such as functional MRI, will explain it. We have neurotheology, neuroethics, neurocriminology and so on – or in popular media, a rash of books and articles proclaiming (in a profoundly ugly trope) that “this is your brain on drugs/music/religion/sport”.

If anyone is going to pursue neuroaesthetics (my brain prefers that spelling), I’d be glad for it to be Zeki, who has a deep and sincere appreciation of art and an awareness of the limits of a scientific approach to the way we experience it. But some of the pitfalls of neuroaesthetics are perceptively expressed by neuroscientist Bevil Conway of Wellesley College, Massachusetts, and musicologist Alexander Rehding of Harvard University in an article in PLoS Biology [1]. They point out that “it is an open question whether an analysis of artworks, no matter how celebrated, will yield universal principles of beauty”, and that “rational reductionist approaches to the neural basis for beauty… may well distill out the very thing one wants to understand.”

For one thing, to suggest that the human brain responds in a particular way to art risks creating criteria of right or wrong, either in the art itself or individual reactions to it. Although it’s a risk most researchers are likely to recognize, experience suggests that scientists studying art find it hard to resist drawing up rules for critical judgements. The Nobel laureate chemist Wilhelm Ostwald, a competent amateur painter, devised an influential theory of colour in the early twentieth century that led him to declare Titian had once used the ‘wrong’ blue. Paul Klee, whose intuitive handling of colour was impeccable, spoke for many artists in his response to such hubris:
"That which most artists have in common, an aversion to colour as a science, became understandable to me when a short time ago I read Ostwald’s theory of colours… Scientists often find art to be childish, but in this case the position is inverted… To hold that the possibility of creating harmony using a tone of equal value should become a general rule means renouncing the wealth of the soul. Thanks but no thanks."

Even if neuroaestheticists refrain from making similar value judgements, they are already close to falling prey to one. Conway and Rehding discuss this field primarily as an attempt to understand how the brain responds to beauty. As they point out, beauty is not a scientific concept – in which case it’s not clear even which questions neuroaesthetics is examining. But the problem is deeper, for equating an appreciation of art with an appreciation of beauty is misleading. A concept of beauty (not necessarily ours today) was certainly important for, say, Renaissance artists, but until recently it had almost vanished from the discourse of contemporary art. Those who like the works of Marcel Duchamp, Joseph Beuys or Robert Rauschenberg generally don’t do so for their beauty. Scientists as a whole have always had conservative artistic tastes; a quest for beauty betrays that little has changed.

Even the narrower matter of aesthetics is not just about beauty. It has traditionally also concerned taste and judgement. Egalitarian scientists have a healthy scepticism of such potentially elitist notions, and it’s true that arbiters of taste may be blinkered and dogmatic: witness the blanket dismissal of jazz by Theodor Adorno, a champion of modernism. But the point is not whether aesthetes are right or wrong, but whether they can offer us stimulating and original ways of seeing, listening, and experiencing. In this regard aesthetics is partly a question of culture and circumstance, not a fundamental quality of the brain. Reducing it to what is shared and general recalls exercises in producing the ‘perfect’ picture or song from poll averages, the results of which are (intentionally) hideous and banal.

And what will a neuroaesthetic ‘explanation’ consist of anyway? Indications so far are that it may be along these lines: “Listening to music activates reward and pleasure circuits in brain regions such as the nucleus accumbens, ventral tegmental area and amygdala”. Thanks but no thanks. And while it is worth knowing that musical ‘chills’ are neurologically akin to the responses invoked by sex or drugs, an approach that cannot distinguish Bach from barbiturates is surely limited.

There surely are generalities in art and our response to it, and they can inform our artistic understanding and experience. But they will never wholly define or explain it.

Reference
1. Conway, B. R. & Rehding, A. PLoS Biology 11, e1001504 (2013).

Thursday, March 21, 2013

Buried emotions

One more, this time from the curious world of culturomics, which is also on Nature news.

_________________________________________________________________

Changes in expressions of sentiment can be discerned from Google Books

There’s nothing like having stereotypes confirmed. If you associate contemporary British fiction with the cool, detached tones of Martin Amis and Julian Barnes, and American fiction with the emotional inner worlds explored by Jonathan Franzen or the sentimentality of John Irving, it seems you’ve good reason. An analysis of the digitized texts of English-language books published over the past century concludes that, since the 1980s, emotion terms have become significantly more common in American than in British books.

The study [1], by anthropologist Alberto Acerbi of the University of Bristol in England and coworkers, takes advantage of Google’s database of over 5 million digitally scanned books from the past several centuries. This resource has previously been used to examine the evolution of literary styles [2] and trends in literary expressions of individualism [3].

Such mining of the cultural information made available by new technologies has been called ‘culturomics’ [4]. Its advocates believe that these approaches can unearth trends in social opinions and norms that are otherwise concealed within vast swathes of data. “Language use in books reflects what people are talking about and thinking about during a particular time, so Google Books provides a fascinating window into the past”, says psychologist Jean Twenge of San Diego State University, author of Generation Me.

The new results certainly seem to show that familiar narratives about social mood are reflected in the literature (both fiction and non-fiction) of the twentieth century. Acerbi and colleagues find that, while words connoting happy emotions show peaks of usage in the ‘roaring twenties’ and the ‘swinging sixties’, sad words come to the fore during the years of the Second World War.

But there are surprises too: the First World War doesn’t seem to register on this happy-sad index, for example. By this measure, happiness has been rising since the 1990s, although it is too early to see whether the global recession will reverse that.

“The relationship between historical events and collective mood is complicated”, Acerbi admits, “but just by doing a somewhat crude analysis of emotion words it is possible to find trends that resonate with what we know about history.” He hopes that further analysis might reveal, for example, whether literature is ahead of its time or only slowly reflects other changes.

“This is a fascinating look at how two cultures have changed over time, especially how world events influence the expression of emotion in media”, says Twenge.

Overall, the use of emotion-related words in English-language books declined over the twentieth century. But when the researchers distinguished books in American and British English (about 1 million and 230,000 respectively), they found that, despite the overall decline, emotion words have become relatively more frequent in the former since about 1980, whereas previously the differences were minor. Such changes were not seen for a random selection of words. “Our results support the popular notion that American authors express more emotions than the British”, they say.
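For the curious, here’s a toy illustration in Python of the kind of frequency comparison involved. The emotion word list and the yearly counts are invented stand-ins for the study’s own word lists and the Google Books data – this is the shape of the calculation, not the authors’ actual pipeline.

from collections import Counter

# Hypothetical emotion word list and yearly word counts -- stand-ins for the
# study's own lists and the Google Books American/British 1-gram data.
EMOTION_WORDS = {"joy", "love", "fear", "grief", "anger"}

def emotion_share(counts):
    """Fraction of all word tokens in a year that belong to the emotion list."""
    total = sum(counts.values())
    emotional = sum(n for word, n in counts.items() if word in EMOTION_WORDS)
    return emotional / total if total else 0.0

american_1985 = Counter({"the": 50000, "joy": 130, "fear": 95, "car": 700})
british_1985 = Counter({"the": 52000, "joy": 80, "fear": 70, "car": 650})

# A positive difference means emotion words are relatively more common in the American sample
print(emotion_share(american_1985) - emotion_share(british_1985))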

A similar change is seen in the usage of ‘content-free’ words such as pronouns and prepositions (you, us, about, within). Acerbi and colleagues interpret this as indicating that the shift in emotionality is coupled to a general shift in literary style, according to which American texts are increasingly prolix. “The correlation with mood terms is not altogether surprising, as these longer constructions provide increased opportunity for expressing sentiments”, explains biologist David Krakauer of the University of Wisconsin, who has mined Google Books for changes in literary style [2].

“Authors tend to read their contemporaries and their competitors largely within their respective cultures”, he adds, “and so we might expect British English and American English to diverge somewhat”.

Do these shifts imply that the US population in general expresses more emotion than the British? Although that doesn’t necessarily follow – literary norms may sometimes invert rather than mirror tendencies in everyday life – Acerbi feels that these new findings “may reflect a genuine cultural change, because of the size of the sample, and because Google Books is not explicitly biased towards successful or influential books.”

But Krakauer cautions that differences in literary expression don’t necessarily represent differences in the emotional mindscapes behind them. “It is a rather intriguing and open question why different cultures express the same level of feeling with different numbers of words”, he says.

References
1. Acerbi, A., Lampos, V., Garnett, P. & Bentley, R. A. PLoS ONE 8, e59030 (2013).
2. Hughes, J. M., Foti, N. J., Krakauer, D. C. & Rockmore, D. N. Proc. Natl Acad. Sci. USA 109, 7682-7686 (2012).
3. Twenge, J., Campbell, W. K. & Gentile, B. PLoS ONE 7, e40181 (2012).
4. Michel, J.-B. et al. Science 331, 176-182 (2011).

Weil to go (OK, so you have to know the pronunciation)

Here’s the initial draft (sort of) of my story for Nature news on the Abel Prize. I blanched when I read the award citation, but in the end this was fun.

_________________________________________________________________

Proof of a deep conjecture about algebra and geometry nets Abel Prize

It has been four decades since Belgian mathematician Pierre Deligne completed the work for which he became celebrated, but that fertile contribution to number theory has now earned him the Abel Prize, one of the most prestigious awards in mathematics.

Given annually by the Norwegian Academy of Science and Letters and named after the famous Norwegian mathematician Niels Henrik Abel, the prize is worth 6 million Norwegian kroner, or about US$1 million.

The Academy has rewarded Deligne, who works at the Institute for Advanced Study in Princeton, New Jersey, “for seminal contributions to algebraic geometry and for their transformative impact on number theory, representation theory, and related fields.”

Speaking via webcast on Wednesday, Deligne said he was surprised to learn that he had won the prize. Despite having won major prizes before, he said, he did not spend much time wondering about when the next one would come. “The nice thing about mathematics is doing mathematics,” Deligne said. “The prizes come in addition.”

Deligne has made “many different contributions that have had a huge impact on mathematics for the past 40-50 years”, says Cambridge mathematician Timothy Gowers, who delivered the award address in Oslo.

“Usually mathematicians are either theory builders, who develop tools, or problem-solvers, who use those tools to find solutions”, says Peter Sarnak, also at the IAS in Princeton. “Deligne is unusual in being both. He’s got a very special mind.”

Algebraic geometry explores the links between geometric objects and the algebraic equations that describe them – for example, the expression for a circle of radius r, x² + y² = r². It has proved to have deep connections to many areas of mathematics, particularly the properties of the whole numbers (number theory).

This last connection is evident in the analogy between the Riemann hypothesis, which describes a relationship between prime numbers, and the so-called Weil conjectures, which were proposed by mathematician André Weil in 1949 – the subject of Deligne’s most famous result.

The Weil conjectures concern objects in algebraic geometry called algebraic varieties, which are the set of solutions of algebraic equations. The number of such solutions can be found from a function called the zeta function. While Riemann’s hypothesis concerns the nature of the Riemann zeta function, which determines how prime numbers are distributed among all the integers, the Weil conjectures specify some of the properties of zeta functions derived from algebraic varieties.
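For readers who want to see the object in question (this is the standard textbook formulation, not language from the prize citation): for a variety X defined over a finite field with q elements, one counts the numbers of solutions N₁, N₂, N₃, … over the successively larger fields with q, q², q³, … elements and packages them into the zeta function

Z(X, t) = exp( N₁t + N₂t²/2 + N₃t³/3 + … )

The fourth conjecture – the analogue of the Riemann hypothesis, and the one Deligne proved – says that the inverse roots of the polynomials making up this (rational) function all have absolute value q^(i/2) for appropriate whole numbers i, mirroring the way the non-trivial zeros of Riemann’s own zeta function are conjectured to lie on the line where the real part is 1/2.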

There are four of these conjectures. The first three were proved to be true in the 1960s, but the fourth and hardest – and the direct analogue of the Riemann hypothesis – was proved by Deligne in 1974. The Riemann hypothesis itself remains “the most famous unsolved problem in mathematics”, says Gowers – which is in itself an indication of the significance of Deligne’s proof.

Gowers adds that this proof “completed a long-standing programme” in mathematics. “By solving that”, says Sarnak, “he solved a whole lot of things at once”. For example, the solution also proved a long-standing, recalcitrant conjecture by the famous Indian mathematician Srinivasa Ramanujan.

In finding his proof, Deligne built on the work of his mentor, the German-born mathematician Alexander Grothendieck, who proved the second Weil conjecture in 1965. That work introduced a crucial concept called l-adic cohomology.

The general notion of cohomology, which concerns the topological properties of spaces described by algebraic equations, was itself first developed in the 1920s and 30s, and Weil recognized that it would be needed to prove his hypotheses. Grothendieck laid the foundations for finding the right cohomology, but his student Deligne found the final proof alone – and in a different way from what Grothendieck had imagined.

Deligne’s proof won him the Fields Medal, the “other maths Nobel” besides the Abel Prize, in 1978, and in 1988 he shared the Crafoord Prize with Grothendieck – making him an obvious candidate for the Abel. Since completing the work that secured his reputation, he has applied tools such as l-adic cohomology to extend algebraic geometry and to relate it to other areas of maths. For example, because much of his work is concerned with so-called finite fields – basically modular arithmetic – it can be applied to the kind of digital logic used in computing. “People in computer science are using his results without even knowing it”, says Sarnak.

“Even if you took away his most famous result on the Weil conjectures”, says Gowers, “you would still be left with a great mathematician.”

Deligne said he had not thought yet about how he would spend the money that came with his Abel Prize, but that he would like to find a way to make it useful for mathematics. “To some extent, I feel that this money belongs to mathematics, not to me.”

Listen up

Here are a couple of podcasts I’ve been involved in recently. First, a programme for the BBC World Service all about black (it’s the 8 Feb episode). Second, a discussion about chemistry, atoms and art broadcast by the journal Leonardo in conjunction with an ebook called Art and Atoms, which is mostly a compilation of interesting papers on the intersections of art and the molecular sciences.

Monday, March 18, 2013

Chinese made easy(er)

Here’s my latest piece for BBC Future. Hmm, will Blogger permit Chinese characters? We’ll see. If you want to find out more about this interesting learning system, there’s some stuff here, but apparently more on the way as the authors figure out how to develop this tool.

________________________________________________________________

There’s no way round it: learning Chinese is tough. As far as reading goes, what most dismays native speakers of alphabetic languages is that Chinese characters offer so few clues. With virtually no Spanish, I can figure out in the right context that baño means bath, but that word in Chinese (洗澡) seems to offer no clues about pronunciation, let alone meaning.

There seems no alternative, then, but to slavishly learn the 3,500 or so characters that account for at least 99% of usage frequency in written Chinese. This is hard even for native Chinese speakers, usually demanding endless rote copying in school. And even then, it is far more common than is often admitted for Chinese people to forget even quite routine characters, such as 钥匙 (key).

Is there a better way? Jinshan Wu of Beijing Normal University, a specialist in the new mathematical science of network theory, and colleagues have investigated the relationships between these 3,500 most used characters to develop a strategy that makes optimal use of the connections to assist learning and memorization.

Chinese characters aren’t really as arbitrary and bewilderingly diverse as they seem at first encounter. For one thing, they are made up of a fairly limited number of sub-characters or radicals, which themselves are composed of a set of standard marks or ‘strokes’. What’s more, the radicals often contain clues about meaning or pronunciation, or both. In the Chinese for bath, for example (pronounced xizao in the pinyin Romanization system), both characters start with the same radical, which denotes ‘water’, and the righthand half of both indicates the pronunciation. There are general rules (called liu shu, 六书) for building characters from radicals.

These connections can be exploited in learning. Once you know that wood is 木 (mu), it’s not so hard to remember that forest is 林 (lin) – or even more pictorially, 森林 (senlin). Assisted by the liu shu rules, Wu and colleagues mapped out the structural relationships between all 3,500 of the common characters, to form a network with over 7,000 links. This shows that the roughly 224 radicals are combined in just 1,000 or so characters that form the basis of all the others.
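Here is a minimal sketch of the sort of network involved, written in Python with the networkx library and a handful of hand-picked decompositions; the real study derives these links from the liu shu rules across all 3,500 characters.

import networkx as nx

# Hypothetical decomposition data: compound character -> component parts.
# A real table would come from a liu-shu-aware character database.
decompositions = {
    "明": ["日", "月"],        # 'bright' = sun + moon
    "林": ["木", "木"],        # 'woods' = tree + tree
    "森": ["木", "木", "木"],  # 'forest' = three trees
}

G = nx.DiGraph()
for char, parts in decompositions.items():
    for part in parts:
        G.add_edge(part, char)   # edge points from component to compound

# Nodes with no incoming edges sit at the 'trunk' of the hierarchy
roots = [n for n in G.nodes if G.in_degree(n) == 0]
print(roots)   # e.g. ['日', '月', '木']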

This network is hierarchical, meaning that it is somewhat like a tree, with a few central nodes (trunks) branching into many branch tips. That’s very different from a web-like network such as a grid or street map, in which there are often many different ways to get to any particular node. The researchers figured that it could be most efficient to start learning at the lower levels of the hierarchy – the trunks, as it were – and to progress gradually out towards the branch tips.

But would that necessarily be better than a strategy which focuses on the most frequently used words first? How, indeed, can one assess the relative learning cost of different strategies? There’s no unique way to do this, but Wu and colleagues developed a logical, intuitive method of enumerating costs. They figured that it is easier to learn a multi-part character if all of its components have been learnt previously. To take a simple case, it’s easier to learn 明 (ming: bright) if you have already learnt 日 (ri: sun, day) and 月 (yue: month, moon). The researchers assigned cost values to each ‘new learning’ task.

The ‘cheapest’ way to learn all the characters in the network is then to start with the ‘trunk’ characters that have the highest number of branches, and work up through the layers. But that could leave you knowing a lot of words you rarely need to use. If, on the other hand, you simply learn characters in order of use frequency (as some learning methods do), you fail to take advantage of the network connections that can aid recognition.

The ideal approach is a compromise between the two. Wu and colleagues therefore adjust the relationship network by giving a certain weighting or priority to each character depending on its use frequency. Then the learning path spreads gradually through the network while picking up most of the common characters first. It’s rather like planning a shopping trip by seeking a short total path between shops while also contriving to pick up the heaviest items last.
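The authors’ weighting scheme is their own, but the flavour of the compromise can be shown with a toy greedy ordering in Python: a character only becomes learnable once its components are known, and among the learnable characters the most frequently used one is picked first. The decompositions and frequencies below are invented for illustration.

def learning_order(decompositions, frequency):
    """Toy sketch of a frequency-weighted, prerequisite-respecting learning
    order -- not the authors' algorithm, just the general idea."""
    learned, order = set(), []
    remaining = set(decompositions) | set(frequency)
    while remaining:
        # A character is learnable once all of its listed components are known
        learnable = [c for c in remaining
                     if all(p in learned for p in decompositions.get(c, []))]
        if not learnable:   # fall back if a prerequisite is missing from the data
            learnable = list(remaining)
        nxt = max(learnable, key=lambda c: frequency.get(c, 0.0))
        learned.add(nxt)
        order.append(nxt)
        remaining.remove(nxt)
    return order

decomp = {"明": ["日", "月"], "林": ["木", "木"]}
freq = {"日": 0.9, "月": 0.6, "木": 0.4, "明": 0.7, "林": 0.2}
print(learning_order(decomp, freq))   # components come before the compounds that use them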

The researchers compared the learning cost of their strategy with that for the most widely used textbook in Chinese primary schools (covering 2,475 characters) and a popular textbook for learning Chinese as a second language. For a given cost, their new strategy picked up both considerably more characters in total and a significantly greater total use frequency than the two alternatives.

What’s more, the researchers say that their approach would allow each student’s learning strategy to be tailored to his or her individual strengths – for example, to suit those who have already learnt some characters. This just isn’t possible with traditional approaches.

Of course, the ultimate test is whether students do actually learn faster. This remains to be seen. But with a debate already raging in China over whether current teaching methods are the most suitable, this new proposal shows that there may be rational ways to pursue the question.

Reference
X. Yan, Y. Fan, Z. Di, S. Havlin & J. Wu, preprint.

Thursday, March 14, 2013

Smoke signals



Ah, there is after all a route to BBC Future from the UK, in which case you can read here a piece I wrote for it yesterday on ‘papal smoke’. It’s now gone white, of course. The main difference in the version below, my original draft, is in the final paragraph, which understandably was a bit near the knuckle for the BBC.

_____________________________________________________________________

There’s something almost poignant in the way the Vatican has had to resort to chemistry to get its archaic method of communicating the papal election results to work properly: science helping to sustain a bizarre tradition from a distant age. Before this week, “fumata nera” and “fumata bianca” meant little to most people outside Italy. But now all eyes are on the copper chimney of the Sistine Chapel, from which the release of black smoke signalled today that the 115 cardinals voting to choose the new pope have not yet reached the two-thirds majority needed to secure a decision. When they do, the smoke will turn white.

The smoke comes partly from the burning of ballot papers in a special stove in the chapel. But to colour it white or black, this smoke is mixed with that from chemical additives burnt in a second stove. Traditionally the Vatican produced the different colours by burning wet straw for white and tarry pitch for black.

Anyone who has ever made a bonfire knows that damp grass will work for the former; the less responsible of you will know that chucking old tyres or roofing felt into the flames will turn the smoke black – and what’s more, noxious, because it is then full of sooty carbon particles that can clog the lungs and are potentially carcinogenic.

It’s not concern for the environment that has led the Vatican to change its ways, however. Rather, the smoke in some previous elections came out an ambiguous grey, prompting the decision for the last conclave in 2005 to use a more reliable method based on chemical ingredients.

The Vatican has now revealed what these are. For black, it uses a mixture of potassium perchlorate, anthracene and sulphur; white comes from potassium chlorate, lactose and the conifer resin called rosin, also used to rub violin strings to give the bow purchase.

We needn’t imagine a team of Vatican chemists labouring like alchemists to devise these magic recipes, because what they really show is that the Vatican is making plain old smoke bombs. A smoke bomb – as well as fireworks designed to be particularly smoky – works by combining an easily burnt carbon-rich compound such as sugar with a so-called oxidizing agent, which provides the oxygen for the combustion reaction. Potassium perchlorate and chlorate (which differ only in precisely how much oxygen they contain) are the most common oxidizers in these applications. Lactose (milk sugar), rosin and anthracene are the sources of carbon – anthracene, found in coal tar, is particularly good for producing big black sooty particles, although it is no longer used in pyrotechnic displays because it is carcinogenic. Sulphur also burns well, and was a traditional component of gunpowder: indeed, the Vatican’s ‘fumata nera’ mix is basically that, with the traditional oxidizer of saltpetre replaced by another.
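As a rough illustration of the chemistry – an idealized equation assuming complete combustion, since the Vatican has not published proportions and a real smoke mix burns far less cleanly – the lactose in the white-smoke recipe could in principle react with potassium chlorate like this:

C₁₂H₂₂O₁₁ + 8KClO₃ → 12CO₂ + 11H₂O + 8KCl

In practice the burn is deliberately smoky and incomplete, but the fuel-plus-oxidizer principle is the same.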

White smoke for pyrotechnics is more commonly – and reliably – made by burning zinc dust with the organic solvent hexachloroethane and zinc oxide added. It is widely used for military training exercises. But the solvent is poisonous, and the smoke itself can cause liver damage and respiratory problems – so it’s no surprise that the Vatican chose a safer recipe.

In any event, the smoke system now leaves little to chance. Electrical heating of the flue and backup air fans make sure that the smoke will come pumping out, and the process will surely have been tested to ensure that the black smoke doesn’t turn white as its big sooty flakes break up into smaller particles – an effect sometimes to be seen as bonfire smoke rises.

But aren’t the Vatican being a bit unimaginative and literally monotone here? Why stop at a mere two-colour signalling system? The lurid rainbow smokes used in aerial displays like those of the Red Arrows are tinted with pigments and dyes such as indigo and rhodamine. Couldn’t we have beige smoke to denote a coffee break, pink smoke to tell the world that Cardinals Monteiro de Castro and Sandoval are arguing over homosexuality, or red to indicate that Cardinal Calcagno is threatening to get out his extensive collection of firearms?

Monday, March 11, 2013

Moore's law is not just for computers

Here is my latest news story for Nature.

_______________________________________________________

Mathematical laws make industrial growth and productivity predictable

Predicting the future of technology has often seemed a fool’s game: no one forgets IBM founder Thomas J. Watson’s (possibly apocryphal) prediction that the world would need five computers. But a team of researchers in the USA now says that technological progress really is predictable, and backs up the claim with evidence from 62 different technologies.

It’s not a new claim. But a group of researchers at the Santa Fe Institute in New Mexico and the Massachusetts Institute of Technology in Cambridge, Massachusetts have put to the test several hypothetical mathematical laws describing how the costs of technologies evolve. They find that one proposed as early as 1936 supplies the best fit to the data [1].

That proposal was made by aeronautical engineer Theodore Wright, who pointed out that the cost of airplanes decreased as the total number of planes manufactured (the cumulative production) increased [2]. Specifically, this cost was proportional to the inverse of the cumulative production raised to some power. This has since been put forward as a more general law governing costs of technological products, and is often explained on the basis that, the more we make, the better and more efficient we get at making.

But it’s not the only contender. Much more famous than Wright’s law is the relationship proposed in 1965 by Gordon Moore, cofounder of the microelectronics company Intel. He pointed out that computer power was increasing exponentially over time [3] – which means, in effect, that the cost per transistor was falling exponentially.
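In symbols (the notation is mine, but these are the standard forms of the two laws): Wright’s law says that unit cost C falls as a power of cumulative production x,

C(x) = C₀ x^(−w)

while Moore’s law, generalized beyond microchips, says that cost falls exponentially with time t,

C(t) = C₀ e^(−mt)

where w and m are constants fitted separately for each technology.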

So who is right: Wright or Moore? Perhaps neither, for several other hypothetical relationships between scale and cost of production have been suggested – that, for example, costs fall purely due to economies of scale. All of these ‘laws’ predict that costs tend to fall over time, but at slightly different rates.

“The predictive ability of these hypotheses hadn't been tested against a large dataset before,” says study co-author Jessica Trancik of the Massachusetts Institute of Technology. She and her colleagues tested six of them by collecting data on 62 technologies, ranging from chemicals production to energy technologies (such as photovoltaic cells) and information technologies, spanning periods of between 10 and 39 years. “Assembling a large enough data set was a big challenge”, says Trancik.

The researchers evaluated the performance of each ‘law’ with hindcasts: using earlier data to make predictions about later costs, and then seeing how these compared with the actual figures. They used statistical methods to figure out which law produced the smallest predictive errors.
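A stripped-down version of such a hindcast might look like the Python sketch below: fit each law to the early part of a cost series by least squares in log space, forecast the remaining years, and compare the errors. This is just the shape of the procedure, run on invented numbers – the paper’s statistical treatment of forecast errors is considerably more careful.

import numpy as np

def hindcast_error(costs, production, years, split, law="wright"):
    """Fit a cost law on the first `split` points, then return the
    root-mean-square error of the log-cost forecast for the rest.
    Illustrative sketch only, not the paper's actual procedure."""
    log_cost = np.log(np.asarray(costs, float))
    if law == "wright":                       # cost versus cumulative production
        x = np.log(np.cumsum(np.asarray(production, float)))
    else:                                     # "moore": cost versus calendar time
        x = np.asarray(years, float)
    slope, intercept = np.polyfit(x[:split], log_cost[:split], 1)
    forecast = intercept + slope * x[split:]
    return float(np.sqrt(np.mean((forecast - log_cost[split:]) ** 2)))

# Invented example series: yearly unit costs and production volumes
costs = [100, 90, 83, 76, 71, 66, 62, 59]
production = [10, 12, 15, 19, 24, 30, 37, 46]
years = list(range(2000, 2008))
print(hindcast_error(costs, production, years, split=5, law="wright"))
print(hindcast_error(costs, production, years, split=5, law="moore"))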

And the winner was… Well, in fact there wasn’t a huge difference between any of the laws. The best was Wright’s law, but Moore’s law was close behind, at least for a relatively modest time horizon of a few decades. In fact, their predictions were so similar that the researchers suspected the two laws might be related.

This seems quite likely. In 1979 political scientist Devendra Sahal pointed out that if cumulative production of an item grows exponentially, then Wright’s law and Moore’s law are equivalent [4]. The new data confirm that production does show such growth for a wide range of products. “You wouldn’t necessarily expect that”, says Trancik.
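The argument is easy to state in the notation used above (again, the standard derivation, not a quotation from the paper): if cumulative production grows exponentially, x(t) = x₀ e^(gt), then substituting into Wright’s law C = C₀ x^(−w) gives

C(t) = C₀ x₀^(−w) e^(−wgt)

which is exactly Moore’s law with rate m = wg. Wherever production happens to grow exponentially, the two laws describe the same decline.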

That Moore’s law applies at all to so many different industries is a surprise, since computing has often been regarded as a special case. “It’s a much more general thing”, says coauthor Doyne Farmer, currently at the University of Oxford.

Economist William Nordhaus of Yale University warns that these laws will only work for technologies that survive – which is not itself easy to predict. “History is written only about the victors”, he says. “Those technologies that didn’t make it in the market don't make it into the data set. This is one reason why it is so difficult to forecast which of many nascent energy technologies will survive.”

The future of some technologies will depend crucially on governmental policies, not just conventional market forces. For example, in climate-change technologies, in which Nordhaus specializes, he says that the evolution will depend on the future pricing policies of carbon emissions. “Some technologies, such as carbon capture and storage, won’t even get off the ground with a zero carbon price”, he says.

Estimating the potential costs of climate-change mitigation technologies is one of the prime applications the researchers envisage for their findings. A key question is whether costs will fall just as a matter of time, as Moore’s law implies, or whether stimulating growth by public policies (such as subsidies or taxes) that boost production might accelerate the fall, as Wright’s law implies.

The results seem to imply the latter, which is good news. “We have more control over these things than we might think”, says Farmer.

References
1. Nagy, B., Farmer, J. D., Bui, Q. M. & Trancik, J. E. PLoS ONE 8, e52669 (2013). [doi:10.1371/journal.pone.0052669]
2. Wright, T. P. J. Aeronaut. Sci. 10, 302-328 (1936).
3. Moore, G. E. Electronics 38, 114-117 (1965).
4. Sahal, D. AIIE Trans. 11, 23-29 (1979).

Cloud control

Here’s my latest piece for BBC Future.

________________________________________________________________

It’s quite a promise. Using existing technology, we could engineer the clouds “to cancel the entire warming caused by human activity from pre-industrial times to present day”. But this, the latest of many ‘geo-engineering’ proposals to mitigate climate change, has a drawback. Get it only a bit wrong, and you make the problem worse.

That, however, has been the worry with such ‘techno-fixes’ all along. We could fire a fleet of little mirrors into an orbit that holds them in place between the Sun and the Earth, deflecting a fraction of sunlight away from our planet. But if it goes wrong, we could be plunged into an ice age. Manipulating the clouds has been a popular idea with would-be geo-engineers, but these proposals face the fact that the climate effects of clouds are among the hardest parts of the climate system to understand and predict, so we can’t be too sure what the results will be.

The new suggestion examined by climate scientist Trude Storelvmo of Yale University and her coworkers targets one particular kind of cloud: the cirrus ice clouds that extend their wispy tendrils in the upper troposphere, at altitudes of about 5-15 kilometres. The researchers say that these thin clouds are known with confidence to have a net warming effect on our planet, since their ice crystals trap infrared radiation from the sun-warmed surface and re-emit it back down towards the ground. So if we can make cirrus still thinner, we’ll let out more heat and cool the globe.

This idea was first suggested in 2009 by David Mitchell and William Finnegan of the Desert Research Institute in Nevada (D. Mitchell & W. Finnegan, Environmental Research Letters 4, 045102 (2009)). It relies on a rather counterintuitive effect: to reduce the warming influence of cirrus cloud, one should add to the upper troposphere tiny ‘seed’ particles that actually encourage the formation of the ice crystals from which the clouds are made.

So how does that work? The ice crystals of cirrus clouds generally form spontaneously in moist, cold air. But seed particles, if present in the right concentration in the air, could grab all the water vapour to produce a small number of large ice crystals, preventing the formation of lots of unseeded little ones. This would have two consequences. First, the resulting clouds would be more transparent, just as big blobs of water in oil create a more transparent mixture as salad dressing separates out, compared with the milky, opaque mixture you get when you shake it into lots of tiny droplets. The thinner clouds absorb less radiation from the warm ground, allowing more to escape into space.

Second, clouds made from larger ice particles have shorter lives, because the big crystals sink down through the atmosphere under gravity.

This wasn’t by any means the first proposal for geo-engineering climate by modifying the reflection or absorption of light in the atmosphere. For example, British meteorologist John Latham and coworkers have suggested that a fleet of solar-powered ships might spray sea salt into the air to seed the formation of stratocumulus clouds, which cool the planet by reflecting sunlight (J. Latham, Nature 347, 339 (1990)). And climate scientist Paul Crutzen has proposed injecting a sulphur-containing gas into the stratosphere, which would form a haze of sulphate particles to reflect sunlight – a process that happens naturally in volcanic eruptions and which is known to cool the earth (P. J. Crutzen, Climatic Change 77, 211 (2006)).

One of the advantages of climate engineering via clouds is that the effects are transient: if it doesn’t go to plan, the process can be stopped and all will return to normal in a matter of weeks. Mitchell and Finnegan suggested that seeding of cirrus cloud might be done by releasing the particles from aircraft. They suggested that the somewhat exotic (but not excessively costly) compound bismuth tri-iodide would be a good material for the seeds, as it is known to promote ice formation on its surface.

But will it work as planned? That’s what Storelvmo and colleagues have now studied by using a climate model that incorporates a description of cirrus cloud formation. They find that to get climate cooling, one has to use just the right concentration of seed particles. Too few, and cirrus clouds form just as they do normally. But if there are too many seeds, they generate more ice crystals than would have formed in their absence, and the clouds are actually thicker, trapping even more heat.

If we get the seed concentration right, the effect is dramatic: the cooling is enough to offset all global warming. But this ‘Goldilocks’ window is quite narrow. What’s more, the researchers say, finding the precise boundaries of the window requires more information than we have at present, for example about the ability of bismuth tri-iodide to seed ice formation and the rates at which the ice crystals will settle through the atmosphere. So attempting this sort of engineering prematurely could backfire – even if the effect would be quite short-lived, we should hold fire until we know more.

Reference
T. Storelvmo et al., Geophys. Res. Lett. 40, 178-182 (2013).

Saturday, March 02, 2013

Falstaffian science

Here’s a review of the new production of Bertolt Brecht’s Life of Galileo from the Royal Shakespeare Company, published in Nature.

___________________________________________________________________

It is one of the central works of drama about science, and one of the most controversial. Bertolt Brecht’s Life of Galileo has been criticised for misrepresenting history, science and Galileo himself, with some validity. The real question, however, is whether the play works, theatrically and psychologically.

Shakespeare, after all, took vast liberties with history, yet such is his way with human portraiture that no one complains. Shakespeare and Galileo were born within a few weeks of each other in 1564 – a coincidence that the Royal Shakespeare Company (RSC) understandably makes much of for its latest production of Brecht’s play. More significantly, the play shows Brecht at his most Shakespearean, with Galileo the wily, tragically compromised sensualist redeemed by self-insight that others lack — he is, as Adam Gopnik suggested in a recent article in The New Yorker, a kind of intellectual Falstaff.

The exuberance and wit of this production owe much to the new translation by current RSC writer in residence Mark Ravenhill. Ravenhill has commented on the “comic sensibility in Brecht’s language which I think [is] often overlooked” – a sensibility that he and director Roxana Silbert have found in abundance. In the title role, Ian McDiarmid is sly and worldly while pulling off the important trick of making Galileo loveable. It’s with the basic fabric of the play, not its realization, that the questions lie.

In retrospect we can see that Brecht set himself an impossible task, because even now there is no consensus on Galileo. Many scientists still prefer the narrative that prevailed when Brecht first wrote the play in 1937-39, of a martyr persecuted by the Catholic Church for his pursuit of truth about the arrangement of the cosmos. A more measured view now prevails, recognizing that a less pugnacious man might have navigated the currents of the papal court more skilfully. It is certainly not to excuse the bullying, dogmatic Vatican to point out that Galileo’s evidence for a heliocentric universe was equivocal and in some respects (his interpretation of the tides) wrong.

Galileo’s mathematical physics, rightly adored by physicists today, was not, as some older science historians had it, the right way to do science. It was the right approach for celestial and terrestrial mechanics, but useless for chemistry, medicine, botany, zoology and much else. And while Einstein celebrated Galileo’s rejection of logical deduction devoid of empirical input as essential to modern science, Galileo was not the first to do that.

Brecht must take some blame for making Galileo more original than he was. He fell for the idea of a Scientific Revolution in which Great Men begin thinking in a totally new way. Complaints about historical accuracy could seem carping in a work of art, but Brecht himself attested of the first version of the play that “I was trying here to follow history”.

Brecht was in any case disingenuous, for his original version of the play was evidently informed by, and widely interpreted as a comment on, the political climate of the time. Brecht fled Nazi Germany after the Reichstag fire in 1933, and his cunning Galileo who subverts the ideology of the authorities – recanting on his heliocentrism in order to continue his work in secret – was regarded as a symbol of anti-Nazi resistance.

That, however, is not the Galileo of the revised 1947 version – the one most often performed, and used here. Although Brecht was already reworking the play in 1944, the bombing of Hiroshima and Nagasaki transformed his view of scientists. “Overnight the biography of the founder of the new system of physics read differently,” he wrote. He felt that the Manhattan Project scientists had betrayed their moral obligations, and criticized even Einstein as a politically naïve “eternal schoolboy”. Regardless of the merits of that view, it is the play’s downfall.

Now Galileo, confronting his former student Andrea, launches into a diatribe on how, by focusing on science for science’s sake, “you might jump for joy at some new achievement, only to be answered by a world shrieking in horror.” Nothing in Galileo’s former conduct has prepared us for this (anachronistic) concern about the social applications of science, leaving us with a confusing portrait.

On 30 October 1947, a few months after the new version was premiered, Brecht got a taste of Galileo’s ordeal: he testified before the House Committee on Un-American Activities (others involved in the production refused). He left for Europe the following day, accused of having compromised artistic freedom and with perhaps a keener appreciation that ideological interference in art and science was not confined to dictatorships.

Brecht’s other impossible task was to explain how real science is done. He succumbs to the view that you just need to think clearly, believe your eyes, trust in reason. He then has to skirt around the problem that your eyes tell you that the sun, not the earth, moves. What’s more, philosophers such as Paracelsus and Bernardino Telesio had been relying on experience, rather than Aristotle, for a hundred years before Galileo, but had reached rather different ‘truths’. Nor was there any ‘scientific method’ in Galileo’s time, just as there is none today: its ad hoc combination of hypothesis, assumption, experiment, theory, logic and intuition will not reduce to any formula.

The RSC’s production is spirited and visually inventive. But the play itself is pulled between too many irreconcilable poles to make a coherent whole. It is perhaps the history of the play, rather than the text itself, that reveals the most about the difficult relationship between science and the cultures in which it is embedded.