Global temperatures are rising and the causal relationship to steadily increasing levels of anthropogenic greenhouse gases (GHGs), particularly carbon dioxide (CO2), but also methane (CH4), nitrous oxide (N2O) and chlorofluorocarbons (CFCs), is more than compelling. The underlying mechanism—absorption of infrared radiation by the GHGs and the resultant warming of the planetary surface—is based on irrefutable laws of physics (IPCC 2007) and the principle is confirmed daily by myriads of glass-roofed greenhouses located in the cooler parts of the planet. Further, evidence of increasing GHG levels in the atmosphere accompanied by rising temperatures is based on measurements carried out all over the globe; their veracity is beyond any doubt. Finally, predictions of the impact of global warming on the climate system, on the ice of the high mountains and polar regions, as also on the ecosystems of the planet are coming true at a rate faster than anticipated less than a decade ago.
Unfortunately for the future of the planet’s inhabitants, in particular humans, the dire straits into which we are headed have not been adequately realised by the public and are not given the priority they deserve by policy makers. One can think of many reasons for the public hesitation to face reality, but probably the major one is the campaign of disinformation unleashed by those whose profits and power bases depend on the current globalised economy. Coping with climate change will entail a profound reorganisation, not only of the global economy, but particularly of the way of life—consumerism—on which it is based. Understandably, those who stand to lose will do their best to defend themselves and prolong their lifestyles as far as possible; it is up to the responsible media and educated public to separate the wheat from the chaff and, in the interests of future generations of humans, as also the plants and animals with which we share the planet, to take the steps necessary to avert impending calamity. Recently, an expert on the financial markets sounded a clarion call to climate scientists in a one-page essay, freely available on the internet, titled ‘Be persuasive. Be brave. Be arrested (if necessary)’, published by the journal Nature (Grantham 2012). He calls for scientists “to take more career risks and sound a more realistic, more desperate note on the global-warming problem” because the community is not doing enough to warn and educate the public on the seriousness of the changes. The view from his vantage point—the behaviour of the financial markets—corresponds with that of earth system scientists on the current state and near future of the planet. He also draws attention to the problem of dwindling resources, citing the exhaustion of phosphate and potash reserves in the next few decades as a crisis for which no one is prepared.
Scientists tend to be cautious in their statements because the direct impact of global warming is complicated by a myriad of interlinked processes. A good example is the effect of increasing levels of atmospheric aerosols (small particles suspended in the air, such as cloud droplets, dust or smoke) which reduce solar radiation reaching the surface of the land and the sea, but in different ways. Since aerosol particles act as nuclei on which water vapour condenses, their density is crucial in determining the size of cloud droplets: the larger the number of particles in a given volume of air, the smaller the droplets forming within it. Thus, the large droplets of dark rain clouds absorb radiation, thereby
warming the atmosphere, whereas the small droplets of bright clouds scatter sunlight and reflect some of it back into space, thereby cooling the planet. Since the moisture (water vapour) content of air increases rapidly with temperature (about a doubling for every 10°C), global warming increases the amount of water transported by the atmosphere hence also the extent and density of clouds and the intensity of the subsequent rainfall. Contrary to intuition, moist air is lighter than dry air at the same temperature and pressure. Evaporation of water, which cools the air, and condensation of vapour into cloud droplets, which warms it, hence both affect the temperature and buoyancy of the air masses; the resultant gradients in air pressure give rise to winds whose strength depends on the steepness of the gradients between adjacent air masses. In short, global warming energises the climate machinery thereby increasing the incidence of severe weather events.
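The roughly exponential growth of the atmosphere's moisture-holding capacity can be checked with the standard Magnus approximation for saturation vapour pressure. The coefficients below are one common published parameterisation, and the 10–20°C interval is chosen purely for illustration:

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Magnus approximation: saturation vapour pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Moisture-holding capacity of air at 20 C relative to 10 C:
ratio = saturation_vapour_pressure(20.0) / saturation_vapour_pressure(10.0)
print(f"Capacity increase over a 10 C warming: factor {ratio:.2f}")
```

The factor comes out near 1.9, consistent with the "about a doubling for every 10°C" quoted above.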
The complex behaviour of clouds and aerosols in the atmosphere greatly hampers the predictive abilities of climate models. Thus, until the 1970s the smoke from burning coal and oil and from vehicle exhaust was released unchecked into the atmosphere over Europe and North America, much as is now the case over southern and eastern Asia. In particular the sulphur emissions, which result in haze and clouds with smaller droplets, cooled the planet sufficiently to dampen the effects of GHGs. The detrimental effects on human health (respiratory ailments) and the environment (acid rain) forced western governments to pass legislation during the 1980s which led to drastic reductions of sulphur emissions. The air became cleaner, but global warming set in with a vengeance from the 1990s onwards. It is highly likely that the rate of warming during the past decade has been dampened by the haze over Asia, at the cost of human health and environmental degradation. The countries impacted by air pollution today will sooner or later have to undergo a transition to cleaner air similar to the one that occurred in the western world. A jump in the rate of warming corresponding to that of the 1990s will then have much worse consequences, as heat waves will be several degrees hotter than those of today. In subtropical regions, temperatures already reach levels every summer that can be lethal for humans and animals. Forecasting the effect of cleaning the notorious ‘Asian brown haze’ on local temperatures, and ultimately on the global climate machinery, in the coming decades is a challenge facing climate modellers. The predictions will be dire.
Debate within the scientific community on whether rising CO2 levels are causing global climate change has long ceased; over the past decades, attention has focussed on the rate at which it is happening, which impacts at regional scales can be attributed to it and what is to be expected in the future. Evidence has been mounting over the past few years that ongoing global warming is disrupting previous seasonal cycles and is increasing the frequency of extreme weather events, such as droughts and floods, heat and cold waves, tornadoes and storms, and also unusual monsoons (Coumou and Rahmstorf, 2012). The record-breaking heat wave and drought in the continental USA during the summer of 2012, followed by the severity of the tropical storm Sandy in late autumn along the US east coast were front-page news also outside the USA because of the global economic impact of crop losses and the damage suffered by a city at the centre of global attention. Similarly, unusual, several-week long, large-scale weather patterns, such as the one which brought a heat wave and fierce forest fires to western Russia and torrential floods to Pakistan in 2010, indicate that ‘normal’ seasonal cycles are vulnerable everywhere. These unusual events have caused much human suffering and losses to the economy but, because of their stochastic nature, their respective roles as signals of ongoing global climate change are still being disputed in some quarters. How many more such natural disasters are required before humanity, now connected by internet, will rise to the challenge and ‘take up arms against a sea of troubles’?
However, the most dramatic, continuous, indisputable changes are occurring in sparsely populated areas along the planetary ice edges of the high altitudes and latitudes. Indeed, warming of the polar regions represents the greatest long-term threat to human civilisation and the planet’s natural environment because of the sea level rise that melting of the continental ice caps will cause. Besides, the coastlines bordering the Arctic Ocean are thawing and releasing methane and CO2 so far capped by permafrost. The amount of GHGs currently slumbering there is enough to accelerate global warming (Koven et al., 2011). There is a lesson to be learned from Earth’s history: the termination phases of past ice ages occurred rapidly, much faster than the transitions from warm to cold phases, strongly suggestive of runaway feedback processes set into motion once the ice starts retreating. There is no reason to believe that the current transition to a warmer climate will occur more slowly than the previous ones. So what is now happening at the poles will have a tremendous impact all over the planet.
Observations confirm what theory predicted many decades ago, except that the current rates of change, exemplified by the retreat of Arctic summer sea ice, are faster than anticipated by climate models. This cannot be attributed solely to the difficulty of incorporating the complications mentioned above into the models; it is also partly due to overly cautious tuning of the models by climate scientists who are wary of being branded by climate sceptics as doomsday prophets plotting to generate more research funding. This accusation is not only false, but also an insult to the integrity of the scientific enterprise, which is quality-controlled by peer review based on the dedication of voluntary, anonymous, largely state-funded individuals whose possible conflicts of interest are closely monitored within the community. In striking contrast, the well-funded PR managers and lobbyists fielded by the giant corporations running the carbon-based global economy know no scruples. In the words of Grantham (2012): “Scientists are understandably protective of the dignity of science and are horrified by publicity and overstatement. These fears, unfortunately, are not shared by their opponents, which makes for a rather painful one-sided battle. Overstatement may generally be dangerous in science (it certainly is for careers) but for climate change, uniquely, understatement is even riskier and therefore, arguably, unethical.”
The magnitude of the problem
The rise in atmospheric CO2 concentrations during the past 100 years, from 0.028 to now 0.039 per cent (or 280–391 ppmv), is equivalent, in terms of mass, to about 220 gigatonnes (Gt) of elemental carbon. One Gt is one thousand million tonnes, an incomprehensible amount that needs to be put into human perspective so that it can be appreciated. A metric tonne is equivalent to a cubic metre of water. Pure carbon (e.g. in the form of graphite) has twice the density of water, so 220 Gt would have the volume of 110 cubic kilometres, again too large to comprehend. We can bring it closer to our size scales with the following thought experiments: if 110 km3 of carbon were converted to a one metre-thick, solid black plate, it would cover an area of 110,000 km2, about one third the area of Germany. If it were made into a rod with a square-metre base, it would extend 110 million km into space, nearly three-quarters of the distance to the sun. The amount can also be visualised by comparing it with the plant biomass (all the forests, grasses and crops) standing on the continents, which contains around 600 Gt of carbon. In other words, one would have to increase the biomass of land vegetation by one third to take up this amount. Since this amount is only about half the total released during a century of industrialisation, this example provides a graphic impression of the scale at which humankind has been deforesting the planet and burning fossil fuel. It is worth pointing out here that the last time in earth’s history when CO2 levels were around 400 ppm was about 25 million years ago: about a third of the way back from today to the end of the dinosaur age (Mesozoic). For comparison, the first bipedal ape—our ancestor—appeared just some 6 million years ago.
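The thought experiments above can be verified with a few lines of arithmetic. The conversion factor of roughly 2.13 Gt of carbon per ppmv of CO2 is a standard figure that is assumed here rather than given in the text:

```python
ppm_rise = 391 - 280                       # ppmv increase since pre-industrial times
mass_gt = ppm_rise * 2.13                  # ~236 Gt C; the text rounds to ~220 Gt

volume_km3 = 220e9 / 2.0e9                 # 220 Gt over 2 t/m^3 (= 2e9 t/km^3) = 110 km^3
plate_area_km2 = volume_km3 * 1000         # spread one metre thick: 110,000 km^2
rod_length_million_km = 110e9 / 1e9        # 110e9 m^3 on a 1 m^2 base, in millions of km
sun_fraction = rod_length_million_km / 149.6   # mean Earth-sun distance ~149.6 million km
print(plate_area_km2, sun_fraction)
```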
As graphically demonstrated in the film ‘An Inconvenient Truth’ by former US Vice President Al Gore, concentrations of CO2 in the atmosphere have risen during the past century by the same amount (about 100 ppm) as that between the low values of the last ice age and the beginning of civilisation 10,000 years ago (the Holocene). The transition occurred within a few thousand years, a much shorter time span than that between the birth of Indian civilisation and today. In this comparatively short period, the melting of kilometre-thick ice sheets (like the one on Greenland) that covered all of North America down to the latitude of New York City, northern Europe, parts of Asia, Patagonia and the South Island of New Zealand resulted in a global sea level rise of more than 100 m. Remarkably, the CO2 levels of the atmosphere stabilised after reaching 280 ppm, just as they had been stable for an even longer time span at 180 ppm at the height of the last ice age. Even more remarkably, much the same cycle between moist, warm ages and dry ice ages was repeated four times in the past 440,000 years, with minimum and maximum CO2 levels each time at about the same values. What renders these figures and their constancy remarkable is the fact that the turnover time of atmospheric CO2 is only 3–4 years. Thus annual net plant production by photosynthesis, which binds CO2 in organic matter, is around 60 Gt on land and 40 Gt (by phytoplankton) in the ocean, most of which is returned by respiration of microbes and animals. About 90 Gt is exchanged by seasonal warming and cooling of the ocean surface (see below). A slight shift of only 10 per cent in any of these fluxes, or cumulative shifts in any of the other, slower processes, such as volcanic outgassing, would make itself noticeable in the span of a century. The meta-level processes orchestrating the disparate components of the global carbon cycle around 180 and then 280 ppm are not yet understood (Falkowski et al., 2003).
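The 3–4 year turnover time follows directly from the pool and flux figures quoted above (the 2.13 Gt-per-ppmv conversion is an assumed standard factor):

```python
# Turnover time of atmospheric CO2 = pool size / total annual exchange flux
atmospheric_pool_gt = 280 * 2.13     # pre-industrial atmosphere, ~600 Gt C
fluxes_gt_per_yr = 60 + 40 + 90      # land plants + ocean phytoplankton + seasonal ocean exchange
turnover_years = atmospheric_pool_gt / fluxes_gt_per_yr
print(f"Turnover time: {turnover_years:.1f} years")
```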
How they will react to the imbalance brought about by human emissions is a troubling thought.
Sinks and sources of atmospheric CO2
So, from where did the CO2 come that ushered in the warming and where did it go when the planet was cooling? The sources and sinks and the various possible mechanisms affecting the transport from and to them, respectively, are still under debate. Even more intriguing, because even less understood, are the mechanisms which maintained CO2 concentrations at the high and low level marks of warm and cold ages for thousands of years. The search is complicated by complex interactions, as the following example shows. Retreat of the ice sheets uncovered vast tracts of land on which forests subsequently flourished that took up CO2 from the atmosphere. Further, because of the lower moisture content of cold air, vast regions of the land surface which are forested today, such as a substantial portion of the Amazon rain forest, were savannah in the ice ages and the deserts were much more extensive. So one would expect that the spread of forests would take up, rather than release, CO2 during the transitions to warm ages, and vice versa. However, there is about four times more carbon locked in the soil of land ecosystems than meets the eye as vegetation on the surface. The amount of carbon in the soil (including living roots) depends not only on the type of vegetation (grasslands and forests) but also on temperature and rainfall. So it is feasible that the carbon sequestered in the drier soils of ice-age savannahs could have been released by the moist, warm conditions, conducive to growth of soil microbes and fauna, offered by forests during their spread in the warm ages. As mentioned above, this is just one example of the complications facing earth system scientists in their quest to account for the flows between the major pools of the global carbon cycle.
The oceans, on the other hand, harbour fifty times the amount of carbon in the atmosphere in the form of derivatives of CO2: dissolved carbonate, bicarbonate, carbonic acid and molecular CO2. These components of the carbonate system are in dynamic equilibrium with one another and buffer the pH of the ocean at around 8.2; the same system also buffers the pH of blood plasma albeit between 7.35 and 7.45. The concentration of CO2 dissolved in water depends, apart from the state of the carbonate system, on its temperature and the CO2 concentration of the overlying air. Warming drives CO2 out and cooling pulls CO2 into the water; similarly the higher the concentration in the air, the more dissolves in the water. This is known as the physical carbon pump, and because it is well constrained, its magnitude can be modelled with reasonable accuracy. Thus about 25 per cent of the CO2 emitted by humans over the past decade has been transferred to the ocean by this process. This excess CO2 dissolved in the water makes it more acidic, which exerts stress on organisms that make carbonate skeletons (from corals and molluscs to unicellular chalk algae). Reports of signs of stress suffered by this motley group of organisms—the victims of ocean acidification—are increasing.
Much more complex are the workings of the biological carbon pump which functions according to the same principles as cloud formation and precipitation in the air. Dissolved plant nutrients (nitrate and phosphate) are transported upward by water movement (like water vapour in the air), and, in the presence of light they are converted into organic particles by photosynthesis of phytoplankton (the unicellular plants of the ocean). As long as they are alive, phytoplankton cells stay in suspension, but when they die, or are eaten by zooplankton, the dead cells or their remnants packed in faeces sink out of the surface layer, transporting the carbon they contain into the deep ocean. Like rain drops or snow flakes in the air, their sinking rate and the depth they reach in the deep ocean before being dissolved by the activity of deep sea microbes or consumed by zooplankton depends on many factors, such as size and density of the sinking particles and the likelihood of interception by the microbes and zooplankton in the water column which convert the carbon back into CO2 by utilising oxygen. A portion of the particles reach the sea floor where they are consumed by microbes and animals and a remnant is buried in the sediments for geological time scales.
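As a rough illustration of how particle size and density control sinking rate, Stokes' law for small spheres gives the right order of magnitude. The particle radius, excess density and seawater viscosity below are illustrative assumptions, and real marine aggregates are irregular rather than ideal spheres:

```python
def stokes_sinking_speed(radius_m, particle_density, water_density=1027.0,
                         viscosity=1.4e-3, g=9.81):
    """Stokes' law terminal velocity (m/s) for a small sphere in seawater.

    Densities in kg/m^3, dynamic viscosity in Pa s; valid only for slow,
    laminar settling of small particles.
    """
    return (2.0 / 9.0) * (particle_density - water_density) * g * radius_m**2 / viscosity

# A hypothetical 100-micrometre aggregate only slightly denser than seawater:
metres_per_day = stokes_sinking_speed(50e-6, 1100.0) * 86400
print(f"Sinking speed: ~{metres_per_day:.0f} m per day")
```

A speed of some tens of metres per day means such a particle spends weeks in transit to the deep sea, ample time for interception by microbes and zooplankton.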
The gases, in particular O2 and CO2, dissolved in the ocean’s surface layer are, over time scales of a few weeks, in equilibrium with the atmosphere. The CO2 deficit caused by phytoplankton uptake is compensated fairly quickly by drawdown from the atmosphere. Equilibration of the waters of the deep ocean, on the other hand, occurs on time scales of about 1,000 years, the time required for the water of the entire ocean to pass through the surface layer in the course of vertical circulation of the ocean. Carbon exported out of the surface layer by the biological carbon pump is sequestered in the subsurface and deeper layers for time scales of tens, hundreds or a thousand years, depending on the depth to which the organic matter sinks before being converted back into CO2 by microbes and animals and on the time scales of ventilation of that water. The deeper the particles sink, the longer the CO2 is sequestered in the oceans. When deep water is transported back to the surface by upwelling, it ‘exhales’ back to the atmosphere the excess CO2 accumulated in it by the biological pump. In the order of 10-15 Gt of carbon are annually exchanged between ocean and atmosphere by the working of the biological pump (Falkowski et al., 1998). As atmospheric CO2 concentrations rise, upwelled deep water needs to out-gas less CO2 before reaching equilibrium with the atmosphere.
As touched on above, about half the CO2 emitted by humans accumulates in the atmosphere; the other half is taken up by sinks on land and in the oceans. Thus, the figures published for 2011 (Le Quéré et al., 2012) are 9.5 Gt emitted from fossil fuel combustion and cement production (3 per cent over 2010 levels) and 0.9 Gt from deforestation and fires. Measurements indicate that 3.6 Gt remained in the atmosphere, and models suggest that the ocean took up 2.6 Gt, implying that the remaining 4.2 Gt was taken up by various terrestrially based processes. The sinks of the ‘missing CO2’ are a cause for relief, as without them, atmospheric CO2 concentrations would now be closer to 500 ppm instead of 391 ppm. But they also raise worries, because they have not been properly identified and their relationship to rising CO2 concentrations is uncertain, hence also their future uptake efficiency (Schimel 2007). Clearly, earth system scientists need to redouble their efforts to identify and quantify these sinks, not only to be warned in time, but also to find out if they can be manipulated or stimulated to take up more CO2 than they are currently doing. As we shall see below, natural sinks could be stimulated or mimicked artificially. Quantitative projections indicate that the excess CO2 already in the atmosphere is going to stay there for many thousands of years and the bulk will eventually be taken up by the oceans via the acidifying surface layer.
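The bookkeeping behind the ‘missing CO2’ can be laid out explicitly using the 2011 figures quoted above; the terrestrial sink is simply the residual of the budget:

```python
# 2011 global carbon budget, in Gt C, as quoted in the text (Le Quere et al., 2012)
emissions_fossil = 9.5      # fossil fuel combustion and cement production
emissions_landuse = 0.9     # deforestation and fires
atmospheric_growth = 3.6    # measured increase in the atmosphere
ocean_uptake = 2.6          # model estimate

# The terrestrial sink is what is left once the other terms are accounted for:
land_sink = emissions_fossil + emissions_landuse - atmospheric_growth - ocean_uptake
airborne_fraction = atmospheric_growth / (emissions_fossil + emissions_landuse)
print(f"Residual land sink: {land_sink:.1f} Gt C; airborne fraction: {airborne_fraction:.2f}")
```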
Biological sequestration of carbon on land
Clearly the need to reduce emissions is extremely urgent but ongoing global warming is a problem already with us, as there is no saying how climate patterns are going to adjust themselves to the new CO2 levels. Doing nothing is certainly the worst option, so we need to explore the feasibility of manipulating or mimicking the known carbon sinks to start removing as much CO2 as possible, regardless of the emission issue. It is being argued that initiating research on CO2 removal techniques—one branch of geo-engineering—will distract attention from the paramount task of reducing emissions. But one could also argue the opposite, that more effort expended on this front could draw public attention to the magnitude and overall seriousness of the problem building up around us and highlight how restricted are the means to cope with it. After all, the amounts that could be removed by any of the available techniques are puny compared to the size of the threat looming in the sky. The impact of any CO2 removal technique on atmospheric CO2 levels will be discernible only many years after emissions have been drastically reduced (Lenton 2010). An analogy which comes to mind is that of a ship which has sprung a leak through which so much water has entered the ship that its stability is seriously threatened. As long as bailing out the water does not interfere with stopping the leak, it would not make sense to defer the former until the latter is fixed. The ‘crew’ of the ship could busy themselves with fixing the leak using technological means (developing infrastructure to tap renewable energy instead of fossil fuels) and the ‘passengers’ could be organised to bail out the water with buckets using manpower.
Technological means of sequestering carbon (artificial trees, on-site carbon capture and storage) do not appear to be viable given their huge energy demand and problems associated with the safe storage of CO2. Harnessing the biosphere is gentler because it can be carried out in a decentralised manner. Thus, a good example of the ‘bucket technique’ is biochar sequestration, which mimics savannahs by adding inert forms of carbon to soils on extensive scales. The technique holds a lot of promise and is under development also in India (Barrow, 2012). Plant wastes from agriculture are converted into elemental carbon (biochar) by using a modernised version of the technique employed for millennia to make charcoal from wood, except that biochar is flaky. Adding powdered biochar, reputed to be resistant to breakdown by microbes, to agricultural soil has been shown to have a beneficial effect on plant growth. In any case, carbon sequestration by this technique should only be employed if the beneficial effect is substantial and amply demonstrated. Since the effects will vary with soil type, extensive, local experimentation will be required, but this would certainly be worth the effort. This technique is particularly appealing because it does not require industrial-scale implementation. Indeed, it can be carried out even by poor farmers if they are provided with the necessary implements (simple stoves) and incentives (higher crop yields), and perhaps later, remuneration from a global carbon tax distributed justly amongst the nations. This is of course thinking far ahead, but it is good to know that this approach, however small its potential impact, exists. Given that total annual net plant production on land is only 60 Gt, a very small portion of which could feasibly be channelled into the soil, the scope of this technique for carbon sequestration is accordingly limited, probably to less than 1 Gt per year when developed to full scale.
Perseverance will be called for, as well as trust in the workings of the scientific enterprise—trust that scientists will have to earn from the global population.
Under ongoing climate change, afforestation (growing forest on land previously without forests) and reforestation (restoring recently deforested land), despite their many merits, have more limitations and risks as a carbon sequestration technique than biochar, which does not, by any means, imply that planting more trees will be of little avail. Indeed, wastes from the timber industry (bark and branches) could be converted into biochar and applied appropriately (Lehmann, 2007). In any case, existing forests represent a very large pool in the global carbon budget, so maintaining them is part and parcel of the efforts to reduce emissions. Thus, India’s forest cover today is estimated to hold approximately 9-10 Gt of carbon. There are innumerable other benefits accruing from forests (both natural and planted) and from trees planted in cities and on the borders of agricultural land. However, one cannot bank on forest cover as a secure, long-term carbon sequestration technique given the vulnerability of forests to fire, insect pests and fungal diseases, all likely to increase with global warming (Kurz et al., 2008). Glaring examples are the extensive forest fires that have recently occurred in the Amazon basin and in Russia and the depredations of the bark beetle, enabled by global warming, over vast stretches of the forests of western USA and Canada. Nevertheless, in addition to encouraging more natural forest cover, we should redouble our efforts in planting more trees and using wood as a building material or in furniture. The huge flux of refugees expected from low-lying river deltas as sea levels rise also does not bode well for increasing the area under forest.
Biofuels (wood pellets, plant oils, alcohol and biogas) on the other hand are proclaimed as an alternative to fossil fuels because of their purported carbon neutrality, i.e., their use should not lead to carbon emissions. This argument, however, overlooks the fact that industrial agriculture, which is the main source of biofuels today, has a large carbon footprint in the form of machines, fertiliser and irrigation and that biofuels compete with food production. Today, biofuels are coming increasingly under attack for the above reasons and also because of their limited scope, illustrated with the following example from Germany. Annual net plant production over the total area of Germany is about 100 million tonnes of carbon with an energy content equivalent to 3.4 x 1018 joules. On an areal basis, plant production in Germany will not be very different from that in India because, despite its cooler climate, it receives regular rainfall over the growing season. To return to Germany, the total energy consumption from fossil fuels, nuclear and renewable sources is 14 x 1018 joules, i.e., four times the total plant production. Since 50 per cent of the latter is appropriated as human and animal food, the limited scope for biofuels is apparent (German National Academy of Sciences, Leopoldina, 2012). Indeed, the very fact that biofuels (alcohol from maize and oil from rapeseed) have received so much promotion in the EU, in terms of advertisement and subsidies, is proof of what can be achieved by policy makers driven by the right incentives. It is, of course, also a warning of the dimensions that unchecked misuse can attain.
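The German example reduces to simple arithmetic. The energy content of roughly 34 GJ per tonne of biomass carbon assumed below is inferred from the figures in the text rather than stated there:

```python
plant_production_t_c = 100e6    # Germany's annual net plant production, tonnes of carbon
energy_per_t_c = 34e9           # assumed ~34 GJ per tonne of biomass carbon (not in the text)
biomass_energy_j = plant_production_t_c * energy_per_t_c   # ~3.4e18 J, matching the text
total_consumption_j = 14e18     # Germany's total annual energy consumption, all sources
print(f"Total plant growth covers {biomass_energy_j / total_consumption_j:.0%} of demand")
```

Even diverting the country's entire plant growth into fuel would cover only about a quarter of demand, and half of that growth is already spoken for as food and fodder.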
Similar arguments can also be made against developing marine biofuels. The total annual net plant production (by phytoplankton and sea weeds) in the entire oceans is around 40 Gt, so even if facilities were developed along the coast to grow marine plants intensively, their yield would be vanishingly small compared to the demand. Further, growing oil-producing algae for fuel would require extensive, industrial-scale infrastructure with a large energy demand for pumping and filtering water, not to mention recycling nutrients. The same facilities could be used to better purpose for growing algae for aquaculture and other uses, such as pharmaceuticals or alginates. It also needs to be pointed out that the same area of the biofuel facility covered with photovoltaic cells would yield twenty times more energy in the form of clean electricity, simply because plants, including algae, are able to utilise less than one per cent of incoming solar energy even under optimal conditions, compared to the >20 per cent already achieved by solar technology (Lenton, 2010).
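The factor-of-twenty advantage of photovoltaics follows directly from the efficiencies quoted. The mean insolation figure below is an illustrative assumption and cancels out of the comparison:

```python
mean_insolation = 200.0     # W/m^2, an assumed annual-average solar input
pv_efficiency = 0.20        # >20 per cent already achieved by solar technology
plant_efficiency = 0.01     # at most ~1 per cent, even under optimal conditions
pv_yield = mean_insolation * pv_efficiency        # W/m^2 delivered as electricity
plant_yield = mean_insolation * plant_efficiency  # W/m^2 stored as chemical energy
advantage = pv_yield / plant_yield                # the factor of ~20 quoted in the text
print(pv_yield, plant_yield, advantage)
```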
There is also a lesson from history. Civilisations before industrialisation were run almost entirely on biofuels in the form of wood as fuel for cooking, making bricks and heating houses, and grass for feeding the animal ‘machines’ (horses and oxen). The fact that the exceedingly low per capita energy consumption of pre-industrial civilisations compared with today already resulted in widespread deforestation of regions occupied by thriving human populations, from antique to modern times, needs to be stressed in this connection. However, there were also highly organised civilisations in Egypt and Asia that lasted for millennia, so one wonders how they solved their demand for fuels. Possibly, climate is the decisive factor because of the wood required for heating during cold winters. In this connection it would be interesting to know how much wood was burnt to make the huge amount of bricks that went into construction of the ancient towns of the Harappan civilisation in the Indus basin with their sophisticated drainage systems.
Biological sequestration of carbon in the ocean
As outlined above, the ocean sequesters carbon from the atmosphere by the action of the biological carbon pump which is driven by phytoplankton production. Along the continental margins, the supply of the nutrient nitrate limits the magnitude of plant production but in the open ocean, iron, which is highly insoluble in alkaline, oxygenated water, is the limiting nutrient. The major supply of iron to the open ocean is by dust blown off from arid regions and deserts of the continents. It follows that the most severely iron-limited regions of the ocean are the ones furthest away from land. This applies particularly to the entire Southern Ocean which is rich in nitrate and phosphate (the other limiting bulk nutrient) but has low productivity because of iron limitation. Phytoplankton require 1 atom of nitrogen to bind 6 atoms of carbon in organic matter but the requirement for iron is about 100,000 times lower. So the iron in a sprinkling of dust over the ocean can stimulate phytoplankton growth sufficiently to greatly enhance export to the deep sea and sea floor through the biological carbon pump. Measurements of samples from ice cores and ocean sediments indicate that during the drier ice ages much more dust was blown into the oceans than during the moist warm ages, so phytoplankton productivity in the oceans must have been correspondingly higher.
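The leverage implied by the roughly 100,000:1 carbon-to-iron atomic ratio can be made concrete. Under that assumption, a single tonne of iron could in principle allow phytoplankton to bind on the order of twenty thousand tonnes of carbon:

```python
# Assumed C:Fe atomic ratio of ~100,000:1, as quoted in the text
molar_mass_fe = 55.85       # g/mol
molar_mass_c = 12.01        # g/mol
c_per_fe_atoms = 100_000

mol_fe_per_tonne = 1e6 / molar_mass_fe          # moles of Fe in one tonne (1e6 g)
tonnes_c = mol_fe_per_tonne * c_per_fe_atoms * molar_mass_c / 1e6
print(f"One tonne of iron could bind ~{tonnes_c:,.0f} tonnes of carbon")
```

This enormous multiplier is why a mere sprinkling of dust, or of added iron sulphate, can trigger blooms visible from space.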
In his iron hypothesis, John Martin (1990), who first showed that iron deficiency can limit phytoplankton growth, proposed that a more active biological carbon pump in the Southern Ocean during the ice ages was a major sink of atmospheric CO2. Martin suggested carrying out iron fertilisation experiments at scales of tens to hundreds of square kilometres in the open ocean to test his hypothesis. The proposal was greeted with disbelief by the scientific community (including myself at the time), as it was thought that horizontal mixing would quickly dilute the fertilised water and that the added iron would immediately precipitate to insoluble rust and sink out. Nevertheless, Martin succeeded in acquiring the necessary funds, and the first ocean iron fertilisation (OIF) experiment (IRONEX 1) was carried out in the Equatorial Pacific in 1993, unfortunately shortly after his untimely death. Since then, a dozen OIF experiments have been carried out in various iron-limited regions of the open ocean, each releasing up to 10 tonnes of dissolved iron sulphate over tens of square kilometres and eliciting phytoplankton blooms visible in satellite images that could be studied for up to 5 weeks.
Most OIF experiments stimulated phytoplankton blooms dominated by diatoms, a group of algae with protective shells made of silica (similar to glass). Natural diatom blooms are known to die and sink en masse, but the OIF experiment EIFEX carried out in 2004 was the first to show that the bulk of the iron-induced diatom biomass sank out to the deep sea and sediments about 4-6 weeks after fertilisation, thereby sequestering a substantial amount of carbon from the atmosphere for centuries and longer (Smetacek et al., 2012). Because grazing pressure on the diatoms was surprisingly low despite large grazer populations, the growth and export of diatom biomass had little effect on the ecosystem. The joint Indo-German OIF experiment (LOHAFEX) was carried out in 2009 in silicon-limited water precluding diatom growth to test the effect of fertilisation on the non-diatom phytoplankton. The populations of small flagellated phytoplankton doubled biomass in about 2 weeks similar to the diatoms but, unlike them, did not increase in the subsequent 3.5 weeks because they were heavily grazed by the zooplankton. Much of the carbon produced by the bloom accumulated as dissolved organic carbon in the surface layer and only a negligible amount sank out of it (Naqvi and Smetacek, 2011). However, the ultimate fate of the added iron could not be determined—did it sink out in the course of recycling by itself or did it eventually export carbon to greater depths, e.g. in the form of dead zooplankton? Answering this crucial question will require longer-term experiments.
Although both experiments have increased our understanding of ocean ecosystem functioning and provided data to parameterise models of ocean carbon cycles, they have also been plagued by dilution with surrounding, unfertilised water, which hampered quantification of the processes. Nevertheless, the amount of carbon which sank below 1,000 m depth in the form of aggregated dead algal cells in the EIFEX experiment was at least 10 g per square metre, without correcting for dilution. Observations of natural blooms suggest that a large portion will have settled on the sea floor as a light fluffy layer and provided food to the deep-sea fauna. The CO2 released at this depth (an average of 3,700 m) would reside in the ocean for about 1,000 years. How much is this on a relevant scale? The overall area of the Southern Ocean is around 50 million km2, so scaling up the EIFEX value of 10 g carbon/m2 to this area would be equivalent to 0.5 Gt, which is not much compared to emissions but too much to ignore. The amount of iron required, at 0.1 tonne/km2 of ocean surface, would be 5 million tonnes. The EIFEX bloom was clearly overdosed, so the amount actually required is likely to be less than half this figure. For scale, compare it with the total of 7 Gt of goods and raw materials, of which 2 Gt is petroleum in tankers, shipped annually over the oceans today. Ferrous sulphate is an unwanted by-product of many industries, and the quantities required are freely available at low prices. Spreading the iron will not require much fuel: the fertilising ships or barges could be anchored at strategic sites and distances apart, and dispersal left to the strong currents of the Antarctic Circumpolar Current. Diatom blooms comparable to the EIFEX one could theoretically be generated in the spring over the entire area, as silicon is available then.
In the southern half of the Southern Ocean enough silicon is available throughout the year, so more blooms could be induced after the spring. These rough estimates are only intended to convey a sense of the upper limits of the sequestration potential offered by large-scale OIF and its potential carbon footprint. In any case, larger-scale, longer-term experiments must be first carried out to weigh the pros and cons of OIF before applying the technique as a climate mitigation measure.
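The rough upper-limit estimates above can be checked with simple arithmetic; all input figures are the text's own assumptions (EIFEX export, Southern Ocean area, iron dose):

```python
# Check of the upper-limit sequestration estimate: EIFEX export of
# ~10 g C/m2 below 1,000 m, a Southern Ocean area of ~50 million km2,
# and an iron dose of ~0.1 tonne/km2, as given in the text.

AREA_KM2 = 50e6          # Southern Ocean area, km2
EXPORT_G_PER_M2 = 10.0   # carbon exported per square metre, grams
IRON_T_PER_KM2 = 0.1     # iron dose per square kilometre, tonnes

# 10 g/m2 equals 10^7 g/km2, i.e. 10 tonnes of carbon per km2
export_t_per_km2 = EXPORT_G_PER_M2 * 1e6 / 1e6

carbon_gt = export_t_per_km2 * AREA_KM2 / 1e9    # total, gigatonnes
iron_mt = IRON_T_PER_KM2 * AREA_KM2 / 1e6        # total, million tonnes

print(f"Carbon sequestered: ~{carbon_gt:.1f} Gt per fertilised season")
print(f"Iron required:      ~{iron_mt:.0f} million tonnes")
```

The numbers reproduce the 0.5 Gt of carbon and 5 million tonnes of iron quoted above; halving the iron dose, as the overdosed EIFEX bloom suggests is possible, halves only the second figure.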
Unfortunately, despite the wealth of insight into the workings of planktonic ecosystems provided by OIF experiments, they have not achieved the popularity they deserve in the scientific community because of their geo-engineering implications. It is feared that private companies could use OIF on a large scale to make profits on the carbon credit market offered by the Kyoto Protocol. However, international regulations by the London Convention are now in place that allow experiments but restrict commercial-scale endeavours, so it is hoped that more OIF experiments will be carried out by professional scientists equipped with permits in the not-too-distant future. It might be mentioned that the possible negative side effects of sustained OIF pointed out in the literature, such as anoxia in the deep ocean, production of the GHGs methane and nitrous oxide, or the spread of toxic algal blooms, are worst-case scenarios based on thought experiments. Certainly the effects of small-scale OIF experiments are innocuous, if not beneficial, as they provide food to the animals of the plankton and the underlying sea floor. As in any medical treatment, and we are dealing with a fevered planet, the efficacy of the treatment crucially depends on the dosage. If it ever comes to large-scale OIF, control over the treatment should remain firmly in the hands of the international scientific community, with the work carried out by a non-profit UN agency analogous to the International Atomic Energy Agency or the World Health Organisation (Smetacek and Naqvi, 2008). In short, OIF is essentially a ‘bucket technology’, requiring comparatively modest infrastructure and technological development, that could be expanded, redirected to other locations, or stopped at any time.
Learning from history
To continue the leaking-ship analogy in today’s context, the captain (global government) is intent on meeting the goals of the company (the globalised economy) and does not want to be bothered by the leak (GHGs and dwindling resources). He is pinning his hopes on technological advances that will, in some miraculous way, come to the rescue and give his company a further lease on life. Stories are making the rounds on the ship to bolster his views and calm the passengers (the public), who have heard of the leak and do not know what to believe. One particularly revealing story circulating on the internet is the long-forgotten horse-manure problem faced by New York City a hundred years ago. Apparently, the manure from horse-drawn goods and passenger traffic piled up on the streets to such an extent that it impeded traffic and pedestrians and bred swarms of flies that spread diseases and caused many human deaths. The problem was solved, not by organising the removal of manure from the streets by giving it a value and thereby putting it to use, but by inventing and deploying automobiles, which replaced the horses. The moral intended by those circulating the story on the internet today is that human ingenuity will drive technological progress that will take care of problems as they arise. No need to worry about global warming.
The story actually has a powerful message for today, as one can see some history repeating itself. According to Morris (2007), a main topic of the first international urban planning conference, held in New York in 1898, was dealing with horse manure, projected to rise to the third floor of Manhattan houses by 1930. “All efforts to mitigate the problem were proving woefully inadequate. Stumped by the crisis, the urban planning conference declared its work fruitless and broke up in three days instead of the scheduled ten.” The obvious solution to the horse-manure problem would have been to recycle it between consumers and producers: animal manure is a valuable resource and would have been required by the hay and grain producers to fertilise their fields, particularly because this was before artificial fertilisers were produced in factories. The fodder of the horses living in the city must have come from the surrounding pastures and agricultural fields and been transported to the city by carriages and carts pulled by horses or mules (the trucks of the biofuel era). Since the resultant manure accumulated in the cities, the carts must have gone back empty, so one would expect that the soil of the hay fields would sooner or later become depleted of nutrients. The same applies to the carts bringing in the food to feed the inhabitants.
Introduction of the railways greatly increased the volume of goods entering the city, presumably also horse fodder from further afield, which had to be distributed by more horse-drawn carts. So this particular technological advance based on a new fuel—the coal-burning steam engine—actually exacerbated the horse-manure problem in the city (Morris 2007) and must have hastened degradation of the surrounding cultivated soil, agricultural abandonment and rural exodus to the city, and promoted deforestation further afield. Indeed, much of the eastern USA was deforested to make room for agriculture during the nineteenth century (Drummond and Loveland 2010), of which a significant part will have been for the production of hay and oats to feed the horses and mules employed for civilian and military transportation. Significantly, although the causes are not elaborated, in the eastern USA “historical land-use trends have been dominated by a decline in agriculture and subsequent reforestation as cropland, pasture, and other cleared lands were abandoned in the 19th and early 20th centuries” (Drummond and Loveland 2010). This is surprising in view of the fact that food demand for humans and horses was increasing in that period. It is very likely that soil impoverishment was a major cause of agricultural failure, which would imply that the rural populations surrounding the cities were the real losers of the horse-manure problem.
So the larger-scale lesson to be learned from the story of how automobiles came to the rescue of the horse-manure problem will emerge from examining why it was not possible to implement the obvious, logical solution—establishing a recycling network between the consumers and producers of horse food simply by using the existing transport infrastructure more efficiently. This would have entailed taxation of the profits of the horse owners to lend value to the manure to pay for collecting and transporting it back to the farmers. Keeping the city clean would have been to the greater good by creating jobs, slowing immigration to the city and setting an example of sustainable growth. One wonders, from today’s vantage point, what prevented the citizens of New York City from electing politicians willing to tackle a growing problem adversely affecting everyday life, the solution for which must have seemed so obvious. Surely, there will be lessons to be drawn that are highly pertinent for today’s problems.
The horse-manure problem for agriculture was indeed solved by a major technological advance: artificial production of nitrogen fertiliser. It was projected in 1898 that nitrogen demand would exceed the supply from saltpetre mines in Chile within 20 years and that, unless chemistry came to the rescue, western countries would be threatened by famine (Wikipedia). Indeed, the Haber-Bosch process solved the looming problem more or less in time, but now we are reaching the end of phosphate reserves, for which no chemical advance is in sight; mining accumulated wastes (e.g. lake sediments) and recycling them is the only long-term solution. Today, the cars’ ‘manure’ is in the sky, while the real manure from industrial livestock farming based on artificial fertiliser is now a problem for the farmers and continues to be an environmental threat, simultaneously draining phosphate and potash reserves. Plastic ‘manure’ clutters the land and oceans, and safe disposal of radioactive wastes has yet to be solved; humanity has clearly reached the end of the tether of its non-sustainable way of life, the one that made a problem out of horse manure.
Where there’s a will there’s a way
The horse-manure problem and its repercussions are paradigmatic of the one-way-street mode of wasteful consumption that has been extending its grip over the earth during the past decades. The lesson to be learned is that technological advances have, very obviously, vastly improved the quality of life of the majority of people on the globe, but their implementation needs to be controlled and their wastes guided onto the path of sustainability. Thus, the chemical industry, yielding to public pressure exerted by green NGOs, cleaned up its wastes in the western countries by the end of the last century and continues to thrive and grow there, but in a much cleaner manner. It is to be hoped that similar policies will be enforced in developing countries sooner rather than later to curb air and water pollution in the interests of the greater good of human inhabitants and their environment. The benefits might not be so obvious this time, as chokingly polluted air will be replaced by sweltering heat edging above the bearable mark, particularly in South Asia.
The current impasse in international negotiations to curb GHG emissions is not likely to last long, given the increasing likelihood of severe weather events and the ongoing, inexorable melting of both polar ice caps, to which the world will have to wake up, the sooner the better. India will be impacted particularly severely, by killing heat waves, erratic monsoons and millions of refugees driven from the deltas by sea level rise, so it must be at the forefront of guiding one-way-street consumerism into recycling channels. Indeed, India has the oldest continuous civilisation in the world, implying that the art of sustainability has been practised there since ancient times. India now has an obligation to show the rest of the world how maintaining a balance was managed. Thus, in India animal manure has always been, and still is, a valuable resource, even in the cities. Unlike western cattle, which are crammed with high-quality food to maximise meat and milk production, a main function of Indian cattle in traditional agriculture is to convert plant wastes and vegetation (including leaves from lopped trees) from the surrounding countryside into fuel and fertiliser, a task they largely accomplish by foraging for themselves. Further, oxen play a key role in transport and ploughing, and cow milk is an added benefit. The cattle ‘pastures’ on the outskirts serve as the extensive toilets of the village, so plant nutrients are recycled and the rivers stay clean. In this recycling system every cow is valuable, so it is no wonder that, to the bewilderment of the rest of the world, cattle are not slaughtered for food in India (Harris 1975). The derogatory connotations of the widely used metaphor ‘sacred or holy cow’ indicate that the merits of the unique recycling system established in ancient India, and still practised widely today, have not been appreciated.
No doubt, cattle contribute to global warming by producing methane and India harbours the largest livestock population in the world; further, soot aerosols from burning manure-based fuel exacerbate global warming. Nevertheless, comparing Indian cows to those of developed countries is like comparing bicycles to cars—by producing fuel from sources difficult for humans to tap, the cows of traditional rural India helped in conserving natural habitat. The steep rise in population of the past decades has pushed the traditional system to the limits of its viability. Technological innovations are called for that keep the principle but change the ways in which it is practised.
Another unique feature of India, related to the role of the cow, is the widespread prevalence of vegetarianism, which became a part of religion 2,500 years ago. The substantial contribution of a low-meat diet to sustainability can be illustrated with the example of Germany. The current German diet in terms of calories is 60 per cent plant (bread, potatoes, vegetables, fruit) and 40 per cent animal (dairy, egg and meat products). The plant food consumed directly by humans amounts to 10 million tonnes of carbon per year, but the animal food demand requires 50 million tonnes of carbon, of which 10 million tonnes are imported (German National Academy of Sciences Leopoldina, 2012). Clearly, reducing carnivory in western countries by just 50 per cent would open up land for carbon sequestration, reduce carbon emissions from intensive agriculture and also help mitigate the looming phosphate shortage due to industrial livestock farming. India’s contribution to reducing the attractiveness of meat dishes would be to popularise Indian vegetarian cuisine, which is both healthy and tasty if properly prepared. The task cannot be left to Indian restaurants abroad, which vary greatly in the quality of the food they serve, but will need to be tackled in an organised manner, by training cooks within India to prepare traditional and modern Indian vegetarian dishes in foreign countries. Demand for such trained cooks, equipped with degrees from reputable training centres, could be raised by an intelligent, multi-pronged advertising campaign aimed at the growing numbers of people in the west interested in reducing their meat intake for whatever reason, whether safeguarding their own health, compassion towards animals or reducing their carbon footprint. The cuisine campaign would debunk the popular notion in the west that, by denying themselves the pleasures of meat-eating, vegetarians purposely reduce their happiness index.
Clearly, the sensory enjoyment of a well-prepared vegetable thali or a dish of chaat could work wonders in this regard. An attractive alternative to meat provided by a new generation of trained cooks, analogous to the army of Indian IT specialists, is just one facet of what India would have to offer a world on the brink of crisis.
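The arithmetic behind the German example can be laid out explicitly. The figures are those quoted from the Leopoldina (2012) report; the function name is illustrative:

```python
# Sketch of the German diet figures quoted from the Leopoldina (2012)
# report: 10 Mt C/yr of plant food eaten directly versus 50 Mt C/yr of
# plant carbon routed through livestock (10 Mt of which is imported).

PLANT_FOOD_MT_C = 10.0    # plant food eaten directly, Mt carbon per year
ANIMAL_FEED_MT_C = 50.0   # plant carbon fed to livestock, Mt per year

def feed_freed_by_meat_cut(cut_fraction: float) -> float:
    """Mt of plant carbon per year no longer needed for livestock
    if animal-food demand falls by the given fraction."""
    return ANIMAL_FEED_MT_C * cut_fraction

freed = feed_freed_by_meat_cut(0.5)
print(f"Halving animal-food demand frees ~{freed:.0f} Mt C/yr,")
print(f"i.e. {freed / PLANT_FOOD_MT_C:.1f}x Germany's direct plant-food demand")
```

On these figures, a 50 per cent cut in animal food frees two and a half times as much plant carbon as Germans eat directly, which is the land- and emissions-sparing leverage the text refers to.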
Yet another unique feature of India, taken for granted within the country and not adequately appreciated abroad, is peaceful coexistence with potentially dangerous wild animals in the cause of biodiversity preservation. Indian civilisation was the only one outside Africa that preserved the megafauna—elephants, rhinos, wild cattle, lions and tigers—and their habitat across broad stretches of the territory it occupied throughout South and Southeast Asia well into the last millennium. Elsewhere in the world, 90 per cent of the megafauna, including various kinds of elephants (mammoths and mastodons), perished by the end of the last ice age, coinciding with the rise, or first appearance, of humans. The origins of the attitude which enabled peaceful coexistence are shrouded in mystery, but compassion for animals has always been a hallmark of Indian civilisation. Thus, hunting is listed as one of the sins (or vices) in Indian scriptures, although it has been widely practised since ancient times. Big-game hunting, at first a privilege of the nobility, became a popular pastime in the first half of the 20th century, as much as it did in western countries. However, when India realised in 1973 that the tiger was on the brink of extinction, a law banning hunting altogether was adopted in both houses of parliament at short notice, and former hunters turned ardent conservationists (Rangarajan, 2001). If democratic India can ban hunting to save the ferocious tiger at the drop of a hat, what else can she not do? Can one imagine western hunters locking up their cherished guns for good because of popular sentiment? I speak from personal experience, as I was an avid hunter in the Kumaon foothills in my youth and am now ashamed of what I once used to boast about. Condemnation of hunting is a sentiment gaining ground in the West. In a few decades, Indians should be able to say proudly, “We led the world in respecting the right of wild animals not to be terrorised by hunters.”
Today, the dire status of tigers is front-page news in India and, were it not for the obscenely high prices paid for tiger parts outside India, the tiger population would have increased substantially, as it did before tiger-superstitious individuals became wealthy. The same applies to the ivory trade, which has apparently dried up in India, despite an ancient tradition, in contrast to Japan and China where it still flourishes. Pre-colonial India was a proverbially rich country whose contemplative way of life and religion, coupled with respect for the environment and compassion for animals, were voluntarily adopted widely in eastern and south-eastern Asia in the course of the past two millennia (Sen 2005). The economic structure on which the way of life of pre-colonial India was based, in particular the role of recycling, needs to be systematically researched by interdisciplinary teams applying modern scientific methods, and the appropriate lessons distilled for promulgation. Similarly, the evolution of the mindset which did not destroy the fierce megafauna, protected the environment by establishing ‘sacred groves’ and subsequently developed Indian vegetarian cuisine would be a promising field of research in the context of the ‘happiness index’ of a people. Awareness of the traits unique to India dealt with above could be raised by the media and the various new communication networks. Awareness based on the best scientific knowledge could be channelled into an enthusiastic, nation-wide effort to combat global climate change and habitat loss that could, in turn, serve as an incentive to the rest of the world. There is much more to ‘Incredible India’, slumbering in the roots of her ancient, continuous civilisation, than the image currently projected to the rest of the world.
I am greatly indebted to Paula Kullberg for regular updates from the climate front.
- Barrow, C.J. (2012), ‘Biochar: Potential for countering land degradation and for improving agriculture’, Applied Geography, 34: 21-28, doi: 10.1016/j.apgeog.2011.09.008, ISSN: 0143-6228.
- Coumou, D., Rahmstorf, S. (2012), ‘A decade of weather extremes’, Nature Climate Change, 2: 491-496, doi: 10.1038/nclimate1452.
- Drummond, M.A., Loveland, T.R. (2010), ‘Land-use pressure and a transition to forest-cover loss in the eastern United States’, BioScience, 60: 286-298, ISSN: 0006-3568.
- Falkowski, P.G., Barber, R.T., Smetacek, V. (1998), ‘Biogeochemical controls and feedbacks on ocean primary production’, Science, 281: 200-206, doi: 10.1126/science.281.5374.200.
- Falkowski, P., Scholes, R.J., Boyle, E., Canadell, J., Canfield, D., Elser, J., Gruber, N., Hibbard, K., Högberg, P., Linder, S., Mackenzie, F.T., Moore III, B., Pedersen, T., Rosenthal,Y., Seitzinger, S., Smetacek, V., Steffen, W. (2000), ‘The global carbon cycle: A test of our knowledge of earth as a system’, Science, 290: 291-296.
- German National Academy of Sciences Leopoldina (2012), ‘Bioenergy: Chances and Limits’, ISBN: 978-3-8047-3081-6, www.leopoldina.org: 1-118.
- Grantham, J. (2012), ‘Be persuasive. Be brave. Be arrested (if necessary)’, Nature: World View, 491: 303, doi: 10.1038/491303a.
- Harris, M. (1975), ‘Cows, Pigs, Wars and Witches: The Riddles of Culture’, Hutchinson & Co.: London, ISBN: 0-09-122750-X.
- Intergovernmental Panel on Climate Change (2007), ‘WG1 Fourth Assessment Report: The Physical Science Basis’, Climate Change 2007, www.ipcc.ch/SPM2feb07.pdf
- Koven, C.D., Ringeval, B., Friedlingstein, P., Ciais, P., Cadule, P., Khvorostyanov, D., Krinner, G., Tarnocai, C. (2011), ‘Permafrost carbon-climate feedbacks accelerate global warming’, Proceedings of the National Academy of Sciences, USA, 108(36): 14769-14774, doi: 10.1073/pnas.1103910108.
- Kurz, W.A., Stinson, G., Rampley, G.J., Dymond, C.C., Neilson, E.T. (2008), ‘Risk of natural disturbance makes future contribution of Canada’s forests to the global carbon cycle highly uncertain’, Proceedings of the National Academy of Sciences, USA, 105: 1551-1555.
- Lehmann, J. (2007), ‘A handful of carbon’, Nature, 447, 143-144, doi:10.1038/447143a.
- Le Quéré, C., Andres, R. J., Boden, T., Conway, T., Houghton, R. A., House, J. I., Marland, G., Peters, G. P., van der Werf, G., Ahlström, A., Andrew, R. M., Bopp, L., Canadell, J. G., Ciais, P., Doney, S. C., Enright, C., Friedlingstein, P., Huntingford, C., Jain, A. K., Jourdain, C., Kato, E., Keeling, R., Levis, S., Levy, P., Lomas, M., Poulter, B., Raupach, M. R., Schwinger, J., Sitch, S., Stocker, B. D., Viovy, N., Zaehle, S., and Zeng, N. (2012), ‘The global carbon budget 1959-2011’, Earth System Science Data-Discussions, 5, 1107-1157, doi: 10.5194/essdd-5-1107-2012.
- Martin, J. H. (1990), ‘Glacial-interglacial CO2 change: The iron hypothesis’, Paleoceanography 5, 1-13, doi: 10.1029/PA005i001p00001.
- Morris, E. (2007), ‘From Horse Power to Horsepower’, ACCESS, 30, 1-9.
- Naqvi, S.W.A., Smetacek, V. (2011), ‘Ocean iron fertilization’, in Jacquet, P., Pachauri, R.K., Tubiana, L. (eds.), Oceans: The New Frontier: A Planet for Life 2011, TERI Press, 197-205.
- Rangarajan, M. (2001), ‘India’s Wildlife History’, Permanent Black: New Delhi, 135, ISBN: 81-7824-011-4 (hbk).
- Schimel, D. (2007) ‘Carbon cycle conundrums’, Proceedings of the National Academy of Sciences: USA, 104, 18353-18354.
- Sen, A. (2005), ‘The Argumentative Indian: Writings on Indian History, Culture and Identity’, Penguin Books, ISBN: 0141012110.
- Smetacek, V. and Naqvi, S.W.A. (2008), ‘The next generation of iron fertilization experiments in the Southern Ocean’, Philosophical Transactions of The Royal Society A: Mathematical, Physical and Engineering Sciences, 366, 3947-3967.
- Smetacek, V., Klaas, C., Strass, V.H., Assmy, P., Montresor, M., Cisewski, B., Savoye, N., Webb, A., D’Ovidio, F., Arrieta, J.M., Bathmann, U., Bellerby, R., Berg, G.M., Croot, P., Gonzalez, S., Henjes, J., Herndl, G.J., Hoffmann, L.J., Leach, H., Losch, M., Mills, M.M., Neill, C., Peeken, I., Roettgers, R., Sachs, O., Sauter, E., Schmidt, M.M., Schwarz, J., Terbrueggen, A., Wolf-Gladrow, D. (2012), ‘Deep carbon export from a Southern Ocean iron-fertilized diatom bloom’, Nature, 487: 313-319, doi: 10.1038/nature11229.