Britannica LaunchPacks | Human Populations and Their Environment

Human Populations and Their Environment For Grades 9-12

This Pack contains:

6 ARTICLES 6 IMAGES 1 VIDEO


acid rain

also called acid precipitation or acid deposition, precipitation possessing a pH of about 5.2 or below primarily produced from the emission of sulfur dioxide (SO₂) and nitrogen oxides (NOₓ; the combination of NO and NO₂) from human activities, mostly the combustion of fossil fuels. In acid-sensitive landscapes, acid deposition can reduce the pH of surface waters and lower biodiversity. It weakens trees and increases their susceptibility to damage from other stressors, such as drought, extreme cold, and pests. In acid-sensitive areas, acid rain also depletes soil of important plant nutrients and buffers, such as calcium and magnesium, and can release aluminum, bound to soil particles and rock, in its toxic dissolved form. Acid rain contributes to the corrosion of surfaces exposed to air pollution and is responsible for the deterioration of limestone and marble buildings and monuments.

The phrase acid rain was first used in 1852 by Scottish chemist Robert Angus Smith during his investigation of rainwater chemistry near industrial cities in England and Scotland. The phenomenon became an important part of his book Air and Rain: The Beginnings of a Chemical Climatology (1872). It was not until the late 1960s and early 1970s, however, that acid rain was recognized as a regional environmental issue affecting large areas of western Europe and eastern North America. Acid rain also occurs in Asia and parts of Africa, South America, and Australia. As a global environmental issue, it is frequently overshadowed by climate change. Although the problem of acid rain has been significantly reduced in some areas, it remains an important environmental issue within and downwind from major industrial and industrial agricultural regions worldwide.

Chemistry of acid deposition

Acid rain is a popular expression for the more scientific term acid deposition, which refers to the many ways in which acidity can move from the atmosphere to Earth’s surface. Acid deposition includes acidic rain as well as other forms of acidic wet deposition—such as snow, sleet, hail, and fog (or cloud water). Acid deposition also includes the dry deposition of acidic particles and gases, which can affect landscapes during dry periods. Thus, acid deposition is capable of affecting landscapes and the living things that reside within them even when precipitation is not occurring.


The nitrogen cycle.

Encyclopædia Britannica, Inc.

Acidity is a measure of the concentration of hydrogen ions (H⁺) in a solution. The pH scale measures whether a solution is acidic or basic. Substances are considered acidic below a pH of 7, and each unit of pH below 7 is 10 times more acidic, or has 10 times more H⁺, than the unit above it. For example, rainwater with a pH of 5.0 has a concentration of 10 microequivalents of H⁺ per litre, whereas rainwater with a pH of 4.0 has a concentration of 100 microequivalents of H⁺ per litre.
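Because the pH scale is logarithmic, the figures above can be verified with one line of arithmetic. The following minimal sketch (in Python, used here purely as a calculator; the function name is our own invention) converts a pH value to microequivalents of H⁺ per litre:

```python
# pH is the negative base-10 logarithm of the hydrogen ion concentration,
# so [H+] in moles per litre is 10**(-pH). One mole of H+ is one equivalent,
# and multiplying by 1e6 converts moles per litre to microequivalents per litre.

def h_ion_microequivalents_per_litre(ph: float) -> float:
    return 10 ** (-ph) * 1e6

print(h_ion_microequivalents_per_litre(5.0))  # ~10 µeq/L  (rainwater at pH 5.0)
print(h_ion_microequivalents_per_litre(4.0))  # ~100 µeq/L (ten times more acidic)
```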

Major sulfur-producing sources include sedimentary rocks, which release hydrogen sulfide gas, and…

Encyclopædia Britannica, Inc.

Normal rainwater is weakly acidic because of the absorption of carbon dioxide (CO₂) from the atmosphere—a process that produces carbonic acid—and from organic acids generated from biological activity. In addition, volcanic activity can produce sulfuric acid (H₂SO₄), nitric acid (HNO₃), and hydrochloric acid (HCl) depending on the emissions associated with specific volcanoes. Other natural sources of acidification include the production of nitrogen oxides from the conversion of atmospheric molecular nitrogen (N₂) by lightning and the conversion of organic nitrogen by wildfires. However, the geographic extent of any given natural source of acidification is small, and in most cases it lowers the pH of precipitation to no more than about 5.2.

SO₂ and NOₓ emissions in the U.S., 2008.

Encyclopædia Britannica, Inc.

Anthropogenic activities, particularly the burning of fossil fuels (coal, oil, natural gas) and the smelting of metal ores, are the major causes of acid deposition. In the United States, electric utilities produce nearly 70 percent of SO₂ and about 20 percent of NOₓ emissions. Fossil fuels burned by vehicles account for nearly 60 percent of NOₓ emissions in the United States. In the atmosphere, sulfuric and nitric acids are generated when SO₂ and NOₓ, respectively, react with water. The simplest reactions are:

SO₂ + H₂O → H₂SO₄ ⇌ H⁺ + HSO₄⁻ ⇌ 2H⁺ + SO₄²⁻

NO₂ + H₂O → HNO₃ ⇌ H⁺ + NO₃⁻

These reactions in the aqueous phase (for example, in cloud water) create wet deposition products. In the gaseous phase they can produce acidic dry deposition. Acid formation can also occur on particles in the atmosphere.

Where fossil fuel consumption is large and emission controls are not in place to reduce SO₂ and NOₓ emissions, acid deposition will occur in areas downwind of emission sources, often hundreds to thousands of kilometres away. In such areas the pH of precipitation can average 4.0 to 4.5 annually, and the pH of individual rain events can sometimes drop below 3.0. In addition, cloud water and fog in polluted areas may be many times more acidic than rain falling over the same region.

Many air pollution and atmospheric deposition problems are intertwined with one another, and these problems are often derived from the same cause, namely the burning of fossil fuels. In addition to acid deposition, NOₓ emissions along with hydrocarbon emissions are key ingredients in ground-level ozone (photochemical smog) formation, which is one of the most widespread forms of air pollution. The SO₂ and NOₓ emissions can generate fine particulates, which are harmful to human respiratory systems. Coal combustion is the leading source of atmospheric mercury, which also enters ecosystems by wet and dry deposition. (A number of other heavy metals, such as lead and cadmium, and various particulates are also products of unregulated fossil fuel combustion.) Acid deposition of nitrogen derived from NOₓ emissions creates additional environmental problems. For example, many lake, estuarine, and coastal marine systems receive too much nitrogen from atmospheric deposition and terrestrial runoff. This eutrophication (or over-enrichment) causes the overgrowth of plants and algae. When these organisms die and decompose, they deplete the dissolved oxygen supply necessary for most aquatic life in water bodies. Eutrophication is considered to be a major environmental problem in lake, coastal marine, and estuarine ecosystems worldwide.


Ecological effects of acid deposition

Effects on lakes and rivers

The regional effects of acid deposition were first noted in parts of western Europe and eastern North America in the late 1960s and early 1970s when changes in the chemistry of rivers and lakes, often in remote locations, were linked to declines in the health of aquatic organisms such as resident fish, crayfish, and clam populations. Increasing amounts of acid deposition in sensitive areas caused tens of thousands of lakes and streams in Europe and North America to become much more acidic than they had been in previous decades. Acid-sensitive areas are those that are predisposed to acidification because the region’s soils have a low buffering capacity, or low acid-neutralizing capacity (ANC). In addition, acidification can release aluminum bound to soils, which in its dissolved form can be toxic to both plant and animal life. High concentrations of dissolved aluminum released from soils often enter streams and lakes. In conjunction with rising acidity in aquatic environments, aluminum can damage fish gills and thus impair respiration. In the Adirondack Mountain region of New York state, research has shown that the number of fish species drops from five in lakes with a pH of 6.0 to 7.0 to only one in lakes with a pH of 4.0 to 4.5. Other organisms are also negatively affected, so that acidified bodies of water lose plant and animal diversity overall. These effects can ripple throughout the food chain.

High acidity, especially from sulfur deposition, can accelerate the conversion of elemental mercury to its deadliest form: methyl mercury, a neurological toxin. This conversion most commonly occurs in wetlands and water-saturated soils where low-oxygen environments provide ideal conditions for the formation of methyl mercury by bacteria. Methyl mercury concentrates in organisms as it moves up the food chain, a phenomenon known as bioaccumulation. Small concentrations of methyl mercury present in phytoplankton and zooplankton accumulate in the fat cells of the animals that consume them. Since animals at higher tiers of the food chain must always consume large numbers of organisms from lower ones, the concentrations of methyl mercury in top predators, which often include humans, increase to levels where they could become harmful. The bioaccumulation of methyl mercury in the tissues of fishes is the leading reason for government health advisories that recommend reduced consumption of fish from fresh and marine waters.
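Because each step up the food chain multiplies the concentration again, a tiny amount of methyl mercury in plankton can become a significant amount in a top predator. The short sketch below illustrates only this compounding; the starting concentration and the tenfold magnification factor per trophic level are assumptions chosen for the example, not measured values:

```python
# Illustrative biomagnification up a simplified food chain.
# Both numbers below are hypothetical; real magnification factors
# vary widely by ecosystem, species, and contaminant.

levels = ["phytoplankton", "zooplankton", "small fish", "large fish", "top predator"]
concentration = 0.001  # assumed methyl mercury concentration (ppm) in phytoplankton
factor = 10.0          # assumed magnification per trophic level

for level in levels:
    print(f"{level:>13}: {concentration:g} ppm")
    concentration *= factor
```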

In addition, aquatic acidification may be episodic, especially in colder climates. Sulfuric and nitric acid accumulating in a snowpack can leach out rapidly during the initial snowmelt and result in a pulse of acidic meltwater. Such pulses may be much more acidic than any individual snowfall event over the course of a winter, and these events can be deadly to acid-sensitive aquatic organisms throughout the food web.


Effects on forested and mountainous regions

Spruce trees damaged by acid rain in Karkonosze National Park, Poland.

Simon Fraser—Science Photo Library/Photo Researchers, Inc.

Areas affected by acid deposition contrasted with regions of high acid sensitivity.

Encyclopædia Britannica, Inc.

In the 1970s and ’80s, forested areas in central Europe, southern Scandinavia, and eastern North America showed alarming signs of forest dieback and tree mortality. A 1993 survey in 27 European countries revealed air pollution damage or mortality in 23 percent of the 100,000 trees surveyed. It is likely that the dieback was the result of many factors, including acid deposition (e.g., soil acidification and loss of buffering capacity, mobilization of toxic aluminum, direct effects of acid on foliage), exposure to ground-level ozone, possible excess fertilization from the deposition of nitrogen compounds (such as nitrates, ammonium, and ammonia compounds), and general stress caused by a combination of these factors. Once a tree is in a weakened condition, it is more likely to succumb to other environmental stressors such as drought, insect infestation, and infection by pathogens. The areas of forest dieback were often found to be associated with regions with low buffering capacity where damage to aquatic ecosystems due to acid deposition was also occurring.

Acid deposition has been implicated in the alteration of soil chemistry and the decline of several tree species through both direct and indirect means. Poorly buffered soils are particularly susceptible to acidification because they lack significant amounts of base cations (positively charged ions), which neutralize acidity. Calcium, magnesium, sodium, and potassium, which are the base cations that account for most of the acid-neutralizing capacity of soils, are derived from the weathering of rocks and from wet and dry deposition. Some of these base cations (such as calcium and magnesium) are also secondary plant nutrients that are necessary for proper plant growth. The supply of these base cations declines as they neutralize the acids present in wet and dry deposition and are leached from the soils. Thus, a landscape formerly rich in base cations can become acid-sensitive when soil-formation processes are slow and base cations are not replaced through weathering or deposition processes.

Soil acidification can also occur where deposition of ammonia (NH₃) and ammonium (NH₄⁺) is high. Ammonia and ammonium deposition leads to the production of H⁺ (which results in acidification) when these chemicals are converted to nitrate (NO₃⁻) by bacteria in a process called nitrification:

NH₃ + O₂ → NO₂⁻ + 3H⁺ + 2e⁻

NO₂⁻ + H₂O → NO₃⁻ + 2H⁺ + 2e⁻

The sources of NH₃ and NH₄⁺ are largely agricultural activities, especially livestock (chickens, hogs, and cattle) production. Around 80 percent of NH₃ emissions in the United States and Europe come from the agricultural sector. The evaporation or volatilization of animal wastes releases NH₃ into the atmosphere. This process often results in the deposition of ammonia near the emission source. However, NH₃ can be converted to particulate ammonium that may be transported and deposited as wet and dry deposition hundreds of kilometres away from the emission source.

Besides negatively altering soil chemistry, acid deposition has been shown to affect some tree species directly. Red spruce (Picea rubens) trees found at higher elevations in the eastern United States are harmed by acids leaching calcium from the cell membranes in their needles, making the needles more susceptible to damage from freezing during winter. The damage is often greatest in mountainous regions, because these areas often receive more acid deposition than lower areas and the winter environment is more extreme. Mountainous regions are subjected to highly acidic cloud and fog water along with other environmental stresses. In addition, red spruce can be damaged by the increased concentration of toxic aluminum in the soil. These processes can reduce nutrient uptake by the tree roots. Sugar maple (Acer saccharum) populations are also declining in the northeastern United States and parts of eastern Canada. High soil aluminum and low soil calcium concentrations resulting from acid deposition have been implicated in this decline. Other trees in this region that are negatively affected by acidic deposition include aspen (Populus), birch (Betula), and ash (Fraxinus).

Some scientists argue that acid deposition may influence the geology of some regions. A 2018 study examining the 2009 Jiweishan landslide in southwest China proposed that acid rain may have weakened a layer of shale that separated the rock layers containing an aquifer above from the rock layers containing a mine below, which caused a large mass of rock to slip off the mountainside and kill 74 people.


Effects on human-made structures

Statue eroded by acid rain.

julius fekete—iStock/Thinkstock

Repairing acid rain damage to Cologne Cathedral.

Encyclopædia Britannica, Inc.

Acid deposition also affects human-made structures. The most notable effects occur on marble and limestone, which are common building materials found in many historic structures, monuments, and gravestones. Sulfur dioxide, an acid rain precursor, can react directly with limestone in the presence of water to form gypsum, which eventually flakes off or is dissolved by water. In addition, acid rain can dissolve limestone and marble through direct contact.


History

U.S. emissions of SO₂, NOₓ, and NH₃, 1970–85 (five-year intervals) and 1990–2008 (one-year…

Encyclopædia Britannica, Inc.

Modern anthropogenic acid deposition began in Europe and eastern North America after World War II, as countries in those areas greatly increased their consumption of fossil fuels. International cooperation to address air pollution and acid deposition began with the 1972 Conference on the Human Environment in Stockholm, Sweden. In 1979 the Geneva Convention on Long-range Transboundary Air Pollution created the framework for reducing air pollution and acid deposition in Europe. The convention produced the first legally binding international agreement to reduce air pollution on a broad regional basis, and it has since been extended by several protocols.

In the United States, reductions in acid deposition stem from the Clean Air Act of 1970 and its amendments in 1990. Work toward developing a Memorandum of Intent between the U.S. and Canada to reduce air pollution and acid deposition began in the 1970s. However, it was not formalized until the Canada–United States Air Quality Agreement in 1991, which placed permanent caps on SO₂ emissions and guided the reduction of NOₓ emissions in both countries. The SO₂ emissions in the United States and Canada peaked in the late 1970s, but they have subsequently declined as a result of the adoption of government-mandated air pollution standards. The first phase of emission reductions ordered by the U.S. Clean Air Act Amendments of 1990 was begun in 1995, mainly by the regulation of coal-fired power plant emissions. This development marked the beginning of further significant SO₂ reductions in the United States and resulted in an 88 percent decline in SO₂ emissions between 1990 and 2017.

Map of precipitation pH in the continental United States in 1994.

Encyclopædia Britannica, Inc.


Map of precipitation pH in the continental United States in 2008.

Encyclopædia Britannica, Inc.

In contrast, NOₓ emissions in the United States peaked about 1980 and remained relatively stable until the end of the 1990s, when emissions began to decline more substantially because of controls on emissions from power plants and vehicles. NOₓ emissions have exceeded SO₂ emissions since about 1980, but they too have fallen with the implementation of the Clean Air Act. NO₂ emissions, for example, declined by 50 percent between 1990 and 2017. The combined reductions of SO₂ and NOₓ emissions during this period led to a significant drop in acid deposition, as well as sulfate (SO₄²⁻) and nitrate (NO₃⁻) deposition. Ammonia (NH₃) and ammonium deposition continue to increase in some parts of the United States, especially in areas with intensive agriculture and livestock production.

Graph of hydrogen ion concentration in water collected at Hubbard Brook Experimental Forest between…

Encyclopædia Britannica, Inc.

As a result of actions and agreements such as those described above, acid deposition in both Europe and eastern North America has been significantly reduced. The longest continuous record of precipitation chemistry in North America is from the Hubbard Brook Experimental Forest in New Hampshire, U.S., where H⁺ concentration in precipitation declined by about 86 percent from the mid-1960s through 2016. Similar trends were also reflected in data collected at measuring stations located across the eastern United States, which reported a decrease of approximately 40 percent in H⁺ concentration between 1994 and 2008. EPA monitoring sites in largely urban areas have shown that annual average SO₂ and nitrogen concentrations present in both wet and dry acid deposition decreased dramatically across the eastern United States between 1989 and 2015, and the greatest declines occurred in the area of dry sulfur deposition, which fell by roughly 82 percent (when regional figures for the Mid-Atlantic, Midwest, Northeast, and Southeast were considered).


Despite significant reductions in acid deposition, some European and North American ecosystems impaired by acid deposition have been slow to recover. Decades of acid deposition in these sensitive regions have depleted the acid-neutralizing capacity of soils. As a result, these soils are even more susceptible to continued acid deposition, even at reduced levels. Further reductions in NOₓ and SO₂ emissions will be necessary to protect such acid-sensitive ecosystems.

In contrast to Europe and North America, acid deposition is increasing in other parts of the world. For example, Asia has seen a steady increase in emissions of SO₂ and NOₓ as well as NH₃—a phenomenon most apparent in parts of China and India, where coal burning for industrial and electricity production has greatly expanded since about 2000. However, the introduction of stringent emission controls in China in 2007 produced a 75 percent decline in the country’s SO₂ emissions by 2019, whereas India’s SO₂ emissions continued to increase.

Gene E. Likens and Thomas J. Butler

Additional Reading

General treatments of acid rain are found in Hans Tammemagi, Air: Our Planet’s Ailing Atmosphere (2009); Mark Z. Jacobson, Atmospheric Pollution: History, Science, and Regulation (2002); C.T. Driscoll et al., “Acidic Deposition in the Northeastern U.S.: Sources and Inputs, Ecosystem Effects, and Management Strategies,” BioScience, 51(3):180–198 (2001); William N. Rom and Steven Markowitz (eds.), Environmental and Occupational Medicine, 4th ed. (2006); and B.J. Mason, Acid Rain: Its Causes and Its Effects on Inland Waters (1992). A readable account of the history of the acid rain phenomenon and the legacy of its effects in one part of the eastern United States is Jerry C. Jenkins et al., Acid Rain in the Adirondacks: An Environmental History (2007).

Primary references include S. Odén, “The Acidification of Air and Precipitation and Its Consequences on the Natural Environment,” Bulletin of Ecological Research Communications, Energy Committee Bulletin 1, Swedish National Science Research Council, trans. by Translation Consultants, Ltd. (1968); and G.E. Likens et al., “Acid Rain,” Environment, 14:33–40 (1972).

Additional technical treatments include John H. Seinfeld and Spyros N. Pandis, Atmospheric Chemistry and Physics: From Air Pollution to Climate Change, 3rd ed. (2016); J.C.I. Kuylenstierna et al., “Acidification in Developing Countries: Ecosystem Sensitivity and the Critical Load Approach on a Global Scale,” Ambio, 30:20–28 (2001); Timothy J. Sullivan, Aquatic Effects of Acidic Deposition (2000); J.L. Stoddard et al., “Regional Trends in Aquatic Recovery from Acidification in North America and Europe,” Nature, 401(6753):575–578 (1999); and Gene E. Likens, Biogeochemistry of a Forested Ecosystem, 3rd ed. (2013). G.E. Likens et al., “Long-Term Effects of Acid Rain: Response and Recovery of a Forest Ecosystem,” Science, 272(5259):244–246 (1996); Gene E. Likens, “The Role of Science in Decision Making: Does Evidence-Based Science Drive Environmental Policy?” Frontiers in Ecology and the Environment, 8(6):e1–e8 (2010); Carter N. Lane (ed.), Acid Rain: Overview and Abstracts (2003); and Peter Brimblecombe et al. (eds.), Acid Rain - Deposition to Recovery (2007), are collections of scientific summaries on acid rain research.

Gene E. Likens and Thomas J. Butler

Citation (MLA style):

"Acid rain." Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 19 Mar. 2019. packs-preview.eb.com. Accessed 10 Aug. 2021.


While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

deforestation

the clearing or thinning of forests by humans. Deforestation represents one of the largest issues in global land use. Estimates of deforestation traditionally are based on the area of forest cleared for human use, including removal of the trees for wood products and for croplands and grazing lands. In the practice of clear-cutting, all the trees are removed from the land, which completely destroys the forest. In some cases, however, even partial logging and accidental fires thin out the trees enough to change the forest structure dramatically.

A section of clear-cut forest in Romania.

© Ionescu Bogdan/Fotolia

History

Conversion of forests to land used for other purposes has a long history. Earth’s croplands, which cover about 49 million square km (18.9 million square miles), are mostly deforested land. Most present-day croplands receive enough rain and are warm enough to have once supported forests of one kind or another. Only about 1 million square km (390,000 square miles) of cropland are in areas that would have been cool boreal forests, as in Scandinavia and northern Canada. Much of the remainder was once moist subtropical or tropical forest or, in eastern North America, western Europe, and eastern China, temperate forest.

The coastal forest of Rio de Janeiro state, Brazil, badly fragmented as portions were cleared for…

Courtesy, Stuart L. Pimm


The extent to which forests have become Earth’s grazing lands is much more difficult to assess. Cattle or sheep pastures in North America or Europe are easy to identify, and they support large numbers of animals. At least 2 million square km (772,204 square miles) of such forests have been cleared for grazing lands. Less certain are the humid tropical forests and some drier tropical woodlands that have been cleared for grazing. These often support only very low numbers of domestic grazing animals, but they may still be considered grazing lands by national authorities. Almost half the world is made up of “drylands”—areas too dry to support large numbers of trees—and most are considered grazing lands. There, goats, sheep, and cattle may harm what few trees are able to grow.

Although most of the areas cleared for crops and grazing represent permanent and continuing deforestation, deforestation can be transient. About half of eastern North America lay deforested in the 1870s, almost all of it having been deforested at least once since European colonization in the early 1600s. Since the 1870s the region’s forest cover has increased, though most of the trees are relatively young. Few places exist in eastern North America that retain stands of uncut old-growth forests.

Modern deforestation

Tropical forests and deforestation in the early 21st century.

Encyclopædia Britannica, Inc.

The United Nations Food and Agriculture Organization (FAO) estimates that the annual rate of deforestation is about 1.3 million square km per decade, though the rate has slowed in some places in the early 21st century as a result of enhanced forest management practices and the establishment of nature preserves. The greatest deforestation is occurring in the tropics, where a wide variety of forests exists. They range from rainforests that are hot and wet year-round to forests that are merely humid and moist, to those in which trees in varying proportions lose their leaves in the dry season, and to dry open woodlands. Because boundaries between these categories are inevitably arbitrary, estimates differ regarding how much deforestation has occurred in the tropics.


Deforestation of the Amazon River basin has followed a pattern of cutting, burning, farming, and…

Encyclopædia Britannica, Inc.

A major contributor to tropical deforestation is the practice of slash-and-burn agriculture, or swidden agriculture (see also shifting agriculture). Small-scale farmers clear forests by burning them and then grow crops in the soils fertilized by the ashes. Typically, the land produces for only a few years and then must be abandoned and new patches of forest burned. Fire is also commonly used to clear forests in Southeast Asia, tropical Africa, and the Americas for permanent oil palm plantations.

Additional human activities that contribute to tropical deforestation include commercial logging and land clearing for cattle ranches and plantations of rubber trees, oil palm, and other economically valuable trees.

Colour-coded Landsat satellite images of Brazil's Carajás mining area, documenting extensive…

NASA Landsat Pathfinder/Tropical Rainforest Information Center

The Amazon Rainforest is the largest remaining block of humid tropical forest, and about two-thirds of it is in Brazil. (The rest lies along that country’s borders to the west and to the north.) Studies in the Amazon reveal that about 5,000 square km (1,931 square miles) are at least partially logged each year. In addition, each year fires burn an area about half as large as the areas that are cleared. Even when the forest is not entirely cleared, what remains is often a patchwork of forests and fields or, in the event of more intensive deforestation, “islands” of forest surrounded by a “sea” of deforested areas.

Deforested lands are being replanted in some areas. Some of this replanting is done to replenish logging areas for future exploitation, and some replanting is done as a form of ecological restoration, with the reforested areas made into protected land. Additionally, significant areas are planted as monotypic plantations for lumber or paper production. These are often plantations of eucalyptus or fast-growing pines—and almost always of species that are not native to the places where they are planted. The FAO estimates that there are approximately 1.3 million square km (500,000 square miles) of such plantations on Earth.

Many replanting efforts are led and funded by the United Nations and nongovernmental organizations. However, some national governments have also undertaken ambitious replanting projects. For example, starting in 2017, the government of New Zealand sought to plant more than 100 million trees per year within its borders, but perhaps the most ambitious replanting project took place in India on a single day in 2017, when citizens planted some 66 million trees.

Effects

Landsat images showing the amount of deforestation in Borneo from 2000 to 2018.

M.C. Hansen et al., University of Maryland, Google, USGS, NASA

Deforestation has important global consequences. Forests sequester carbon in the form of wood and other biomass as the trees grow, taking up carbon dioxide from the atmosphere (see carbon cycle). When forests are burned, their carbon is returned to the atmosphere as carbon dioxide, a greenhouse gas that has the potential to alter global climate (see greenhouse effect; global warming), and the trees are no longer present to sequester more carbon.

In addition, most of the planet’s valuable biodiversity is within forests, particularly tropical ones. Moist tropical forests such as the Amazon have the greatest concentrations of animal and plant species of any terrestrial ecosystem; perhaps two-thirds of Earth’s species live only in these forests. As deforestation proceeds, it has the potential to cause the extinction of increasing numbers of these species.

On a more local scale, the effects of forest clearing, selective logging, and fires interact. Selective logging increases the flammability of the forest because it converts a closed, wetter forest into a more open, drier one. This leaves the forest vulnerable to the accidental movement of fires from cleared adjacent agricultural lands and to the killing effects of natural droughts. As wildfires, logging, and droughts continue, the forest can become progressively more open until all the trees are lost. Additionally, the burning of tropical forests is generally a seasonal phenomenon and can severely impact air quality. Record-breaking levels of air pollution have occurred in Southeast Asia as the result of burning for oil palm plantations.

In the tropics, much of the deforested land exists in the form of steep mountain hillsides. The combination of steep slopes, high rainfall, and the lack of tree roots to bind the soil can lead to disastrous landslides that destroy fields, homes, and human lives. With the significant exception of the forests destroyed for the oil palm industry, many of the humid forests that have been cleared are soon abandoned as croplands or only used for low-density grazing because the soils are extremely poor in nutrients. (To clear forests, the vegetation that contains most of the nutrients is often burned, and the nutrients literally “go up in smoke” or are washed away in the next rain.)

Although forests may regrow after being cleared and then abandoned, this is not always the case, especially if the remaining forests are highly fragmented. Such habitat fragmentation isolates populations of plant and animal species from each other, making it difficult to reproduce without genetic bottlenecks, and the fragments may be too small to support large or territorial animals. Furthermore, deforested lands that are planted with commercially important trees lack biodiversity and do not serve as habitats for native plants and animals, many of which are endangered species.

Stuart L. Pimm

Citation (MLA style):

"Deforestation." Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 22 Apr. 2019. packs-preview.eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

endangered species

any species that is at risk of extinction because of a sudden rapid decrease in its population or a loss of its critical habitat. Previously, any species of plant or animal that was threatened with extinction could be called an endangered species. The need for separate definitions of “endangered” and “threatened” species resulted in the development of various categorization systems, each containing definitions and criteria by which a species can be classified according to its risk of extinction. As a rule, a range of criteria must be analyzed before a species can be placed in one category or another.

Giant panda (Ailuropoda melanoleuca) feeding in a bamboo forest, Sichuan (Szechwan) province, China.

Wolfshead—Ben Osborne/Ardea London

Often such categorization systems are linked directly to national legislation, such as the United States Endangered Species Act (ESA) or the Canadian Species at Risk Act (SARA). In addition, regional agreements, such as the European Union’s Habitats Directive (Council Directive 92/43/EEC), and international conservation agreements, such as the Convention on the Conservation of Migratory Species of Wild Animals (CMS) or the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), are connected to species-assessment systems. One of the most-recognized independent international systems of species assessment is the Red List of Threatened Species, created by the International Union for Conservation of Nature (IUCN).

Human beings and endangered species

The coastal forest of Rio de Janeiro state, Brazil, badly fragmented as portions were cleared for…

Courtesy, Stuart L. Pimm

Roughly 99 percent of threatened species are at risk because of human activities alone. By the early 21st century it could be said that human beings (Homo sapiens) are the greatest threat to biodiversity. The principal threats to species in the wild are:

- Habitat loss and habitat degradation
- The spread of introduced species (that is, non-native species that negatively affect the ecosystems they become part of)
- The growing influence of global warming and chemical pollution
- Unsustainable hunting
- Disease

Edith's checkerspot (Euphydryas editha), native to North America. Two subspecies are listed as…

© Kerry Hargrove/Shutterstock.com


Alula (Brighamia insignis), a rare and critically endangered plant native to Hawaii.

© Pavaphon Supanantananont/Shutterstock.com

Although some of these hazards occur naturally, most are caused by human beings and their economic and cultural activities. The most pervasive of these threats is habitat loss and degradation—that is, the large-scale conversion of land in previously undisturbed areas driven by the growing demand for commercial agriculture, logging, and infrastructure development. Because the rates of loss are highest in some of the most biologically diverse regions on Earth, a perpetual battle is waged to manage destructive activities there while limiting the impact that such restrictions may have on the well-being of local communities. The relative importance of each threat differs within and among taxa. So far, incidental mortality from ecological disturbance, temporary or limited human disturbance, and persecution have caused limited reductions in the total number of species; however, these phenomena can be serious for some susceptible groups. In addition, global warming has emerged as a widespread threat, and much research is being conducted to identify its potential effects on specific species, populations, and ecosystems.

Adult male mountain gorilla (Gorilla gorilla beringei) in Virunga National Park, Democratic Republic …

© erwinf—iStock/Getty Images


Conflicts between human activities and conservation are at the root of many of these phenomena. Such controversies are often highly politicized and widely publicized in the global press and through social media. For example, habitat loss and species loss have resulted from the unregulated exploitation of coltan (the rare ore for tantalum used in consumer electronics products such as mobile phones and computers) in Kahuzi-Biéga National Park, one of the Democratic Republic of the Congo’s premier forest parks. The park is also home to much of the population of the threatened Eastern Lowland gorilla (Gorilla beringei graueri). Mining has increased gorilla mortality by reducing the animal’s food resources and leading many people displaced by the mining to kill gorillas for their meat. In addition, the mountain gorilla (G. beringei beringei), a close relative of the Eastern Lowland gorilla, is also at risk of extinction. However, authorities cite poaching, disease, and crossfire between warring political groups in the vicinity of Virunga National Park as the primary sources of its population decline.

Albino axolotl (Ambystoma mexicanum).

Vsion

Another example of a widely publicized wildlife controversy involves the relatively recent declines in amphibian populations. Known to be important global indicators of environmental health, amphibians have experienced some of the most serious population declines to date of all groups that have been assessed globally through the IUCN Red List process (see below). Amphibians (a group that includes salamanders, frogs, toads, and caecilians [wormlike amphibians]), being particularly sensitive to environmental changes, are severely threatened by habitat destruction, pollution, the spread of a disease called amphibian chytridiomycosis, and climate change.

A wildlife specialist holding a brown tree snake (Boiga irregularis) that was captured on a military …

Master Sgt. Lance Cheung/U.S. Air Force

Beyond these notable examples, many of the world’s birds are also at risk. The populations of some bird species (such as some albatrosses, petrels, and penguins) are declining because of longline fishing, whereas those of others (such as certain cranes, rails, parrots, pheasants, and pigeons) have become victims of habitat destruction. On many Pacific islands, the accidental introduction of the brown tree snake (Boiga irregularis) has wreaked havoc on many bird populations.


Many fishes and other forms of aquatic and marine life are also threatened. Among them are long-lived species that have life history strategies requiring many years to reach sexual maturity. As a result, they are particularly susceptible to exploitation. The meat and fins of many sharks, rays, chimaeras, and whales fetch high prices in many parts of the world, which has resulted in the unsustainable harvest of several of those species.

Moreover, freshwater habitats worldwide are progressively threatened by pollution from industry, agriculture, and human settlements. Additional threats to freshwater ecosystems include introduced invasive species (such as the sea lamprey [Petromyzon marinus] in the Great Lakes), the canalization of rivers (such as in the streams that empty into the Everglades in Florida), and the overharvesting of freshwater species (as in the case of the extinct Yunnan box turtle [Cuora yunnanensis] in China). An estimated 45,000 described species rely on freshwater habitats, and humans, too, are seriously affected by the degradation of freshwater species and ecosystems.

Against this backdrop of threats related to urban expansion and food production, the unsustainable harvest of animal and plant products for traditional medicine and the pet trade is a growing concern in many parts of the world. These activities have implications for local ecosystems and habitats by exacerbating population declines through overharvesting. In addition, they have cross-border repercussions in terms of trade and illegal trafficking.

IUCN Red List of Threatened Species

After a species is evaluated by the IUCN, it is placed into one of eight categories based on its…

Encyclopædia Britannica, Inc.

Cardboard palm (Zamia furfuracea), an endangered cycad listed on the IUCN Red List of Threatened…

© Wagner Campelo/Shutterstock.com

One of the best-known objective assessment systems for declining species is the approach unveiled by the International Union for Conservation of Nature (IUCN) in 1994. It contains explicit criteria and categories to classify the conservation status of individual species on the basis of their probability of extinction. This classification is based on thorough, science-based species assessments and is published as the IUCN Red List of Threatened Species, more commonly known as the IUCN Red List. It is important to note that the IUCN cites very specific criteria for each of these categories, and the descriptions given below have been condensed to highlight two or three of each category’s most salient points. In addition, three of the categories (CR, EN, and VU) are contained within the broader notion of “threatened.” The list recognizes several categories of species status:

- Extinct (EX), species in which the last individual has died or where systematic and time-appropriate surveys have been unable to log even a single individual
- Extinct in the Wild (EW), species whose members survive only in captivity or as artificially supported populations far outside their historical geographic range
- Critically Endangered (CR), species that possess an extremely high risk of extinction as a result of rapid population declines of 80 to more than 90 percent over the previous 10 years (or three generations), a current population size of fewer than 50 individuals, or other factors (such as severely fragmented populations, long generation times, or isolated habitats)
- Endangered (EN), species that possess a very high risk of extinction as a result of rapid population declines of 50 to more than 70 percent over the previous 10 years (or three generations), a current population size of fewer than 250 individuals, or other factors
- Vulnerable (VU), species that possess a very high risk of extinction as a result of rapid population declines of 30 to more than 50 percent over the previous 10 years (or three generations), a current population size of fewer than 1,000 individuals, or other factors
- Near Threatened (NT), species that are close to becoming threatened or may meet the criteria for threatened status in the near future
- Least Concern (LC), a category containing species that are pervasive and abundant after careful assessment
- Data Deficient (DD), a condition applied to species in which the amount of available data related to its risk of extinction is lacking in some way. Consequently, a complete assessment cannot be performed. Thus, unlike the other categories in this list, this category does not describe the conservation status of a species.
- Not Evaluated (NE), a category used to include any of the nearly 1.9 million species described by science but not yet assessed by the IUCN.

The IUCN system uses five quantitative criteria to assess the extinction risk of a given species. In general, these criteria consider:

- The rate of population decline
- The geographic range
- Whether the species already possesses a small population size
- Whether the species is very small or lives in a restricted area
- Whether the results of a quantitative analysis indicate a high probability of extinction in the wild


The Svalbard Global Seed Vault safeguards the biodiversity of the seeds of the world's food plants…

© Dale Shelton/Dreamstime.com

All else being equal, a species experiencing a 90 percent decline over 10 years (or three generations), for example, would be classified as critically endangered. Likewise, another species undergoing a 50 percent decline over the same period would be classified as endangered, and one experiencing a 30 percent reduction over the same time frame would be considered vulnerable. It is important to understand, however, that a species cannot be classified by using one criterion alone; it is essential for the scientist doing the assessment to consider all five criteria to determine the status. Each year, thousands of scientists around the world assess or reassess species according to these criteria, and the IUCN Red List is subsequently updated with these new data once the assessments have been checked for accuracy to help provide a continual spotlight on the status of the world’s species.
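The decline thresholds in this worked example reduce to a simple rule, sketched below in Python. This toy classifier covers only the population-decline criterion; as the text stresses, a real Red List assessment weighs all five criteria together, so the sketch is an illustration rather than the IUCN procedure:

```python
# Simplified sketch of the Red List decline thresholds (decline criterion only).
# A genuine assessment must consider all five criteria, not just this one.

def category_from_decline(percent_decline: float) -> str:
    """Classify by percent decline over 10 years or three generations."""
    if percent_decline >= 90:
        return "Critically Endangered (CR)"
    if percent_decline >= 50:
        return "Endangered (EN)"
    if percent_decline >= 30:
        return "Vulnerable (VU)"
    return "not threatened under this criterion alone"

for decline in (90, 50, 30, 10):
    print(f"{decline}% decline -> {category_from_decline(decline)}")
```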

The IUCN Red List brings into focus the ongoing decline of Earth’s biodiversity and the influence humans have on life on the planet. It provides a globally accepted standard with which to measure the conservation status of species over time. By 2019 more than 96,500 species had been assessed by using the IUCN Red List categories and criteria. Today the list itself is an online database available to the public. Scientists can analyze the percentage of species in a given category and the way these percentages change over time. They can also analyze the threats and conservation measures that underpin the observed trends.


Other conservation agreements

The United States Endangered Species Act

Rusty patched bumblebee (Bombus affinis) on wild bergamot. The insect is listed as an endangered…

Jill Utrup/U.S. Fish and Wildlife Service

In the United States, the U.S. Fish and Wildlife Service (USFWS) of the Department of the Interior and the National Oceanic and Atmospheric Administration (NOAA) of the Department of Commerce are responsible for the conservation and management of fish and wildlife, including endangered species, and their habitats. The Endangered Species Act (ESA) of 1973 obligates federal and state governments to protect all life threatened with extinction, and this process is aided by the creation and continued maintenance of an endangered species list, which contains 1,662 domestic and 686 foreign species of endangered or threatened animals and plants as of 2019. According to the USFWS, the species definition extends to subspecies or any distinct population segment capable of interbreeding. Consequently, threatened subsets of species may also be singled out for protection. Furthermore, the ESA includes provisions for threatened species—that is, any species expected to become endangered within a substantial portion of its geographic home range. It also promotes the protection of critical habitats (that is, areas designated as essential to the survival of a given species).

Bald eagle (Haliaeetus leucocephalus).

Alexander Sprunt, IV


The ESA is credited with the protection and recovery of several prominent species within the borders of the United States, such as the bald eagle (Haliaeetus leucocephalus), the American alligator (Alligator mississippiensis), and the gray wolf (Canis lupus).

CITES

To prevent the overexploitation of species as they are traded across national boundaries, the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) was created by international agreement in 1973 and put into effect in 1975. The agreement sorts over 5,800 animal and 30,000 plant species into three categories (denoted by its three appendixes). Appendix I lists the species in danger of extinction. It also prohibits outright the commercial trade of these species; however, some can be traded in extraordinary situations for scientific or educational reasons. In contrast, Appendix II lists particular plants and animals that are less threatened but still require stringent controls. Appendix III lists species that are protected in at least one country that has petitioned other countries for help in controlling international trade in that species. As of 2017, CITES had been signed by 183 countries.

Species assessment and management

The role that physical appearance plays in the prioritization of saving endangered species at the…

© MinuteEarth

Together, the thousands of scientists and conservation organizations that contribute to the IUCN Red List and other systems of assessment provide the world’s largest knowledge base on the global status of species. The aim of these systems is to provide the general public, conservationists, nongovernmental organizations, the media, decision makers, and policy makers with comprehensive and scientifically rigorous information on the conservation status of the world’s species and the threats that drive the observed patterns of population decline. Scientists in conservation and protected area management agencies use data on species status in the development of conservation planning and prioritization, the identification of important sites and species for dedicated conservation action and recovery planning, and educational programs. Although the IUCN Red List and other similar species-assessment tools do not prescribe the action to be taken, the data within the list are often used to inform legislation and policy and to determine conservation priorities at regional, national, and international levels. In contrast, the listing criteria of other categorization systems (such as the United States Endangered Species Act, CITES, and CMS) are prescriptive; they often require that landowners and various governmental agencies take specific mandatory steps to protect species falling within particular categories of threat.

It is likely that many undescribed or unassessed species of plants, animals, and other organisms have become or are in the process of becoming extinct. To maintain healthy populations of both known and unknown species, assessments and reassessments are valuable tools. Such monitoring work must continue so that the most current knowledge can be applied to effective environmental monitoring and management efforts. For many threatened species, large well-protected conservation areas (biological reserves) often play major roles in curbing population declines. Such reserves are often cited by conservation biologists and other authorities as the best way to protect individual species as well as the ecosystems they inhabit. In addition, large biological reserves may harbour several undescribed and unassessed species. Despite the creation of several large reserves around the world, poaching and illegal trafficking plague many areas. Consequently, even species in those areas require continued monitoring and periodic assessment.

Holly Dublin

Citation (MLA style):

"Endangered species." Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 22 Nov. 2019. packs-preview.eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

global warming

the phenomenon of increasing average air temperatures near the surface of Earth over the past one to two centuries. Climate scientists have since the mid-20th century gathered detailed observations of various weather phenomena (such as temperatures, precipitation, and storms) and of related influences on climate (such as ocean currents and the atmosphere’s chemical composition). These data indicate that Earth’s climate has changed over almost every conceivable timescale since the beginning of geologic time and that the influence of human activities since at least the beginning of the Industrial Revolution has been deeply woven into the very fabric of climate change.


During the second half of the 20th century and early part of the 21st century, global average…

Encyclopædia Britannica, Inc.

An overview of the role greenhouse gases play in modifying Earth's climate.

Encyclopædia Britannica, Inc.

Graph of the predicted increase in Earth's average surface temperature according to a series of…

Encyclopædia Britannica, Inc.


Giving voice to a growing conviction of most of the scientific community, the Intergovernmental Panel on Climate Change (IPCC) was formed in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Program (UNEP). In 2013 the IPCC reported that the interval between 1880 and 2012 saw an increase in global average surface temperature of approximately 0.9 °C (1.5 °F). The increase is closer to 1.1 °C (2.0 °F) when measured relative to the preindustrial (i.e., 1750–1800) mean temperature.

A special report produced by the IPCC in 2018 honed this estimate further, noting that human beings and human activities have been responsible for between 0.8 and 1.2 °C (1.4 and 2.2 °F) of global warming since preindustrial times, and most of the warming observed over the second half of the 20th century could be attributed to human activities. It predicted that the global mean surface temperature would increase between 3 and 4 °C (5.4 and 7.2 °F) by 2100 relative to the 1986–2005 average should carbon emissions continue at their current rate. The predicted rise in temperature was based on a range of possible scenarios that accounted for future greenhouse gas emissions and mitigation (severity reduction) measures and on uncertainties in the model projections. Some of the main uncertainties include the precise role of feedback processes and the impacts of industrial pollutants known as aerosols, which may offset some warming.

Many climate scientists agree that significant societal, economic, and ecological damage would result if global average temperatures rose by more than 2 °C (3.6 °F) in such a short time. Such damage would include increased extinction of many plant and animal species, shifts in patterns of agriculture, and rising sea levels. By 2015 all but a few national governments had begun the process of instituting carbon reduction plans as part of the Paris Agreement, a treaty designed to help countries keep global warming to 1.5 °C (2.7 °F) above preindustrial levels in order to avoid the worst of the predicted effects. Authors of a special report published by the IPCC in 2018 noted that should carbon emissions continue at their present rate, the increase in average near-surface air temperatures would reach 1.5 °C sometime between 2030 and 2052. Past IPCC assessments reported that the global average sea level rose by some 19–21 cm (7.5–8.3 inches) between 1901 and 2010 and that sea levels rose faster in the second half of the 20th century than in the first half. These assessments also predicted, again depending on a wide range of scenarios, that the global average sea level would rise 26–77 cm (10.2–30.3 inches) relative to the 1986–2005 average by 2100 for global warming of 1.5 °C, an average of 10 cm (3.9 inches) less than what would be expected if warming rose to 2 °C (3.6 °F) above preindustrial levels.
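The 2030–2052 window for crossing 1.5 °C mentioned above follows from linearly extrapolating recent warming. A rough version of that arithmetic is sketched below; the starting point of about 1.0 °C of warming in 2017 and the rate of roughly 0.2 °C per decade are assumed inputs associated with the 2018 special report's headline figures, not numbers stated in this article:

```python
# Rough linear extrapolation of when warming crosses 1.5 °C above preindustrial.
# Both inputs are assumptions (IPCC SR15 headline figures, not from this article).

warming_2017 = 1.0     # °C above preindustrial levels, assumed central estimate
rate_per_decade = 0.2  # °C per decade, assumed current warming rate

years_remaining = (1.5 - warming_2017) / rate_per_decade * 10
print(f"1.5 °C reached around {2017 + years_remaining:.0f}")  # ~2042, inside 2030-2052
```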

The greenhouse effect on Earth. Some incoming sunlight is reflected by Earth's atmosphere and…

Encyclopædia Britannica, Inc.


The scenarios referred to above depend mainly on future concentrations of certain trace gases, called greenhouse gases, that have been injected into the lower atmosphere in increasing amounts through the burning of fossil fuels for industry, transportation, and residential uses. Modern global warming is the result of an increase in magnitude of the so-called greenhouse effect, a warming of Earth’s surface and lower atmosphere caused by the presence of water vapour, carbon dioxide, methane, nitrous oxide, and other greenhouse gases. In 2014 the IPCC reported that concentrations of carbon dioxide, methane, and nitrous oxide in the atmosphere surpassed those found in ice cores dating back 800,000 years.

Of all these gases, carbon dioxide is the most important, both for its role in the greenhouse effect and for its role in the human economy. It has been estimated that, at the beginning of the industrial age in the mid-18th century, carbon dioxide concentrations in the atmosphere were roughly 280 parts per million (ppm). By the middle of 2018 they had risen to 406 ppm, and, if fossil fuels continue to be burned at current rates, they are projected to reach 550 ppm by the mid-21st century—essentially, a doubling of carbon dioxide concentrations in 300 years.

A vigorous debate is in progress over the extent and seriousness of rising surface temperatures, the effects of past and future warming on human life, and the need for action to reduce future warming and deal with its consequences. This article provides an overview of the scientific background and public policy debate related to the subject of global warming. It considers the causes of rising near-surface air temperatures, the influencing factors, the process of climate research and forecasting, the possible ecological and social impacts of rising temperatures, and the public policy developments since the mid-20th century. For a detailed description of Earth’s climate, its processes, and the responses of living things to its changing nature, see climate. For additional background on how Earth’s climate has changed throughout geologic time, see climatic variation and change. For a full description of Earth’s gaseous envelope, within which climate change and global warming occur, see atmosphere.

Climatic variation since the last glaciation

A series of photographs of the Grinnell Glacier taken from the summit of Mount Gould in Glacier…

1938-T.J. Hileman/Glacier National Park Archives, 1981 - Carl Key/USGS, 1998 - Dan Fagre/USGS, 2006 - Karen Holzer/USGS

Global warming is related to the more general phenomenon of climate change, which refers to changes in the totality of attributes that define climate. In addition to changes in air temperature, climate change involves changes to precipitation patterns, winds, ocean currents, and other measures of Earth’s climate. Normally, climate change can be viewed as the combination of various natural forces occurring over diverse timescales. Since the advent of human civilization, climate change has involved an “anthropogenic,” or exclusively human-caused, element, and this anthropogenic element has become more important in the industrial period of the past two centuries. The term global warming is used specifically to refer to any warming of near-surface air during the past two centuries that can be traced to anthropogenic causes.

To define the concepts of global warming and climate change properly, it is first necessary to recognize that the climate of Earth has varied across many timescales, ranging from an individual human life span to billions of years. This variable climate history is typically classified in terms of “regimes” or “epochs.” For instance, the Pleistocene glacial epoch (about 2,600,000 to 11,700 years ago) was marked by substantial variations in the global extent of glaciers and ice sheets. These variations took place on timescales of tens to hundreds of millennia and were driven by changes in the distribution of solar radiation across Earth’s surface. The distribution of solar radiation is known as the insolation pattern, and it is strongly affected by the geometry of Earth’s orbit around the Sun and by the orientation, or tilt, of Earth’s axis relative to the direct rays of the Sun.

Worldwide, the most recent glacial period, or ice age, culminated about 21,000 years ago in what is often called the Last Glacial Maximum. During this time, continental ice sheets extended well into the middle latitude regions of Europe and North America, reaching as far south as present-day London and New York City. Global annual mean temperature appears to have been about 4–5 °C (7–9 °F) colder than in the mid-20th century. It is important to remember that these figures represent a global average. In fact, during the height of this last ice age, Earth’s climate was characterized by greater cooling at higher latitudes (that is, toward the poles) and relatively little cooling over large parts of the tropical oceans (near the Equator). This glacial interval terminated abruptly about 11,700 years ago and was followed by the subsequent relatively ice-free period known as the Holocene Epoch. The modern period of Earth’s history is conventionally defined as residing within the Holocene. However, some scientists have argued that the Holocene Epoch terminated in the relatively recent past and that Earth currently resides in a climatic interval that could justly be called the Anthropocene Epoch—that is, a period during which humans have exerted a dominant influence over climate.

Though less dramatic than the climate changes that occurred during the Pleistocene Epoch, significant variations in global climate have nonetheless taken place over the course of the Holocene. During the early Holocene, roughly 9,000 years ago, atmospheric circulation and precipitation patterns appear to have been substantially different from those of today. For example, there is evidence for relatively wet conditions in what is now the Sahara Desert. The change from one climatic regime to another was caused by only modest changes in the pattern of insolation within the Holocene interval as well as the interaction of these patterns with large-scale climate phenomena such as monsoons and El Niño/Southern Oscillation (ENSO).

During the middle Holocene, some 5,000–7,000 years ago, conditions appear to have been relatively warm—indeed, perhaps warmer than today in some parts of the world and during certain seasons. For this reason, this interval is sometimes referred to as the Mid-Holocene Climatic Optimum. The relative warmth of average near-surface air temperatures at this time, however, is somewhat unclear. Changes in the pattern of insolation favoured warmer summers at higher latitudes in the Northern Hemisphere, but these changes also produced cooler winters in the Northern Hemisphere and relatively cool conditions year-round in the tropics. Any overall hemispheric or global mean temperature changes thus reflected a balance between competing seasonal and regional changes. In fact, recent theoretical climate model studies suggest that global mean temperatures during the middle Holocene were probably 0.2–0.3 °C (0.4–0.5 °F) colder than average late 20th-century conditions.

Over subsequent millennia, conditions appear to have cooled relative to middle Holocene levels. This period has sometimes been referred to as the “Neoglacial.” In the middle latitudes this cooling trend was associated with intermittent periods of advancing and retreating mountain glaciers reminiscent of (though far more modest than) the more substantial advance and retreat of the major continental ice sheets of the Pleistocene climate epoch.


Causes of global warming

The greenhouse effect

The average surface temperature of Earth is maintained by a balance of various forms of solar and terrestrial radiation. Solar radiation is often called “shortwave” radiation because the frequencies of the radiation are relatively high and the wavelengths relatively short—close to the visible portion of the electromagnetic spectrum. Terrestrial radiation, on the other hand, is often called “longwave” radiation because the frequencies are relatively low and the wavelengths relatively long—somewhere in the infrared part of the spectrum. Downward-moving solar energy is typically measured in watts per square metre. The energy of the total incoming solar radiation at the top of Earth’s atmosphere (the so-called “solar constant”) amounts roughly to 1,366 watts per square metre. Because Earth intercepts sunlight over a disk-shaped cross section but spreads that energy over its entire spherical surface (an area four times as large), the average insolation at the top of the atmosphere is 342 watts per square metre, one-quarter of the solar constant.

The amount of solar radiation absorbed by Earth’s surface is only a small fraction of the total solar radiation entering the atmosphere. For every 100 units of incoming solar radiation, roughly 30 units are reflected back to space by either clouds, the atmosphere, or reflective regions of Earth’s surface. This reflective capacity is referred to as Earth’s planetary albedo, and it need not remain fixed over time, since the spatial extent and distribution of reflective formations, such as clouds and ice cover, can change. The 70 units of solar radiation that are not reflected may be absorbed by the atmosphere, clouds, or the surface. In the absence of further complications, in order to maintain thermodynamic equilibrium, Earth’s surface and atmosphere must radiate these same 70 units back to space. Earth’s surface temperature (and that of the lower layer of the atmosphere essentially in contact with the surface) is tied to the magnitude of this emission of outgoing radiation according to the Stefan-Boltzmann law.

Earth’s energy budget is further complicated by the greenhouse effect. Trace gases with certain chemical properties—the so-called greenhouse gases, mainly carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O)—absorb some of the infrared radiation produced by Earth’s surface. Because of this absorption, some fraction of the original 70 units does not directly escape to space. Because greenhouse gases emit the same amount of radiation they absorb and because this radiation is emitted equally in all directions (that is, as much downward as upward), the net effect of absorption by greenhouse gases is to increase the total amount of radiation emitted downward toward Earth’s surface and lower atmosphere. To maintain equilibrium, Earth’s surface and lower atmosphere must emit more radiation than the original 70 units. Consequently, the surface temperature must be higher. This process is not quite the same as that which governs a true greenhouse, but the end effect is similar. The presence of greenhouse gases in the atmosphere leads to a warming of the surface and lower part of the atmosphere (and a cooling higher up in the atmosphere) relative to what would be expected in the absence of greenhouse gases.

It is essential to distinguish the “natural,” or background, greenhouse effect from the “enhanced” greenhouse effect associated with human activity. The natural greenhouse effect is associated with surface warming properties of natural constituents of Earth’s atmosphere, especially water vapour, carbon dioxide, and methane. The existence of this effect is accepted by all scientists. Indeed, in its absence, Earth’s average temperature would be approximately 33 °C (59 °F) colder than today, and Earth would be a frozen and likely uninhabitable planet. What has been subject to controversy is the so-called enhanced greenhouse effect, which is associated with increased concentrations of greenhouse gases caused by human activity. In particular, the burning of fossil fuels raises the concentrations of the major greenhouse gases in the atmosphere, and these higher concentrations have the potential to warm the atmosphere by several degrees.
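The numbers above define a simple “zero-dimensional” energy balance that can be checked directly. The following sketch (in Python, an illustration rather than anything from the article) combines the 1,366-watt solar constant, the 30 percent planetary albedo, and the Stefan-Boltzmann law to recover Earth’s effective radiating temperature; the gap between that value and the observed surface average is the roughly 33 °C natural greenhouse effect described above.

# A minimal sketch of Earth's zero-dimensional energy balance, using the
# figures quoted above: a ~1,366 W/m^2 solar constant and a planetary
# albedo of ~0.30 (the "30 of every 100 units" reflected to space).

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1366.0             # total solar irradiance at the top of the atmosphere, W/m^2
ALBEDO = 0.30           # fraction of incoming sunlight reflected to space

# Sunlight is intercepted over a disk but spread over the whole sphere,
# so the global average insolation is S0 / 4.
mean_insolation = S0 / 4.0                   # ~342 W/m^2

# In equilibrium, absorbed shortwave = emitted longwave = SIGMA * T^4.
absorbed = mean_insolation * (1.0 - ALBEDO)  # ~239 W/m^2 (the "70 units")
t_effective = (absorbed / SIGMA) ** 0.25     # ~255 K, i.e. about -18 degC

print(f"Mean insolation:       {mean_insolation:.0f} W/m^2")
print(f"Absorbed radiation:    {absorbed:.0f} W/m^2")
print(f"Effective temperature: {t_effective:.0f} K ({t_effective - 273.15:.0f} degC)")
# The observed mean surface temperature is ~288 K (15 degC); the ~33 degC
# difference is the natural greenhouse effect.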


Radiative forcing

Since 1750 the concentration of carbon dioxide and other greenhouse gases has increased in Earth's…

Encyclopædia Britannica, Inc.

In light of the discussion above of the greenhouse effect, it is apparent that the temperature of Earth’s surface and lower atmosphere may be modified in three ways: (1) through a net increase in the solar radiation entering at the top of Earth’s atmosphere, (2) through a change in the fraction of the radiation reaching the surface, and (3) through a change in the concentration of greenhouse gases in the atmosphere. In each case the changes can be thought of in terms of “radiative forcing.” As defined by the IPCC, radiative forcing is a measure of the influence a given climatic factor has on the amount of downward-directed radiant energy impinging upon Earth’s surface. Climatic factors are divided between those caused primarily by human activity (such as greenhouse gas emissions and aerosol emissions) and those caused by natural forces (such as solar irradiance); then, for each factor, so-called forcing values are calculated for the time period between 1750 and the present day. “Positive forcing” is exerted by climatic factors that contribute to the warming of Earth’s surface, whereas “negative forcing” is exerted by factors that cool Earth’s surface.

On average, about 342 watts of solar radiation strike each square metre at the top of Earth’s atmosphere, and this quantity can in turn be related to a rise or fall in Earth’s surface temperature. Temperatures at the surface may also rise or fall through a change in the distribution of terrestrial radiation (that is, radiation emitted by Earth) within the atmosphere. In some cases, radiative forcing has a natural origin, such as during explosive eruptions from volcanoes where vented gases and ash block some portion of solar radiation from the surface. In other cases, radiative forcing has an anthropogenic, or exclusively human, origin. For example, anthropogenic increases in carbon dioxide, methane, and nitrous oxide are estimated to account for 2.3 watts per square metre of positive radiative forcing. When all values of positive and negative radiative forcing are taken together and all interactions between climatic factors are accounted for, the total net increase in surface radiation due to human activities since the beginning of the Industrial Revolution is 1.6 watts per square metre.


The influences of human activity on climate

Map of annual carbon dioxide emissions by country in 2014.

Encyclopædia Britannica, Inc.

Petroleum refinery at Ras Tanura, Saudi Arabia.

Herbert Lanks/Shostal Associates

Natural gas facility near Kursk, Russia.

© Pisotckii/Dreamstime.com

Human activity has influenced global surface temperatures by changing the radiative balance governing the Earth on various timescales and at varying spatial scales. The most profound and well-known anthropogenic influence is the elevation of concentrations of greenhouse gases in the atmosphere. Humans also influence climate by changing the concentrations of aerosols and ozone and by modifying the land cover of Earth’s surface.


Greenhouse gases

Factories that burn fossil fuels help to cause global warming.

© jzehnder/Fotolia

As discussed above, greenhouse gases warm Earth’s surface by increasing the net downward longwave radiation reaching the surface. The relationship between atmospheric concentration of greenhouse gases and the associated positive radiative forcing of the surface is different for each gas. A complicated relationship exists between the chemical properties of each greenhouse gas and the relative amount of longwave radiation that each can absorb. What follows is a discussion of the radiative behaviour of each major greenhouse gas.

Water vapour

The present-day surface hydrologic cycle, in which water is transferred from the oceans through the…

Encyclopædia Britannica, Inc.

Water vapour is the most potent of the greenhouse gases in Earth’s atmosphere, but its behaviour is fundamentally different from that of the other greenhouse gases. The primary role of water vapour is not as a direct agent of radiative forcing but rather as a climate feedback—that is, as a response within the climate system that influences the system’s continued activity (see below Water vapour feedback). This distinction arises from the fact that the amount of water vapour in the atmosphere cannot, in general, be directly modified by human behaviour but is instead set by air temperatures. The warmer the surface, the greater the evaporation rate of water from the surface. As a result, increased evaporation leads to a greater concentration of water vapour in the lower atmosphere capable of absorbing longwave radiation and emitting it downward.


Carbon dioxide

Carbon is transported in various forms through the atmosphere, the hydrosphere, and geologic…

Encyclopædia Britannica, Inc.

Of the greenhouse gases, carbon dioxide (CO2) is the most significant. Natural sources of atmospheric CO2 include outgassing from volcanoes, the combustion and natural decay of organic matter, and respiration by aerobic (oxygen-using) organisms. These sources are balanced, on average, by a set of physical, chemical, or biological processes, called “sinks,” that tend to remove CO2 from the atmosphere. Significant natural sinks include terrestrial vegetation, which takes up CO2 during the process of photosynthesis.

Living organisms influence the cycling of carbon and oxygen through the environment.

Created and produced by QA International. © QA International, 2010. All rights reserved. www.qa-international.com

A number of oceanic processes also act as carbon sinks. One such process, called the “solubility pump,” involves the descent of surface seawater containing dissolved CO2. Another process, the “biological pump,” involves the uptake of dissolved CO2 by marine vegetation and phytoplankton (small free-floating photosynthetic organisms) living in the upper ocean or by other marine organisms that use CO2 to build skeletons and other structures made of calcium carbonate (CaCO3). As these organisms expire and fall to the ocean floor, the carbon they contain is transported downward and eventually buried at depth. A long-term balance between these natural sources and sinks leads to the background, or natural, level of CO2 in the atmosphere.


Smoldering remains of a plot of deforested land in the Amazon Rainforest of Brazil. Annually, it is…

© Brasil2/iStock.com

In contrast, human activities increase atmospheric CO2 levels primarily through the burning of fossil fuels—principally oil and coal and secondarily natural gas, for use in transportation, heating, and the generation of electrical power—and through the production of cement. Other anthropogenic sources include the burning of forests and the clearing of land. Anthropogenic emissions currently account for the annual release of about 7 gigatons (7 billion tons) of carbon into the atmosphere. Anthropogenic emissions are equal to approximately 3 percent of the total emissions of CO2 by natural sources, and this amplified carbon load from human activities far exceeds the offsetting capacity of natural sinks (by perhaps as much as 2–3 gigatons per year).

The Keeling Curve, named after American climate scientist Charles David Keeling, tracks changes in…

Encyclopædia Britannica, Inc.

CO2 consequently accumulated in the atmosphere at an average rate of 1.4 ppm per year between 1959 and 2006 and roughly 2.0 ppm per year between 2006 and 2018. Overall, this rate of accumulation has been linear (that is, uniform over time). However, certain current sinks, such as the oceans, could become sources in the future (see Carbon cycle feedbacks). This may lead to a situation in which the concentration of atmospheric CO2 builds at an exponential rate (that is, its rate of increase is also increasing).
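As a rough illustration of what these growth rates imply, the sketch below extrapolates the recent 2.0-ppm-per-year average forward from the 2018 level of about 410 ppm quoted in the next paragraph. The constant-rate assumption is hypothetical; it simply shows that reaching 550 ppm by mid-century would require the annual growth rate itself to keep rising.

# Illustrative extrapolation only: how long until 550 ppm if the recent
# accumulation rate stayed fixed? Both numbers come from this article.

co2_2018 = 410.0  # ppm, the article's figure for 2018
rate = 2.0        # ppm per year, average over 2006-2018

years_to_550 = (550.0 - co2_2018) / rate
print(f"At a constant {rate} ppm/yr, 550 ppm arrives around {2018 + years_to_550:.0f}")
# -> roughly 2088 at a constant rate; reaching 550 ppm by mid-century
#    would require the growth rate itself to rise.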

The natural background level of carbon dioxide varies on timescales of millions of years because of slow changes in outgassing through volcanic activity. For example, roughly 100 million years ago, during the Cretaceous Period (145 million to 66 million years ago), CO2 concentrations appear to have been several times higher than they are today (perhaps close to 2,000 ppm). Over the past 700,000 years, CO2 concentrations have varied over a far smaller range (between roughly 180 and 300 ppm) in association with the same Earth orbital effects linked to the coming and going of the Pleistocene ice ages (see below Natural influences on climate). By the early 21st century, CO2 levels had reached 384 ppm, which is approximately 37 percent above the natural background level of roughly 280 ppm that existed at the beginning of the Industrial Revolution. Atmospheric CO2 levels continued to increase, and by 2018 they had reached 410 ppm. Such levels are believed to be the highest in at least 800,000 years according to ice core measurements and may be the highest in at least 5 million years according to other lines of evidence.

Radiative forcing caused by carbon dioxide varies in an approximately logarithmic fashion with the concentration of that gas in the atmosphere. The logarithmic relationship occurs as the result of a saturation effect wherein it becomes increasingly difficult, as CO2 concentrations increase, for additional CO2 molecules to further influence the “infrared window” (a certain narrow band of wavelengths in the infrared region that is not absorbed by atmospheric gases). The logarithmic relationship predicts that the surface warming potential will rise by roughly the same amount for each doubling of CO2 concentration. At current rates of fossil fuel use, a doubling of CO2 concentrations over preindustrial levels is expected to take place by the middle of the 21st century (when CO2 concentrations are projected to reach 560 ppm). A doubling of CO2 concentrations would represent an increase of roughly 4 watts per square metre of radiative forcing. Given typical estimates of “climate sensitivity” in the absence of any offsetting factors, this energy increase would lead to a warming of 2 to 5 °C (3.6 to 9 °F) over preindustrial times (see Feedback mechanisms and climate sensitivity). The total radiative forcing by anthropogenic CO2 emissions since the beginning of the industrial age is approximately 1.66 watts per square metre.
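This logarithmic behaviour can be made concrete with the approximation ΔF ≈ 5.35 ln(C/C0) watts per square metre, a coefficient widely used in the climate literature; the formula is an assumption here, since the article itself quotes only the roughly 4 watts per square metre per doubling.

import math

# A sketch of the logarithmic CO2-forcing relationship described above.
# dF = 5.35 * ln(C / C0) W/m^2 is a commonly used approximation, not a
# formula taken from this article.

C0 = 280.0  # preindustrial CO2 concentration, ppm

def co2_forcing(c_ppm: float) -> float:
    """Radiative forcing (W/m^2) relative to the preindustrial level."""
    return 5.35 * math.log(c_ppm / C0)

print(f"410 ppm (2018):      {co2_forcing(410.0):+.2f} W/m^2")  # ~ +2.0
print(f"560 ppm (doubling):  {co2_forcing(560.0):+.2f} W/m^2")  # ~ +3.7, i.e. "roughly 4"
print(f"1120 ppm (2x again): {co2_forcing(1120.0):+.2f} W/m^2") # each doubling adds the same amount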

Methane

Methane (CH4) is the second most important greenhouse gas. CH4 is more potent than CO2 because the radiative forcing produced per molecule is greater. In addition, the infrared window is less saturated in the range of wavelengths of radiation absorbed by CH4, so more molecules may fill in the region. However, CH4 exists in far lower concentrations than CO2 in the atmosphere, and its concentrations by volume in the atmosphere are generally measured in parts per billion (ppb) rather than ppm. CH4 also has a considerably shorter residence time in the atmosphere than CO2 (the residence time for CH4 is roughly 10 years, compared with hundreds of years for CO2).


Natural sources of methane include tropical and northern wetlands, methane-producing (methanogenic) microbes that digest organic material consumed by termites, volcanoes, seepage vents of the seafloor in regions rich with organic sediment, and methane hydrates trapped along the continental shelves of the oceans and in polar permafrost.


The primary natural sink for methane is the atmosphere itself, as methane reacts readily with the hydroxyl radical (∙OH) within the troposphere to form CO2 and water vapour (H2O). When CH4 reaches the stratosphere, it is destroyed. Another natural sink is soil, where methane is oxidized by bacteria.

As with CO2, human activity is increasing the CH4 concentration faster than it can be offset by natural sinks. Anthropogenic sources currently account for approximately 70 percent of total annual emissions, leading to substantial increases in concentration over time. The major anthropogenic sources of atmospheric CH4 are rice cultivation, livestock farming, the burning of coal and natural gas, the combustion of biomass, and the decomposition of organic matter in landfills. Future trends are particularly difficult to anticipate. This is in part due to an incomplete understanding of the climate feedbacks associated with CH4 emissions. In addition, it is difficult to predict how, as human populations grow, possible changes in livestock raising, rice cultivation, and energy utilization will influence CH4 emissions.

It is believed that a sudden increase in the concentration of methane in the atmosphere was responsible for a warming event that raised average global temperatures by 4–8 °C (7.2–14.4 °F) over a few thousand years during the so-called Paleocene-Eocene Thermal Maximum, or PETM. This episode took place roughly 55 million years ago, and the rise in CH4 appears to have been related to a massive volcanic eruption that interacted with methane-containing flood deposits. As a result, large amounts of gaseous CH4 were injected into the atmosphere. It is difficult to know precisely how high these concentrations were or how long they persisted. At very high concentrations, residence times of CH4 in the atmosphere can become much greater than the nominal 10-year residence time that applies today. Nevertheless, it is likely that these concentrations reached several ppm during the PETM.

Methane concentrations have also varied over a smaller range (between roughly 350 and 800 ppb) in association with the Pleistocene ice age cycles (see Natural influences on climate). Preindustrial levels of CH4 in the atmosphere were approximately 700 ppb, whereas levels exceeded 1,867 ppb in late 2018. (These concentrations are well above the natural levels observed for at least the past 650,000 years.) The net radiative forcing by anthropogenic CH4 emissions is approximately 0.5 watt per square metre—or roughly one-third the radiative forcing of CO2.

Surface-level ozone and other compounds

The next most significant greenhouse gas is surface, or low-level, ozone (O3). Surface O3 is a result of air pollution; it must be distinguished from naturally occurring stratospheric O3, which has a very different role in the planetary radiation balance. The primary natural source of surface O3 is the subsidence of stratospheric O3 from the upper atmosphere (see below Stratospheric ozone depletion). In contrast, the primary anthropogenic source of surface O3 is photochemical reactions involving the atmospheric pollutant carbon monoxide (CO). The best estimates of the natural concentration of surface O3 are 10 ppb, and the net radiative forcing due to anthropogenic emissions of surface O3 is approximately 0.35 watt per square metre. Ozone concentrations can rise above unhealthy levels (that is, conditions where concentrations meet or exceed 70 ppb for eight hours or longer) in cities prone to photochemical smog.


Nitrous oxides and fluorinated gases

Additional trace gases produced by industrial activity that have greenhouse properties include nitrous oxide (N2O) and fluorinated gases (halocarbons), the latter including sulfur hexafluoride, hydrofluorocarbons (HFCs), and perfluorocarbons (PFCs). Nitrous oxide is responsible for 0.16 watt per square metre of radiative forcing, while fluorinated gases are collectively responsible for 0.34 watt per square metre. Nitrous oxide has small background concentrations due to natural biological reactions in soil and water, whereas the fluorinated gases owe their existence almost entirely to industrial sources.

Aerosols

The production of aerosols represents an important anthropogenic radiative forcing of climate. Collectively, aerosols block—that is, reflect and absorb—a portion of incoming solar radiation, and this creates a negative radiative forcing. Aerosols are second only to greenhouse gases in relative importance in their impact on near-surface air temperatures. Unlike the decade-long residence times of the “well-mixed” greenhouse gases, such as CO2 and CH4, aerosols are readily flushed out of the atmosphere within days, either by rain or snow (wet deposition) or by settling out of the air (dry deposition). They must therefore be continually generated in order to produce a steady effect on radiative forcing. Aerosols have the ability to influence climate directly by absorbing or reflecting incoming solar radiation, but they can also produce indirect effects on climate by modifying cloud formation or cloud properties. Most aerosols serve as condensation nuclei (surfaces upon which water vapour can condense to form clouds); however, darker-coloured aerosols may hinder cloud formation by absorbing sunlight and heating up the surrounding air. Aerosols can be transported thousands of kilometres from their sources of origin by winds and upper-level circulation in the atmosphere.

Perhaps the most important type of anthropogenic aerosol in radiative forcing is sulfate aerosol. It is produced from sulfur dioxide (SO2) emissions associated with the burning of coal and oil. Since the late 1980s, global emissions of SO2 have decreased from about 151.5 million tonnes (167.0 million tons) to less than 100 million tonnes (110.2 million tons) of sulfur per year.

Nitrate aerosol is not as important as sulfate aerosol, but it has the potential to become a significant source of negative forcing. One major source of nitrate aerosol is smog (the combination of ozone with oxides of nitrogen in the lower atmosphere) released from the incomplete burning of fuel in internal-combustion engines. Another source is ammonia (NH3), which is often used in fertilizers or released by the burning of plants and other organic materials. If greater amounts of atmospheric nitrogen are converted to ammonia and agricultural ammonia emissions continue to increase as projected, the influence of nitrate aerosols on radiative forcing is expected to grow.

Both sulfate and nitrate aerosols act primarily by reflecting incoming solar radiation, thereby reducing the amount of sunlight reaching the surface. Most aerosols, unlike greenhouse gases, impart a cooling rather than warming influence on Earth’s surface. One prominent exception is carbonaceous aerosols such as carbon black or soot, which are produced by the burning of fossil fuels and biomass. Carbon black tends to absorb rather than reflect incident solar radiation, and so it has a warming impact on the lower atmosphere, where it resides. Because of its absorptive properties, carbon black is also capable of having an additional indirect effect on climate. Through its deposition in snowfall, it can decrease the albedo of snow cover. This reduction in the amount of solar radiation reflected back to space by snow surfaces creates a minor positive radiative forcing.

Natural forms of aerosol include windblown mineral dust generated in arid and semiarid regions and sea salt produced by the action of waves breaking in the ocean. Changes to wind patterns as a result of climate modification could alter the emissions of these aerosols. The influence of climate change on regional patterns of aridity could shift both the sources and the destinations of dust clouds. In addition, since the concentration of sea salt aerosol, or sea aerosol, increases with the strength of the winds near the ocean surface, changes in wind speed due to global warming and climate change could influence the concentration of sea salt aerosol. For example, some studies suggest that climate change might lead to stronger winds over parts of the North Atlantic Ocean. Areas with stronger winds may experience an increase in the concentration of sea salt aerosol.

Other natural sources of aerosols include volcanic eruptions, which produce sulfate aerosol, and biogenic sources (e.g., phytoplankton), which produce dimethyl sulfide (DMS). Other important biogenic aerosols, such as terpenes, are produced naturally by certain kinds of trees or other plants. For example, the dense forests of the Blue Ridge Mountains of Virginia in the United States emit terpenes during the summer months, which in turn interact with the high humidity and warm temperatures to produce a natural photochemical smog. Anthropogenic pollutants such as nitrate and ozone, both of which serve as precursor molecules for the generation of biogenic aerosol, appear to have increased the rate of production of these aerosols severalfold. This process appears to be responsible for some of the increased aerosol pollution in regions undergoing rapid urbanization.

Human activity has greatly increased the amount of aerosol in the atmosphere compared with the background levels of preindustrial times. In contrast to the global effects of greenhouse gases, the impact of anthropogenic aerosols is confined primarily to the Northern Hemisphere, where most of the world’s industrial activity occurs. The pattern of increases in anthropogenic aerosol over time is also somewhat different from that of greenhouse gases. During the middle of the 20th century, there was a substantial increase in aerosol emissions. This appears to have been at least partially responsible for a cessation of surface warming that took place in the Northern Hemisphere from the 1940s through the 1970s. Since that time, aerosol emissions have leveled off due to antipollution measures undertaken in the industrialized countries since the 1960s. Aerosol emissions may rise in the future, however, as a result of the rapid emergence of coal-fired electric power generation in China and India.

The total radiative forcing of all anthropogenic aerosols is approximately –1.2 watts per square metre. Of this total, –0.5 watt per square metre comes from direct effects (such as the reflection of solar energy back into space), and –0.7 watt per square metre comes from indirect effects (such as the influence of aerosols on cloud formation). This negative radiative forcing represents an offset of roughly 40 percent from the positive radiative forcing caused by human activity. However, the relative uncertainty in aerosol radiative forcing (approximately 90 percent) is much greater than that of greenhouse gases. In addition, future emissions of aerosols from human activities, and the influence of these emissions on future climate change, are not known with any certainty. Nevertheless, it can be said that, if concentrations of anthropogenic aerosols continue to decrease as they have since the 1970s, the offset they provide against greenhouse warming will shrink, opening future climate to further warming.
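The individual forcing values quoted throughout this section can be tallied against the net anthropogenic figure of about 1.6 watts per square metre given earlier. The bookkeeping sketch below uses only the terms quoted in this article, so the agreement with the full IPCC accounting is approximate.

# Summing the forcing values (W/m^2 since 1750) quoted in this article.

forcings = {
    "CO2":                        +1.66,
    "CH4":                        +0.50,
    "surface ozone":              +0.35,
    "N2O":                        +0.16,
    "fluorinated gases":          +0.34,
    "aerosols (direct+indirect)": -1.20,
    "land-use albedo change":     -0.20,
}

net = sum(forcings.values())
for name, value in forcings.items():
    print(f"{name:28s} {value:+.2f}")
print(f"{'net anthropogenic forcing':28s} {net:+.2f}  (article: ~+1.6)")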


Land-use change

Land use in Europe.

Encyclopædia Britannica, Inc.

There are a number of ways in which changes in land use can influence climate. The most direct influence is through the alteration of Earth’s albedo, or surface reflectance. For example, the replacement of forest by cropland and pasture in the middle latitudes over the past several centuries has led to an increase in albedo, which in turn has led to greater reflection of incoming solar radiation in those regions. This replacement of forest by agriculture has been associated with a change in global average radiative forcing of approximately –0.2 watt per square metre since 1750. In Europe and other major agricultural regions, such land-use conversion began more than 1,000 years ago and has proceeded nearly to completion. For Europe, the negative radiative forcing due to land-use change has probably been substantial, perhaps approaching –5 watts per square metre. The influence of early land use on radiative forcing may help to explain a long period of cooling in Europe that followed a period of relatively mild conditions roughly 1,000 years ago. It is generally believed that the mild temperatures of this “medieval warm period” rivaled those of 20th-century Europe.

Land-use changes can also influence climate through their influence on the exchange of heat between Earth’s surface and the atmosphere. For example, vegetation helps to facilitate the evaporation of water into the atmosphere through evapotranspiration. In this process, plants take up liquid water from the soil through their root systems. Eventually this water is released as water vapour into the atmosphere through the stomata in leaves, a process called transpiration. While deforestation generally leads to surface cooling due to the albedo factor discussed above, the land surface may also be warmed because reduced evapotranspiration carries less latent heat away from the surface. The relative importance of these two factors, one exerting a cooling effect and the other a warming effect, varies by both season and region. While the albedo effect is likely to dominate in middle latitudes, especially during the period from autumn through spring, the evapotranspiration effect may dominate during the summer in the midlatitudes and year-round in the tropics. The latter case is particularly important in assessing the potential impacts of continued tropical deforestation.

The rate at which tropical regions are deforested is also relevant to the process of carbon sequestration (see Carbon cycle feedbacks), the long-term storage of carbon in underground cavities and biomass rather than in the atmosphere. By removing carbon from the atmosphere, carbon sequestration acts to mitigate global warming. Deforestation contributes to global warming, as fewer plants are available to take up carbon dioxide from the atmosphere. In addition, as fallen trees, shrubs, and other plants are burned or allowed to slowly decompose, they release as carbon dioxide the carbon they stored during their lifetimes. Furthermore, any land-use change that influences the amount, distribution, or type of vegetation in a region can affect the concentrations of biogenic aerosols, though the impact of such changes on climate is indirect and relatively minor.

Stratospheric ozone depletion

Since the 1970s the loss of ozone (O3) from the stratosphere has led to a small amount of negative radiative forcing of the surface. This negative forcing represents a competition between two distinct effects caused by the fact that ozone absorbs solar radiation. In the first case, as ozone levels in the stratosphere are depleted, more solar radiation reaches Earth’s surface. In the absence of any other influence, this rise in insolation would represent a positive radiative forcing of the surface. However, there is a second effect of ozone depletion that is related to its greenhouse properties. As the amount of ozone in the stratosphere is decreased, there is also less ozone to absorb longwave radiation emitted by Earth’s surface. With less absorption of radiation by ozone, there is a corresponding decrease in the downward reemission of radiation. This second effect overwhelms the first and results in a modest negative radiative forcing of Earth’s surface and a modest cooling of the lower stratosphere by approximately 0.5 °C (0.9 °F) per decade since the 1970s.

Natural influences on climate

There are a number of natural factors that influence Earth’s climate. These factors include external influences such as explosive volcanic eruptions, natural variations in the output of the Sun, and slow changes in the configuration of Earth’s orbit relative to the Sun. In addition, there are natural oscillations in Earth’s climate that alter global patterns of wind circulation, precipitation, and surface temperatures. One such phenomenon is the El Niño/Southern Oscillation (ENSO), a coupled atmospheric and oceanic event that occurs in the Pacific Ocean every three to seven years. In addition, the Atlantic Multidecadal Oscillation (AMO) is a similar phenomenon that occurs over decades in the North Atlantic Ocean. Other types of oscillatory behaviour that produce dramatic shifts in climate may occur across timescales of centuries and millennia (see climatic variation and change).

Volcanic aerosols

A column of gas and ash rising from Mount Pinatubo in the Philippines on June 12, 1991, just days…

David H. Harlow/U.S. Geological Survey

Explosive volcanic eruptions have the potential to inject substantial amounts of sulfate aerosols into the lower stratosphere. In contrast to aerosol emissions in the lower troposphere (see above Aerosols), aerosols that enter the stratosphere may remain for several years before settling out, because of the relative absence of turbulent motions there. Consequently, aerosols from explosive volcanic eruptions have the potential to affect Earth’s climate. Less-explosive eruptions, or eruptions that are less vertical in orientation, have a lower potential for substantial climate impact. Furthermore, because of large-scale circulation patterns within the stratosphere, aerosols injected within tropical regions tend to spread out over the globe, whereas aerosols injected within midlatitude and polar regions tend to remain confined to the middle and high latitudes of that hemisphere. Tropical eruptions, therefore, tend to have a greater climatic impact than eruptions occurring toward the poles. In 1991 the moderate eruption of Mount Pinatubo in the Philippines provided a peak forcing of approximately –4 watts per square metre and cooled the climate by about 0.5 °C (0.9 °F) over the following few years. By comparison, the 1815 Mount Tambora eruption in present-day Indonesia, typically implicated for the 1816 “year without a summer” in Europe and North America, is believed to have been associated with a radiative forcing of approximately –6 watts per square metre.

While in the stratosphere, volcanic sulfate aerosol actually absorbs longwave radiation emitted by Earth’s surface, and absorption in the stratosphere tends to result in a cooling of the troposphere below. This vertical pattern of temperature change in the atmosphere influences the behaviour of winds in the lower atmosphere, primarily in winter. Thus, while there is essentially a global cooling effect for the first few years following an explosive volcanic eruption, changes in the winter patterns of surface winds may actually lead to warmer winters in some areas, such as Europe. Some modern examples of major eruptions include Krakatoa (Indonesia) in 1883, El Chichón (Mexico) in 1982, and Mount Pinatubo in 1991. There is also evidence that volcanic eruptions may influence other climate phenomena such as ENSO.

Variations in solar output

Twelve solar X-ray images obtained by Yohkoh between 1991 and 1995. The solar coronal brightness…

G.L. Slater and G.A. Linford; S.L. Freeland; the Yohkoh Project

The trend shown in the longer reconstruction was inferred by Lean (2000) from modeling the changes…

Encyclopædia Britannica, Inc.


Monthly satellite measurements of total solar irradiance since 1980 comparing NASA's ACRIMSAT data…

Encyclopædia Britannica, Inc.

Direct measurements of solar irradiance, or solar output, have been available from satellites only since the late 1970s. These measurements show a very small peak-to-peak variation in solar irradiance (roughly 0.1 percent of the 1,366 watts per square metre received at the top of the atmosphere, or approximately 1.4 watts per square metre). However, indirect measures of solar activity are available from historical sunspot measurements dating back through the early 17th century. Attempts have been made to reconstruct graphs of solar irradiance variations from historical sunspot data by calibrating them against the measurements from modern satellites. However, since the modern measurements span only a few of the most recent 11-year solar cycles, estimates of solar output variability on 100-year and longer timescales are poorly constrained. Different assumptions regarding the relationship between the amplitudes of 11-year solar cycles and long-period solar output changes can lead to considerable differences in the resulting solar reconstructions. These differences in turn lead to fairly large uncertainty in estimating positive forcing by changes in solar irradiance since 1750. (Estimates range from 0.06 to 0.3 watt per square metre.) Even more challenging, given the lack of any modern analog, is the estimation of solar irradiance during the so-called Maunder Minimum, a period lasting from the mid-17th century to the early 18th century when very few sunspots were observed. While it is likely that solar irradiance was reduced at this time, it is difficult to calculate by how much. However, additional proxies of solar output exist that match reasonably well with the sunspot-derived records following the Maunder Minimum; these may be used as crude estimates of the solar irradiance variations.
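For comparison with the anthropogenic forcings discussed earlier, the quoted 0.1 percent solar-cycle variation can be converted into a globally averaged forcing using the same disk-to-sphere geometry and planetary albedo introduced above. The calculation below is a rough, illustrative check rather than a figure from the article.

# Converting the ~0.1 percent solar-cycle variation into a global forcing.

S0 = 1366.0           # W/m^2 at the top of the atmosphere
delta_S = 0.001 * S0  # ~1.4 W/m^2 peak-to-peak over a solar cycle
ALBEDO = 0.30         # fraction reflected straight back to space

delta_F = delta_S * (1.0 - ALBEDO) / 4.0  # average over the sphere, discount reflection
print(f"Peak-to-peak irradiance change: {delta_S:.1f} W/m^2")
print(f"Equivalent global forcing:      {delta_F:.2f} W/m^2")
# ~0.24 W/m^2 -- the same order as the 0.06-0.3 W/m^2 estimates of solar
# forcing since 1750 quoted above, and small next to the ~1.6 W/m^2 net
# anthropogenic forcing.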

In theory it is possible to estimate solar irradiance even farther back in time, over at least the past millennium, by measuring levels of cosmogenic isotopes such as carbon-14 and beryllium-10. Cosmogenic isotopes are isotopes that are formed by interactions of cosmic rays with atomic nuclei in the atmosphere and that subsequently fall to Earth, where they can be measured in the annual layers found in ice cores. Since their production rate in the upper atmosphere is modulated by changes in solar activity, cosmogenic isotopes may be used as indirect indicators of solar irradiance. However, as with the sunspot data, there is still considerable uncertainty in the amplitude of past solar variability implied by these data.


The layers of Earth's atmosphere, with a yellow line showing the air temperature at various heights.

Encyclopædia Britannica, Inc.

Solar forcing also affects the photochemical reactions that manufacture ozone in the stratosphere. Through this modulation of stratospheric ozone concentrations, changes in solar irradiance (particularly in the ultraviolet portion of the electromagnetic spectrum) can modify how both shortwave and longwave radiation in the lower stratosphere are absorbed. As a result, the vertical temperature profile of the atmosphere can change, and this change can in turn influence phenomena such as the strength of the winter jet streams.

Variations in Earth’s orbit

Earth's axis of rotation itself rotates, or precesses, completing one circle every 26,000 years.…

Encyclopædia Britannica, Inc.

On timescales of tens of millennia, the dominant radiative forcing of Earth’s climate is associated with slow variations in the geometry of Earth’s orbit about the Sun. These variations include the precession of the equinoxes (that is, changes in the timing of summer and winter), occurring on a roughly 26,000-year timescale; changes in the tilt angle of Earth’s rotational axis relative to the plane of Earth’s orbit around the Sun, occurring on a roughly 41,000-year timescale; and changes in the eccentricity (the departure from a perfect circle) of Earth’s orbit around the Sun, occurring on a roughly 100,000-year timescale. Changes in eccentricity slightly influence the mean annual solar radiation at the top of Earth’s atmosphere, but the primary influence of all the orbital variations listed above is on the seasonal and latitudinal distribution of incoming solar radiation over Earth’s surface. The major ice ages of the Pleistocene Epoch were closely related to the influence of these variations on summer insolation at high northern latitudes. Orbital variations thus exerted a primary control on the extent of continental ice sheets. However, Earth’s orbital changes are generally believed to have had little impact on climate over the past few millennia, and so they are not considered to be significant factors in present-day climate variability.

Feedback mechanisms and climate sensitivity

There are a number of feedback processes important to Earth’s climate system and, in particular, its response to external radiative forcing. The most fundamental of these feedback mechanisms involves the loss of longwave radiation to space from the surface. Since this radiative loss increases with increasing surface temperatures according to the Stefan-Boltzmann law, it represents a stabilizing factor (that is, a negative feedback) with respect to near-surface air temperature.

Climate sensitivity can be defined as the amount of surface warming resulting from each additional watt per square metre of radiative forcing. Alternatively, it is sometimes defined as the warming that would result from a doubling of CO2 concentrations and the associated addition of 4 watts per square metre of radiative forcing. In the absence of any additional feedbacks, climate sensitivity would be approximately 0.25 °C (0.45 °F) for each additional watt per square metre of radiative forcing. Stated alternatively, if the CO2 concentration of the atmosphere present at the start of the industrial age (280 ppm) were doubled (to 560 ppm), the resulting additional 4 watts per square metre of radiative forcing would translate into a 1 °C (1.8 °F) increase in air temperature. However, there are additional feedbacks that exert a destabilizing, rather than stabilizing, influence (see below), and these feedbacks tend to increase the sensitivity of climate to somewhere between 0.5 and 1.0 °C (0.9 and 1.8 °F) for each additional watt per square metre of radiative forcing.
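The arithmetic in this definition is easy to verify: warming equals sensitivity (degrees per watt per square metre) multiplied by forcing. The sketch below applies the sensitivity values quoted above to the 4 watts per square metre associated with a doubling of CO2.

# Warming per CO2 doubling = sensitivity (degC per W/m^2) x forcing (W/m^2),
# using the values quoted in the paragraph above.

DOUBLING_FORCING = 4.0  # W/m^2 for a doubling of CO2

sensitivities = {
    "no additional feedbacks": 0.25,
    "with feedbacks (low end)": 0.5,
    "with feedbacks (high end)": 1.0,
}

for label, s in sensitivities.items():
    print(f"{label:26s} -> {s * DOUBLING_FORCING:.1f} degC per doubling")
# no additional feedbacks -> 1.0 degC; with feedbacks -> 2.0-4.0 degC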

Water vapour feedback

Unlike concentrations of other greenhouse gases, the concentration of water vapour in the atmosphere cannot freely vary. Instead, it is determined by the temperature of the lower atmosphere and surface through a physical relationship known as the Clausius-Clapeyron equation, named for 19th-century German physicist Rudolf Clausius and 19th-century French engineer Émile Clapeyron. Under the assumption that there is a liquid water surface in equilibrium with the atmosphere, this relationship indicates that an increase in the capacity of air to hold water vapour is a function of increasing temperature of that volume of air. This assumption is relatively good over the oceans, where water is plentiful, but not over the continents. For this reason the relative humidity (the percent of water vapour the air contains relative to its capacity) is approximately 100 percent over ocean regions and much lower over continental regions (approaching 0 percent in arid regions). Not surprisingly, the average relative humidity of Earth’s lower atmosphere is similar to the fraction of Earth’s surface covered by the oceans (that is, roughly 70 percent). This quantity is expected to remain approximately constant as Earth warms or cools. Slight changes to global relative humidity may result from human land-use modification, such as tropical deforestation and irrigation, which can affect the relative humidity over land areas up to regional scales.
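The Clausius-Clapeyron behaviour can be illustrated numerically with the Magnus approximation for saturation vapour pressure over liquid water. The formula and its coefficients are a standard meteorological approximation, assumed here for illustration; the article itself names the relationship but gives no equation.

import math

# Magnus approximation for saturation vapour pressure over liquid water
# (es in hPa, temperature in degrees Celsius) -- an assumed standard
# formula, used to show how moisture-holding capacity rises with warmth.

def saturation_vapour_pressure(t_celsius: float) -> float:
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

for t in (0, 10, 20, 30):
    es = saturation_vapour_pressure(t)
    growth = saturation_vapour_pressure(t + 1.0) / es - 1.0
    print(f"{t:2d} degC: es = {es:6.2f} hPa, ~{100 * growth:.1f}% more per additional degC")
# Capacity grows roughly 6-7 percent per degree of warming, the physical
# basis of the water vapour feedback described below.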

The amount of water vapour in the atmosphere will rise as the temperature of the atmosphere rises. Since water vapour is a very potent greenhouse gas, even more potent than CO2, the net greenhouse effect actually becomes stronger as the surface warms, which leads to even greater warming. This positive feedback is known as the “water vapour feedback.” It is the primary reason that climate sensitivity is substantially greater than the previously stated theoretical value of 0.25 °C (0.45 °F) for each increase of 1 watt per square metre of radiative forcing.

Cloud feedbacks

Different types of clouds form at different heights.

Encyclopædia Britannica, Inc.

It is generally believed that as Earth’s surface warms and the atmosphere’s water vapour content increases, global cloud cover increases. However, the effects on near-surface air temperatures are complicated. In the case of low clouds, such as marine stratus clouds, the dominant radiative feature of the cloud is its albedo. Here any increase in low cloud cover acts in much the same way as an increase in surface ice cover: more incoming solar radiation is reflected and Earth’s surface cools. On the other hand, high clouds, such as the towering cumulus clouds that extend up to the boundary between the troposphere and stratosphere, have a quite different impact on the surface radiation balance. The tops of cumulus clouds are considerably higher in the atmosphere and colder than their undersides. Cumulus cloud tops emit less longwave radiation out to space than the warmer cloud bottoms emit downward toward the surface. The end result of the formation of high cumulus clouds is greater warming at the surface.

The net feedback of clouds on rising surface temperatures is therefore somewhat uncertain. It represents a competition between the impacts of high and low clouds, and the balance is difficult to determine. Nonetheless, most estimates indicate that clouds on the whole represent a positive feedback and thus additional warming.

Ice albedo feedback

Another important positive climate feedback is the so-called ice albedo feedback. This feedback arises from the simple fact that ice is more reflective (that is, has a higher albedo) than land or water surfaces. Therefore, as global ice cover decreases, the reflectivity of Earth’s surface decreases, more incoming solar radiation is absorbed by the surface, and the surface warms. This feedback is considerably more important when there is relatively extensive global ice cover, such as during the height of the last ice age, roughly 25,000 years ago. On a global scale the importance of ice albedo feedback decreases as Earth’s surface warms and there is relatively less ice available to be melted.


Carbon cycle feedbacks

Another important set of climate feedbacks involves the global carbon cycle. In particular, the two main reservoirs of carbon in the climate system are the oceans and the terrestrial biosphere. These reservoirs have historically taken up large amounts of anthropogenic CO2 emissions. Roughly 50–70 percent is removed by the oceans, whereas the remainder is taken up by the terrestrial biosphere. Global warming, however, could decrease the capacity of these reservoirs to sequester atmospheric CO2. Reductions in the rate of carbon uptake by these reservoirs would increase the pace of CO2 buildup in the atmosphere and represent yet another possible positive feedback to increased greenhouse gas concentrations.

In the world’s oceans, this feedback effect might take several paths. First, as surface waters warm, they would hold less dissolved CO2. Second, if more CO2 were added to the atmosphere and taken up by the oceans, bicarbonate ions (HCO3−) would multiply and ocean acidity would increase. Since calcium carbonate (CaCO3) is broken down by acidic solutions, rising acidity would threaten ocean-dwelling fauna that incorporate CaCO3 into their skeletons or shells. As it becomes increasingly difficult for these organisms to absorb oceanic carbon, there would be a corresponding decrease in the efficiency of the biological pump that helps to maintain the oceans as a carbon sink (as described in the section Carbon dioxide). Third, rising surface temperatures might lead to a slowdown in the so-called thermohaline circulation (see Ocean circulation changes), a global pattern of oceanic flow that partly drives the sinking of surface waters near the poles and is responsible for much of the burial of carbon in the deep ocean. A slowdown in this flow due to an influx of melting fresh water into what are normally saltwater conditions might also cause the solubility pump, which transfers CO2 from shallow to deeper waters, to become less efficient. Indeed, it is predicted that if global warming continued to a certain point, the oceans would cease to be a net sink of CO2 and would become a net source.

As large sections of tropical forest are lost because of the warming and drying of regions such as Amazonia, the overall capacity of plants to sequester atmospheric CO2 would be reduced. As a result, the terrestrial biosphere, though currently a carbon sink, would become a carbon source. Ambient temperature is a significant factor affecting the pace of photosynthesis in plants, and many plant species that are well adapted to their local climatic conditions have maximized their photosynthetic rates. As temperatures increase and conditions begin to exceed the optimal temperature range for both photosynthesis and soil respiration, the rate of photosynthesis would decline. As dead plants decompose, microbial metabolic activity (a CO2 source) would increase and would eventually outpace photosynthesis.

Under sufficient global warming conditions, methane sinks in the oceans and terrestrial biosphere also might become methane sources. Annual emissions of methane by wetlands might either increase or decrease, depending on temperatures and input of nutrients, and it is possible that wetlands could switch from source to sink. There is also the potential for increased methane release as a result of the warming of Arctic permafrost (on land) and further methane release at the continental margins of the oceans (a few hundred metres below sea level). The current average atmospheric methane concentration of 1,750 ppb is equivalent to 3.5 gigatons (3.5 billion tons) of carbon. There are at least 400 gigatons of carbon equivalent stored in Arctic permafrost and as much as 10,000 gigatons (10 trillion tons) of carbon equivalent trapped on the continental margins of the oceans in a hydrated crystalline form known as clathrate. It is believed that some fraction of this trapped methane could become unstable with additional warming, although the amount and rate of potential emission remain highly uncertain.
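The arithmetic behind the 1,750-ppb figure can be checked with a quick conversion from mixing ratio to mass of carbon. The sketch below is a back-of-the-envelope estimate that assumes round values for the total mass of the atmosphere and the mean molar mass of air; it is illustrative, not a precise inventory.

```python
# Back-of-the-envelope check: converting an atmospheric methane mixing
# ratio (ppb) into gigatons of carbon equivalent. The atmospheric mass
# and molar-mass values below are assumed round numbers.

ATM_MASS_KG = 5.15e18      # approximate total mass of the atmosphere
AIR_MOLAR_MASS = 0.02896   # kg/mol, mean molar mass of dry air
CARBON_MOLAR_MASS = 12.0   # g/mol

def methane_ppb_to_gigatons_carbon(ppb: float) -> float:
    """Convert a CH4 mixing ratio in parts per billion to gigatons of carbon."""
    moles_air = ATM_MASS_KG / AIR_MOLAR_MASS      # ~1.8e20 mol of air
    moles_ch4 = moles_air * ppb * 1e-9            # mole fraction of CH4
    grams_carbon = moles_ch4 * CARBON_MOLAR_MASS  # one carbon atom per CH4
    return grams_carbon / 1e15                    # grams -> gigatons

print(f"{methane_ppb_to_gigatons_carbon(1750):.1f} Gt C")
# ~3.7 Gt, close to the 3.5 Gt cited above given the rounded inputs
```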


Climate research

Learn about carbon dioxide and its relationship to warming conditions at Earth's surface, as…

Encyclopædia Britannica, Inc.

Modern research into climatic variation and change is based on a variety of empirical and theoretical lines of inquiry. One line of inquiry is the analysis of data that record changes in atmosphere, oceans, and climate from roughly 1850 to the present. In a second line of inquiry, information describing paleoclimatic changes is gathered from “proxy,” or indirect, sources such as ocean and lake sediments, pollen grains, corals, ice cores, and tree rings. Finally, a variety of theoretical models can be used to investigate the behaviour of Earth’s climate under different conditions. These three lines of investigation are described in this section.

Modern observations

Although a limited regional subset of land-based records is available from the 17th and 18th centuries, instrumental measurements of key climate variables have been collected systematically and at global scales since the mid-19th to early 20th century. These data include measurements of surface temperature on land and at sea, atmospheric pressure at sea level, precipitation over continents and oceans, sea ice extents, surface winds, humidity, and tides. Such records are the most reliable of all available climate data, since they are precisely dated and are based on well-understood instruments and physical principles. Corrections must be made for uncertainties in the data (for instance, gaps in the observational record, particularly during earlier years) and for systematic errors (such as an “urban heat island” bias in temperature measurements made on land).

Dr. Gavin Schmidt of the Goddard Institute for Space Studies (GISS) discussing the role of climate…

GSFC/NASA

Since the mid-20th century a variety of upper-air observations have become available (for example, of temperature, humidity, and winds), allowing climatic conditions to be characterized from the ground upward through the upper troposphere and lower stratosphere. Since the 1970s these data have been supplemented by polar-orbiting and geostationary satellites and by platforms in the oceans that gauge temperature, salinity, and other properties of seawater. Attempts have been made to fill the gaps in early measurements by using various statistical techniques and “backward prediction” models and by assimilating available observations into numerical weather prediction models. These techniques seek to estimate meteorological observations or atmospheric variables (such as relative humidity) that have been poorly measured in the past.

Modern measurements of greenhouse gas concentrations began with an investigation of atmospheric carbon dioxide (CO2) concentrations by American climate scientist Charles Keeling at the summit of Mauna Loa in Hawaii in 1958. Keeling’s findings indicated that CO2 concentrations were steadily rising in association with the combustion of fossil fuels, and they also yielded the famous “Keeling curve,” a graph in which small oscillations, related to seasonal variations in the uptake and release of CO2 from photosynthesis and respiration in the terrestrial biosphere, are superimposed on a longer-term rising trend. Keeling’s measurements at Mauna Loa apply primarily to the Northern Hemisphere.
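The structure of the Keeling curve, a long-term rise with a seasonal oscillation superimposed, can be illustrated with synthetic data. The growth rate and seasonal amplitude below are round illustrative numbers, not the actual Mauna Loa record.

```python
# Synthetic "Keeling curve": a linear rise plus a seasonal sine wave.
import numpy as np

t = np.arange(0, 10, 1.0 / 12)           # 10 years of monthly time steps
trend = 315.0 + 1.5 * t                  # assumed ~1.5 ppm/yr growth
seasonal = 3.0 * np.sin(2 * np.pi * t)   # assumed ~3 ppm seasonal swing
co2 = trend + seasonal

# A least-squares straight-line fit recovers the long-term trend even
# though the individual monthly values oscillate around it.
slope, intercept = np.polyfit(t, co2, 1)
print(f"fitted long-term trend: {slope:.2f} ppm per year")
```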

Taking into account the uncertainties, the instrumental climate record indicates substantial trends since the end of the 19th century consistent with a warming Earth. These trends include a rise in global surface temperature of 0.9 °C (1.5 °F) between 1880 and 2012, an associated elevation of global sea level of 19–21 cm (7.5–8.3 inches) between 1901 and 2010, and a decrease in snow cover in the Northern Hemisphere of approximately 1.5 million square km (580,000 square miles). Records of average global temperatures kept by the World Meteorological Organization (WMO) indicate that the years 1998, 2005, and 2010 are statistically tied with one another as the warmest years since modern record keeping began in 1880; the WMO also noted that the decade 2001–10 was the warmest decade since 1880. Increases in global sea level are attributed to a combination of seawater expansion due to ocean heating and freshwater runoff caused by the melting of terrestrial ice. Reductions in snow cover are the result of warmer temperatures favouring a steadily shrinking winter season.

Climate data collected during the first two decades of the 21st century reveal that surface warming between 2005 and 2014 proceeded slightly more slowly than was expected from the effect of greenhouse gas increases alone. This fact was sometimes used to suggest that global warming had stopped or that it experienced a “hiatus” or “pause.” In reality, this phenomenon appears to have been influenced by several factors, none of which, however, implies that global warming stopped during this period or that global warming would not continue in the future. One factor was the increased burial of heat beneath the ocean surface by strong trade winds, a process assisted by La Niña conditions. The effects of La Niña manifest in the form of cooling surface waters along the western coast of South America. As a result, warming at the ocean surface was reduced, but the accumulation of heat in other parts of the ocean occurred at an accelerated rate. Another factor cited by climatologists was a small but potentially important increase in aerosols from volcanic activity, which may have blocked a small portion of incoming solar radiation and which were accompanied by a small reduction in solar output during the period. These factors, along with natural decades-long oscillations in the climate system, may have masked a portion of the greenhouse warming. (However, climatologists point out that these natural climate cycles are expected to add to greenhouse warming in the future when the oscillations eventually reverse direction.) For these reasons many scientists believe that it is an error to call this slowdown in detectable surface warming a “hiatus” or a “pause.”

Prehistorical climate records

In order to reconstruct climate changes that occurred prior to about the mid-19th century, it is necessary to use “proxy” measurements—that is, records of other natural phenomena that indirectly measure various climate conditions. Some proxies, such as most sediment cores and pollen records, glacial moraine evidence, and geothermal borehole temperature profiles, are coarsely resolved or dated and thus are only useful for describing climate changes on long timescales. Other proxies, such as growth rings from trees or oxygen isotopes from corals and ice cores, can provide a record of yearly or even seasonal climate changes.

The data from these proxies should be calibrated to known physical principles or related statistically to the records collected by modern instruments, such as satellites. Networks of proxy data can then be used to infer patterns of change in climate variables, such as the behaviour of surface temperature over time and geography. Yearly reconstructions of climate variables are possible over the past 1,000 to 2,000 years using annually dated proxy records, but reconstructions farther back in time are generally based on more coarsely resolved evidence such as ocean sediments and pollen records. For these, records of conditions can be reconstructed only on timescales of hundreds or thousands of years. In addition, since relatively few long-term proxy records are available for the Southern Hemisphere, most reconstructions focus on the Northern Hemisphere.
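As a toy illustration of this calibration step, the sketch below regresses a synthetic “proxy” series against synthetic instrumental temperatures over an overlap period and then inverts the fitted line to estimate temperature from proxy values alone. All numbers and the linear form are assumptions for illustration; real reconstructions use far more elaborate statistical methods.

```python
# Minimal sketch of proxy calibration: regress a proxy (e.g., tree-ring
# width) against instrumental temperature over an overlap period, then
# apply the fitted relation to the pre-instrumental proxy record.
# All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
temps = 14.0 + 0.5 * rng.standard_normal(100)   # instrumental era (deg C)
rings = 1.2 + 0.3 * (temps - 14.0) + 0.05 * rng.standard_normal(100)

# Fit ring width as a linear function of temperature, then invert it.
slope, intercept = np.polyfit(temps, rings, 1)  # rings ~ intercept + slope*temps

def reconstruct_temperature(ring_width: float) -> float:
    """Infer temperature from a proxy value via the calibration line."""
    return (ring_width - intercept) / slope

print(f"{reconstruct_temperature(1.35):.2f} deg C")  # proxy-only estimate
```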

The various proxy-based reconstructions of the average surface temperature of the Northern Hemisphere differ in their details. These differences are the result of uncertainties implicit in the proxy data themselves and also of differences in the statistical methods used to relate the proxy data to surface temperature. Nevertheless, all of the studies reviewed in the IPCC’s Fourth Assessment Report (AR4), published in 2007, indicate that the average surface temperature since about 1950 has been higher than at any time during the previous 1,000 years.

Theoretical climate models

To understand and explain the complex behaviour of Earth's climate, modern climate models…

Encyclopædia Britannica, Inc.

Theoretical models of Earth’s climate system can be used to investigate the response of climate to external radiative forcing as well as its own internal variability. Two or more models that focus on different physical processes may be coupled or linked together through a common feature, such as geographic location. Climate models vary considerably in their degree of complexity. The simplest models of energy balance describe Earth’s surface as a globally uniform layer whose temperature is determined by a balance of incoming and outgoing shortwave and longwave radiation. These simple models may also consider the effects of greenhouse gases. At the other end of the spectrum are fully coupled, three-dimensional, global climate models. These are complex models that solve for radiative balance; for laws of motion governing the atmosphere, ocean, and ice; and for exchanges of energy and momentum within and between the different components of the climate. In some cases, theoretical climate models also include an interactive representation of Earth’s biosphere and carbon cycle.
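The simplest energy-balance model mentioned above can be written in a few lines. It treats Earth as a single uniform layer and solves the balance (1 − albedo) · S/4 = ε σ T⁴ for the equilibrium temperature; the effective emissivity ε, standing in for the greenhouse effect, is a tuned illustrative value rather than a measured constant.

```python
# A zero-dimensional energy-balance model: Earth as a uniform layer whose
# temperature balances absorbed sunlight against emitted longwave
# radiation. The emissivity is a crude, tuned proxy for the greenhouse
# effect, chosen so the answer lands near the observed mean temperature.

SOLAR_CONSTANT = 1361.0  # W/m^2, incoming solar radiation
ALBEDO = 0.3             # planetary reflectivity
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.61        # effective emissivity (illustrative value)

def equilibrium_temperature(albedo: float, emissivity: float) -> float:
    """Solve (1 - albedo) * S / 4 = emissivity * sigma * T^4 for T (kelvins)."""
    absorbed = (1.0 - albedo) * SOLAR_CONSTANT / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(f"{equilibrium_temperature(ALBEDO, EMISSIVITY) - 273.15:.1f} deg C")  # ~15 deg C
```

Lowering the albedo parameter raises the equilibrium temperature, which is the essence of the ice albedo feedback discussed earlier; lowering the emissivity (a stronger greenhouse effect) pushes the temperature in the same direction.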

Even the most-detailed climate models cannot resolve all the processes that are important in the atmosphere and ocean. Most climate models are designed to gauge the behaviour of a number of physical variables over space and time, and they often artificially divide Earth’s surface into a grid of many equal-sized “cells.” Each cell may be assigned a relatively straightforward value for a given physical variable (such as summer near-surface air temperature) or surface characteristic (such as land-use type). So-called “sub-grid-scale” processes, such as those of clouds, are too small to be captured by the relatively coarse spacing of the individual grid cells. Instead, such processes must be represented through statistical relationships among the properties of the atmosphere and ocean that the model does resolve. For example, the average fraction of cloud cover over a hypothetical “grid box” (that is, a representative volume of air or water in the model) can be estimated from the average relative humidity and the vertical temperature profile of the grid cell, as in the sketch below. Variations in the behaviour of different coupled climate models arise in large part from differences in the ways sub-grid-scale processes are mathematically expressed.
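To make the idea of a sub-grid-scale parameterization concrete, the sketch below diagnoses a grid cell’s cloud fraction from its mean relative humidity. The threshold and the quadratic ramp are arbitrary illustrative choices, not the scheme used by any actual climate model.

```python
# Toy "sub-grid-scale" parameterization: diagnose the cloud fraction of a
# grid cell from its mean relative humidity. The critical threshold (0.6)
# and the quadratic ramp are illustrative assumptions only.

def cloud_fraction(relative_humidity: float, rh_critical: float = 0.6) -> float:
    """Return a 0-1 cloud fraction that rises smoothly above a RH threshold."""
    if relative_humidity <= rh_critical:
        return 0.0
    frac = (relative_humidity - rh_critical) / (1.0 - rh_critical)
    return min(frac, 1.0) ** 2  # quadratic ramp, saturating at RH = 100%

for rh in (0.5, 0.7, 0.9, 1.0):
    print(f"RH {rh:.0%} -> cloud fraction {cloud_fraction(rh):.2f}")
```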

Despite these required simplifications, many theoretical climate models perform remarkably well when reproducing basic features of the atmosphere, such as the behaviour of midlatitude jet streams or Hadley cell circulation. The models also adequately reproduce important features of the oceans, such as the Gulf Stream. In addition, models are becoming better able to reproduce the main patterns of internal climate variability, such as those of El Niño/Southern Oscillation (ENSO). Consequently, periodically recurring events—such as ENSO and other interactions between the atmosphere and ocean currents—are being modeled with growing confidence.

Climate models have been tested in their ability to reproduce observed changes in response to radiative forcing. In 1988 a team at NASA’s Goddard Institute for Space Studies in New York City used a fairly primitive climate model to predict warming patterns that might occur in response to three different scenarios of anthropogenic radiative forcing. Warming patterns were forecast for subsequent decades. Of the three scenarios, the middle one, which corresponds most closely to actual historical carbon emissions, comes closest to matching the observed warming of roughly 0.5 °C (0.9 °F) that has taken place since then. The NASA team also used a climate model to successfully predict that global mean surface temperatures would cool by about 0.5 °C for one to two years after the 1991 eruption of Mount Pinatubo in the Philippines.

More recently, so-called “detection and attribution” studies have been performed. These studies compare predicted changes in near-surface air temperature and other climate variables with patterns of change that have been observed for the past one to two centuries (see below). The simulations have shown that the observed patterns of warming of Earth’s surface and upper oceans, as well as changes in other climate phenomena such as prevailing winds and precipitation patterns, are consistent with the effects of an anthropogenic influence predicted by the climate models. In addition, climate model simulations have shown success in reproducing the magnitude and the spatial pattern of cooling in the Northern Hemisphere between roughly 1400 and 1850, during the Little Ice Age, which appears to have resulted from a combination of lowered solar output and heightened explosive volcanic activity.


Potential effects of global warming

Graph of the predicted increase in the concentration of carbon dioxide (CO2) in Earth's atmosphere…

Encyclopædia Britannica, Inc.

The path of future climate change will depend on the courses of action taken by society—in particular the emission of greenhouse gases from the burning of fossil fuels. A range of alternative emissions scenarios, known as representative concentration pathways (RCPs), was proposed by the IPCC in the Fifth Assessment Report (AR5), published in 2014, to examine potential future climate changes. The scenarios depend on various assumptions concerning future rates of human population growth, economic development, energy demand, technological advancement, and other factors. Unlike the scenarios used in previous IPCC assessments, the AR5 RCPs explicitly account for climate change mitigation efforts.

The projected results of each emissions scenario are depicted in the accompanying graph.

The AR5 scenario with the smallest increases in greenhouse gases is RCP 2.6, whose name denotes a net radiative forcing of 2.6 watts per square metre by 2100 (for comparison, a doubling of CO2 concentrations from the preindustrial value of 280 ppm to 560 ppm represents roughly 3.7 watts per square metre). RCP 2.6 assumes substantial improvements in energy efficiency, a rapid transition away from fossil fuel energy, and a global population that peaks at roughly nine billion people in the 21st century. In that scenario CO2 concentrations remain below 450 ppm and actually fall toward the end of the century (to about 420 ppm) as a result of widespread deployment of carbon-capture technology.

Scenario RCP 8.5, by contrast, might be described as “business as usual.” It reflects the assumption of an energy-intensive global economy, high population growth, and a reduced rate of technological development. CO2 concentrations reach more than three times the preindustrial level (roughly 936 ppm) by 2100 and continue to grow thereafter. RCP 4.5 and RCP 6.0 envision intermediate policy choices, resulting in stabilization by 2100 of CO2 concentrations at 538 and 670 ppm, respectively. In all those scenarios, the cooling effect of industrial pollutants such as sulfate particulates, which have masked some of the past century’s warming, is assumed to decline to near zero by 2100 because of policies restricting their industrial production.
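The forcing values quoted above can be reproduced with the commonly used logarithmic approximation for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) watts per square metre, where C0 is the preindustrial concentration. Note that the RCP labels refer to the total net forcing from all agents, so these CO2-only estimates for the intermediate scenarios fall somewhat below their labels; the sketch is illustrative.

```python
# CO2-only radiative forcing from the widely used logarithmic
# approximation dF = 5.35 * ln(C / C0), in W/m^2, with C0 = 280 ppm.
import math

def co2_forcing(ppm: float, preindustrial: float = 280.0) -> float:
    """Approximate radiative forcing (W/m^2) for a CO2 concentration in ppm."""
    return 5.35 * math.log(ppm / preindustrial)

for label, ppm in [("doubling", 560), ("RCP 4.5", 538),
                   ("RCP 6.0", 670), ("RCP 8.5", 936)]:
    print(f"{label}: {co2_forcing(ppm):.1f} W/m^2")  # doubling gives ~3.7
```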

Simulations of future climate change

The differences between the various simulations arise from disparities between the climate models used and from the assumptions made by each emissions scenario. For example, best estimates of the predicted increases in global surface temperature between the years 2000 and 2100 range from about 0.3 to 4.8 °C (0.5 to 8.6 °F), depending on which emissions scenario is assumed and which climate model is used. Relative to preindustrial (i.e., 1750–1800) temperatures, these estimates reflect an overall warming of the globe of 1.4 to 5.0 °C (2.5 to 9.0 °F). These projections are conservative in that they do not take into account potential positive carbon cycle feedbacks (see above Feedback mechanisms and climate sensitivity). Only the lower-end emissions scenario RCP 2.6 has a reasonable chance (roughly 50 percent) of holding additional global surface warming by 2100 to less than 2.0 °C (3.6 °F)—a level considered by many scientists to be the threshold above which pervasive and extreme climatic effects will occur.

Patterns of warming

Projected changes in mean surface temperatures by the late 21st century according to the A1B climate …

Encyclopædia Britannica, Inc.

The greatest increase in near-surface air temperature is occurring over the polar region of the…

Encyclopædia Britannica, Inc./Kenny Chmielewski

The greatest increase in near-surface air temperature since the 1990s is occurring over the polar region of the Northern Hemisphere, largely because of the melting of sea ice and the associated reduction in surface albedo. Greater warming is predicted over land areas than over the ocean. Because the oceans warm more slowly, owing to their greater specific heat, the Northern Hemisphere, which has proportionally less of its surface area covered by water than the Southern Hemisphere, is expected to warm faster. Some of the regional variation in predicted warming is expected to arise from changes to wind patterns and ocean currents in response to surface warming. For example, the warming of the region of the North Atlantic Ocean just south of Greenland is expected to be slight. This anomaly is projected to arise from a weakening of warm northward ocean currents combined with a shift in the jet stream that will bring colder polar air masses to the region.

Precipitation patterns

Flood control in the Netherlands.

Encyclopædia Britannica, Inc.

Projected changes in mean precipitation by the late 21st century according to the A1B climate change …

Encyclopædia Britannica, Inc.

The climate changes associated with global warming are also projected to lead to changes in precipitation patterns across the globe. Increased precipitation is predicted in the polar and subpolar regions, whereas decreased precipitation is projected for the middle latitudes of both hemispheres as a result of the expected poleward shift in the jet streams. Whereas precipitation near the Equator is predicted to increase, it is thought that rainfall in the subtropics will decrease. Both phenomena are associated with a forecasted strengthening of the tropical Hadley cell pattern of atmospheric circulation.

Changes in precipitation patterns are expected to increase the chances of both drought and flood conditions in many areas. Decreased summer precipitation in North America, Europe, and Africa, combined with greater rates of evaporation due to warming surface temperatures, is projected to lead to decreased soil moisture and drought in many regions. Furthermore, since anthropogenic climate change will likely lead to a more vigorous hydrologic cycle with greater rates of both evaporation and precipitation, there will be a greater probability for intense precipitation and flooding in many regions.


Regional predictions

Regional predictions of future climate change remain limited by uncertainties in how the precise patterns of atmospheric winds and ocean currents will vary with increased surface warming. For example, some uncertainty remains in how the frequency and magnitude of El Niño/Southern Oscillation (ENSO) events will adjust to climate change. Since ENSO is one of the most prominent sources of interannual variations in regional patterns of precipitation and temperature, any uncertainty in how it will change implies a corresponding uncertainty in certain regional patterns of climate change. For example, increased El Niño activity would likely lead to more winter precipitation in some regions, such as the desert southwest of the United States. This might offset the drought predicted for those regions, but the same El Niño conditions might reduce precipitation, and thereby exacerbate drought, in locations as far away as southern Africa.

Ice melt and sea level rise

NASA image showing locations on Antarctica where temperatures had increased between 1959 and 2009.…

GSFC Scientific Visualization Studio/NASA

A warming climate holds important implications for other aspects of the global environment. Because of the slow process of heat diffusion in water, the world’s oceans are likely to continue to warm for several centuries in response to increases in greenhouse gas concentrations that have taken place so far. The combination of seawater’s thermal expansion associated with this warming and the melting of mountain glaciers is predicted to lead to an increase in global sea level of 0.45–0.82 metre (1.4–2.7 feet) by 2100 under the RCP 8.5 emissions scenario. However, the actual rise in sea level could be considerably greater than this. It is probable that the continued warming of Greenland will cause its ice sheet to melt at accelerated rates. In addition, this level of surface warming may also melt the ice sheet of West Antarctica. Paleoclimatic evidence suggests that an additional 2 °C (3.6 °F) of warming could lead to the ultimate destruction of the Greenland Ice Sheet, an event that would add another 5 to 6 metres (16 to 20 feet) to predicted sea level rise. Such an increase would submerge a substantial number of islands and lowland regions. Coastal lowland regions vulnerable to sea level rise include substantial parts of the U.S. Gulf Coast and Eastern Seaboard (including roughly the lower third of Florida), much of the Netherlands and Belgium (two of the European Low Countries), and heavily populated tropical areas such as Bangladesh. In addition, many of the world’s major cities—such as Tokyo, New York, Shanghai, and Dhaka—are located in lowland regions vulnerable to rising sea levels. With the loss of the West Antarctic ice sheet, additional sea level rise would approach 10.5 metres (34 feet).


While the current generation of models predicts that such global sea level changes might take several centuries to occur, it is possible that the rate could accelerate as a result of processes that tend to hasten the collapse of ice sheets. One such process is the development of moulins—large vertical shafts in the ice that allow surface meltwater to penetrate to the base of the ice sheet. A second process involves the vast ice shelves off Antarctica that buttress the grounded continental ice sheet of Antarctica’s interior. If those ice shelves collapse, the continental ice sheet could become unstable, slide rapidly toward the ocean, and melt, thereby further increasing mean sea level. Thus far, neither process has been incorporated into the theoretical models used to predict sea level rise.

Ocean circulation changes

Thermohaline circulation transports and mixes the water of the oceans. In the process it transports…

Another possible consequence of global warming is a decrease in the global ocean circulation system known as the “thermohaline circulation” or “great ocean conveyor belt.” This system involves the sinking of cold saline waters in the subpolar regions of the oceans, an action that helps to drive warmer surface waters poleward from the subtropics. As a result of this process, a warming influence is carried to Iceland and the coastal regions of Europe that moderates the climate in those regions. Some scientists believe that global warming could shut down this ocean current system by creating an influx of fresh water from melting ice sheets and glaciers into the subpolar North Atlantic Ocean. Since fresh water is less dense than saline water, a significant intrusion of fresh water would lower the density of the surface waters and thus inhibit the sinking motion that drives the large-scale thermohaline circulation. It has also been speculated that, as a consequence of large-scale surface warming, such changes could even trigger colder conditions in regions surrounding the North Atlantic. Experiments with modern climate models suggest that such an event would be unlikely. Instead, a moderate weakening of the thermohaline circulation might occur that would lead to a dampening of surface warming—rather than actual cooling—in the higher latitudes of the North Atlantic Ocean.
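The role of salinity in the sinking branch can be illustrated with a simplified linear equation of state for seawater. The coefficients below are typical textbook values, used only to show the direction of the effect: cooling makes surface water denser, while freshening makes it lighter and less able to sink.

```python
# Why freshwater influx can inhibit sinking: a simplified linear equation
# of state for seawater. Coefficients are typical illustrative values.

RHO0 = 1027.0        # kg/m^3, reference density
T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (psu)
ALPHA = 1.7e-4       # thermal expansion coefficient, 1/deg C
BETA = 7.6e-4        # haline contraction coefficient, 1/psu

def density(temp_c: float, salinity_psu: float) -> float:
    """Linearized seawater density about the reference state."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Subpolar surface water, before and after freshening by meltwater:
print(f"{density(4.0, 35.0):.2f} kg/m^3")  # cold and salty: dense, sinks
print(f"{density(4.0, 33.0):.2f} kg/m^3")  # freshened: lighter, resists sinking
```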

Tropical cyclones

One of the more controversial topics in the science of climate change involves the impact of global warming on tropical cyclone activity. It appears likely that rising tropical ocean temperatures associated with global warming will lead to an increase in the intensity (and the associated destructive potential) of tropical cyclones. In the Atlantic a close relationship has been observed between rising ocean temperatures and a rise in the strength of hurricanes. Trends in the intensities of tropical cyclones in other regions, such as in the tropical Pacific and Indian oceans, are more uncertain due to a paucity of reliable long-term measurements.

While the warming of oceans favours increased tropical cyclone intensities, it is unclear to what extent rising temperatures affect the number of tropical cyclones that occur each year. Other factors, such as wind shear, could play a role. If climate change increases the amount of wind shear—a factor that discourages the formation of tropical cyclones—in regions where such storms tend to form, it might partially mitigate the impact of warmer temperatures. On the other hand, changes in atmospheric winds are themselves uncertain—because of, for example, uncertainties in how climate change will affect ENSO.

Environmental consequences of global warming

The perceptible warming of Earth over the past 150 years has been caused by an increase in the…

Created and produced by QA International. © QA International, 2010. All rights reserved. www.qa-international.com

Global warming and climate change have the potential to alter biological systems. More specifically, changes to near-surface air temperatures will likely influence ecosystem functioning and thus the biodiversity of plants, animals, and other forms of life. The current geographic ranges of plant and animal species have been established by adaptation to long-term seasonal climate patterns. Because global warming is altering these patterns on timescales considerably shorter than those of past natural climate variability, relatively sudden climatic changes may challenge the natural adaptive capacity of many species.

A large fraction of plant and animal species are likely to be at an increased risk of extinction if global average surface temperatures rise another 1.5 to 2.5 °C (2.7 to 4.5 °F) by the year 2100. Species loss estimates climb to as much as 40 percent for a warming in excess of 4.5 °C (8.1 °F)—a level that could be reached in the IPCC’s higher emissions scenarios. A 40 percent extinction rate would likely lead to major changes in the food webs within ecosystems and have a destructive impact on ecosystem function.

Learn how global warming affects the migratory patterns of birds.

Contunico © ZDF Enterprises GmbH, Mainz

Surface warming in temperate regions is likely to lead to changes in various seasonal processes—for instance, earlier leaf production by trees, earlier greening of vegetation, altered timing of egg laying and hatching, and shifts in the seasonal migration patterns of birds, fishes, and other migratory animals. In high-latitude ecosystems, changes in the seasonal patterns of sea ice threaten predators such as polar bears and walruses; both species rely on broken sea ice for their hunting activities. Also in the high latitudes, a combination of warming waters, decreased sea ice, and changes in ocean salinity and circulation is likely to lead to reductions or redistributions in populations of algae and plankton. As a result, fish and other organisms that forage upon algae and plankton may be threatened. On land, rising temperatures and changes in precipitation patterns and drought frequencies are likely to alter patterns of disturbance by fires and pests.

Numerous ecologists, conservation biologists, and other scientists studying climate warn that rising surface temperatures will bring about an increased extinction risk. In 2015 one study that examined 130 extinction models developed in previous studies predicted that 5.2 percent of species would be lost with a rise in average temperatures of 2 °C (3.6 °F) above temperature benchmarks from before the onset of the Industrial Revolution. The study also predicted that 16 percent of Earth’s species would be lost if surface warming increased to about 4.3 °C (7.7 °F) above preindustrial temperature benchmarks.

Other likely impacts on the environment include the destruction of many coastal wetlands, salt marshes, and mangrove swamps as a result of rising sea levels and the loss of certain rare and fragile habitats that are often home to specialist species that are unable to thrive in other environments. For example, certain amphibians limited to isolated tropical cloud forests either have become extinct already or are under serious threat of extinction. Cloud forests—tropical forests that depend on persistent condensation of moisture in the air—are disappearing as optimal condensation levels move to higher elevations in response to warming temperatures in the lower atmosphere.

Cross section of a generalized coral polyp.

Encyclopædia Britannica, Inc.

In many cases a combination of stresses caused by climate change as well as human activity represents a considerably greater threat than either climatic stresses or nonclimatic stresses alone. A particularly important example is coral reefs, which contain much of the ocean’s biodiversity. Rising ocean temperatures increase the tendency for coral bleaching (a condition where zooxanthellae, or yellow-green algae, living in symbiosis with coral either lose their pigments or abandon the coral polyps altogether), and they also raise the likelihood of greater physical damage by progressively more destructive tropical cyclones. In many areas coral is also under stress from increased ocean acidification (see above), marine pollution, runoff from agricultural fertilizer, and physical damage by boat anchors and dredging.


Another example of how climate and nonclimatic stresses combine is illustrated by the threat to migratory animals. As these animals attempt to relocate to regions with more favourable climate conditions, they are likely to encounter impediments such as highways, walls, artificial waterways, and other man-made structures.

Anopheles mosquito, carrier of the malarial parasite.

© Razvan Cornel Constantin/Dreamstime.com

Warmer temperatures are also likely to affect the spread of infectious diseases, since the geographic ranges of carriers, such as insects and rodents, are often limited by climatic conditions. Warmer winter conditions in New York in 1999, for example, appear to have facilitated an outbreak of West Nile virus, whereas the lack of killing frosts in New Orleans during the early 1990s led to an explosion of disease-carrying mosquitoes and cockroaches. Warmer winters in the Korean peninsula and southern Europe have allowed the spread of the Anopheles mosquito, which carries the malaria parasite, whereas warmer conditions in Scandinavia in recent years have allowed for the northward advance of encephalitis.

In the southwestern United States, alternations between drought and flooding related in part to the ENSO phenomenon have created conditions favourable for the spread of hantaviruses by rodents. The spread of mosquito-borne Rift Valley fever in equatorial East Africa has also been related to wet conditions in the region associated with ENSO. Severe weather conditions conducive to rodents or insects have been implicated in infectious disease outbreaks—for instance, the outbreaks of cholera and leptospirosis that occurred after Hurricane Mitch struck Central America in 1998. Global warming could therefore affect the spread of infectious disease through its influence on ENSO or on severe weather conditions.

Socioeconomic consequences of global warming

Socioeconomic impacts of global warming could be substantial, depending on the actual temperature increases over the next century. Models predict that a net global warming of 1 to 3 °C (1.8 to 5.4 °F) beyond the late 20th-century global average would produce economic losses in some regions (particularly the tropics and high latitudes) and economic benefits in others. For warming beyond those levels, benefits would tend to decline and costs increase. For warming in excess of 4 °C (7.2 °F), models predict that costs will exceed benefits on average, with global mean economic losses estimated between 1 and 5 percent of gross domestic product. Substantial disruptions could be expected under those conditions, specifically in the areas of agriculture, food and forest products, water and energy supply, and human health.

Agricultural productivity might increase modestly in temperate regions for some crops in response to a local warming of 1–3 °C (1.8–5.4 °F), but productivity will generally decrease with further warming. For tropical and subtropical regions, models predict decreases in crop productivity for even small increases in local warming. In some cases, adaptations such as altered planting practices are projected to ameliorate losses in productivity for modest amounts of warming. An increased incidence of drought and flood events would likely lead to further decreases in agricultural productivity and to decreases in livestock production, particularly among subsistence farmers in tropical regions. In regions such as the African Sahel, decreases in agricultural productivity have already been observed as a result of shortened growing seasons, which in turn have occurred as a result of warmer and drier climatic conditions. In other regions, changes in agricultural practice, such as planting crops earlier in the growing season, have been undertaken. The warming of oceans is predicted to have an adverse impact on commercial fisheries by changing the distribution and productivity of various fish species, whereas commercial timber productivity may increase globally with modest warming.

Water resources are likely to be affected substantially by global warming. At current rates of warming, a 10–40 percent increase in average surface runoff and water availability has been projected in higher latitudes and in certain wet tropical regions by the middle of the 21st century, while decreases of similar magnitude are expected in other parts of the tropics and in the dry regions of the subtropics; these decreases would be particularly severe during the summer season. In many cases water availability is already decreasing, or is expected to decrease, in regions that have been stressed for water resources since the turn of the 21st century. Such regions as the African Sahel, western North America, southern Africa, the Middle East, and western Australia continue to be particularly vulnerable. In these regions drought is projected to increase in both magnitude and extent, which would bring about adverse effects on agriculture and livestock raising. Earlier and increased spring runoff is already being observed in western North America and other temperate regions served by glacial or snow-fed streams and rivers. Fresh water currently stored by mountain glaciers and snow in both the tropics and extratropics is also projected to decline, reducing the availability of fresh water for more than 15 percent of the world’s population. It is also likely that warming temperatures, through their impact on biological activity in lakes and rivers, may have an adverse impact on water quality, further diminishing access to safe water sources for drinking or farming. For example, warmer waters favour an increased frequency of nuisance algal blooms, which can pose health risks to humans. Risk-management procedures have already been taken by some countries in response to expected changes in water availability.

Energy availability and use could be affected in at least two distinct ways by rising surface temperatures. In general, warmer conditions would favour an increased demand for air-conditioning; however, this would be at least partially offset by decreased demand for winter heating in temperate regions. Energy generation that requires water either directly, as in hydroelectric power, or indirectly, as in steam turbines used in coal-fired power plants or in cooling towers used in nuclear power plants, may become more difficult in regions with reduced water supplies.

As discussed above, it is expected that human health will be further stressed under global warming conditions by potential increases in the spread of infectious diseases. Declines in overall human health might occur with increases in the levels of malnutrition due to disruptions in food production and by increases in the incidence of afflictions. Such afflictions could include diarrhea, cardiorespiratory illness, and allergic reactions in the midlatitudes of the Northern Hemisphere as a result of rising levels of pollen. Rising heat-related mortality, such as that observed in response to the 2003 European heat wave, might occur in many regions, especially in impoverished areas where air-conditioning is not generally available.

The economic infrastructure of most countries is predicted to be severely strained by global warming and climate change. Poor countries and communities with limited adaptive capacities are likely to be disproportionately affected. Projected increases in the incidence of severe weather, heavy flooding, and wildfires associated with reduced summer ground moisture in many regions will threaten homes, dams, transportation networks, and other facets of human infrastructure. In high-latitude and mountain regions, melting permafrost is likely to lead to ground instability or rock avalanches, further threatening structures in those regions. Rising sea levels and the increased potential for severe tropical cyclones represent a heightened threat to coastal communities throughout the world. It has been estimated that an additional warming of 1–3 °C (1.8–5.4 °F) beyond the late 20th-century global average would threaten millions more people with the risk of annual flooding. People in the densely populated, poor, low-lying regions of Africa, Asia, and tropical islands would be the most vulnerable, given their limited adaptive capacity. In addition, certain regions in developed countries, such as the Low Countries of Europe and the Eastern Seaboard and Gulf Coast of the United States, would also be vulnerable to the effects of rising sea levels. Adaptive steps are already being taken by some governments to reduce the threat of increased coastal vulnerability through the construction of dams and drainage works.

Michael E. Mann

Global warming and public policy

A timeline of important developments in climate change.

Encyclopædia Britannica, Inc./Patrick O'Neill Riley

Since the 19th century, many researchers working across a wide range of academic disciplines have contributed to an enhanced understanding of the atmosphere and the global climate system. Concern among prominent climate scientists about global warming and human-induced (or “anthropogenic”) climate change arose in the mid-20th century, but most scientific and political debate over the issue did not begin until the 1980s. Today, leading climate scientists agree that many of the ongoing changes to the global climate system are largely caused by the release into the atmosphere of greenhouse gases—gases that enhance Earth’s natural greenhouse effect. Most greenhouse gases are released by the burning of fossil fuels for heating, cooking, electrical generation, transportation, and manufacturing, but they are also released as a result of the natural decomposition of organic materials, wildfires, deforestation, and land-clearing activities (see The influences of human activity on climate). Opponents of this view have often stressed the role of natural factors in past climatic variation and have accentuated the scientific uncertainties associated with data on global warming and climate change. Nevertheless, a growing body of scientists has called upon governments, industries, and citizens to reduce their emissions of greenhouse gases.

All countries emit greenhouse gases, but highly industrialized countries and more populous countries emit significantly greater quantities than others. Countries in North America and Europe that were the first to undergo the process of industrialization have been responsible for releasing most greenhouse gases in absolute cumulative terms since the beginning of the Industrial Revolution in the mid-18th century. Today these countries are being joined by large developing countries such as China and India, where rapid industrialization is being accompanied by a growing release of greenhouse gases. The United States, possessing approximately 5 percent of the global population, emitted almost 21 percent of global greenhouse gases in 2000. The same year, the then 25 member states of the European Union (EU)—possessing a combined population of 450 million people—emitted 14 percent of all anthropogenic greenhouse gases. This figure was roughly the same as the fraction released by the 1.2 billion people of China. In 2000 the average American emitted 24.5 tons of greenhouse gases, the average person living in the EU released 10.5 tons, and the average person living in China discharged only 3.9 tons. Although China’s per capita greenhouse gas emissions remained significantly lower than those of the EU and the United States, it was the largest greenhouse gas emitter in 2006 in absolute terms.

The IPCC and the scientific consensus

An important first step in formulating public policy on global warming and climate change is the gathering of relevant scientific and socioeconomic data. In 1988 the Intergovernmental Panel on Climate Change (IPCC) was established by the World Meteorological Organization and the United Nations Environment Programme. The IPCC is mandated to assess and summarize the latest scientific, technical, and socioeconomic data on climate change and to publish its findings in reports presented to international organizations and national governments all over the world. Many thousands of the world’s leading scientists and experts in the areas of global warming and climate change have worked under the IPCC, producing major sets of assessments in 1990, 1995, 2001, 2007, and 2014. Those reports evaluated the scientific basis of global warming and climate change, the major issues relating to the reduction of greenhouse gas emissions, and the process of adjusting to a changing climate.

The first IPCC report, published in 1990, stated that a good deal of data showed that human activity affected the variability of the climate system; nevertheless, the authors of the report could not reach a consensus on the causes and effects of global warming and climate change at that time. The 1995 IPCC report stated that the balance of evidence suggested “a discernible human influence on the climate.” The 2001 IPCC report confirmed earlier findings and presented stronger evidence that most of the warming over the previous 50 years was attributable to human activities. The 2001 report also noted that observed changes in regional climates were beginning to affect many physical and biological systems and that there were indications that social and economic systems were also being affected.

The IPCC’s fourth assessment, issued in 2007, reaffirmed the main conclusions of earlier reports, but the authors also stated—in what was regarded as a conservative judgment—that they were at least 90 percent certain that most of the warming observed over the previous half century had been caused by the release of greenhouse gases through a multitude of human activities. Both the 2001 and 2007 reports stated that during the 20th century there had been an increase in global average surface temperature of 0.6 °C (1.1 °F), within a margin of error of ±0.2 °C (0.4 °F). Whereas the 2001 report forecast an additional rise in average temperature by 1.4 to 5.8 °C (2.5 to 10.4 °F) by 2100, the 2007 report refined this forecast to an increase of 1.8–4.0 °C (3.2–7.2 °F) by the end of the 21st century. Those forecasts were based on examinations of a range of scenarios that characterized future trends in greenhouse gas emissions (see Potential effects of global warming).

The IPCC’s fifth assessment, released in 2014, further refined projected increases in global average temperature and sea level. The 2014 report stated that the interval between 1880 and 2012 saw an increase in global average temperature of approximately 0.85 °C (1.5 °F) and that the interval between 1901 and 2010 saw an increase in global average sea level of about 19–21 cm (7.5–8.3 inches). The report predicted that by the end of the 21st century surface temperatures across the globe would increase between 0.3 and 4.8 °C (0.5 and 8.6 °F), and sea level could rise between 26 and 82 cm (10.2 and 32.3 inches) relative to the 1986–2005 average.

Each IPCC report has helped to build a scientific consensus that elevated concentrations of greenhouse gases in the atmosphere are the major drivers of rising near-surface air temperatures and their associated ongoing climatic changes. In this respect, the current episode of climatic change, which began about the middle of the 20th century, is seen to be fundamentally different from earlier periods in that critical adjustments have been caused by human activities rather than by nonanthropogenic factors. The IPCC’s 2007 assessment projected that future climatic changes could be expected to include continued warming, modifications to precipitation patterns and amounts, elevated sea levels, and “changes in the frequency and intensity of some extreme events.” Such changes would have significant effects on many societies and on ecological systems around the world (see Environmental consequences of global warming).

The UN Framework Convention and the Kyoto Protocol

U.S. Vice Pres. Al Gore delivering the opening speech of the conference in Kyōto, Japan, that led to …

Katsumi Kasahara/AP Images

The reports of the IPCC and the scientific consensus they reflect have provided one of the most prominent bases for the formulation of climate-change policy. On a global scale, climate-change policy is guided by two major treaties: the United Nations Framework Convention on Climate Change (UNFCCC) of 1992 and the associated 1997 Kyoto Protocol to the UNFCCC (named after the city in Japan where it was concluded).

The UNFCCC was negotiated between 1991 and 1992. It was adopted at the United Nations Conference on Environment and Development in Rio de Janeiro in June 1992 and became legally binding in March 1994. In Article 2 the UNFCCC sets the long-term objective of “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.” Article 3 establishes that the world’s countries have “common but differentiated responsibilities,” meaning that all countries share an obligation to act—though industrialized countries have a particular responsibility to take the lead in reducing emissions because of their relative contribution to the problem in the past. To this end, the UNFCCC Annex I lists 41 specific industrialized countries and countries with economies in transition plus the European Community (EC; formally succeeded by the EU in 2009), and Article 4 states that these countries should work to reduce their anthropogenic emissions to 1990 levels. However, no deadline is set for this target. Moreover, the UNFCCC does not assign any specific reduction commitments to non-Annex I countries (that is, developing countries).

The follow-up agreement to the UNFCCC, the Kyoto Protocol, was negotiated between 1995 and 1997 and was adopted in December 1997. The Kyoto Protocol regulates six greenhouse gases released through human activities: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), perfluorocarbons (PFCs), hydrofluorocarbons (HFCs), and sulfur hexafluoride (SF6). Under the Kyoto Protocol, Annex I countries are required to reduce their aggregate emissions of greenhouse gases to 5.2 percent below their 1990 levels by no later than 2012. Toward this goal, the protocol sets individual reduction targets for each Annex I country. These targets require the reduction of greenhouse gases in most countries, but they also allow increased emissions from others. For example, the protocol requires the then 15 member states of the EU and 11 other European countries to reduce their emissions to 8 percent below their 1990 emission levels, whereas Iceland, a country that produces relatively small amounts of greenhouse gases, may increase its emissions as much as 10 percent above its 1990 level. In addition, the Kyoto Protocol requires three countries—New Zealand, Ukraine, and Russia—to freeze their emissions at 1990 levels.

The Kyoto Protocol outlines five requisites by which Annex I parties can choose to meet their 2012 emission targets. First, it requires the development of national policies and measures that lower domestic greenhouse gas emissions. Second, countries may calculate the benefits from domestic carbon sinks that soak up more carbon than they emit (see Carbon cycle feedbacks). Third, countries can participate in schemes that trade emissions with other Annex I countries. Fourth, signatory countries may create joint implementation programs with other Annex I parties and receive credit for such projects that lower emissions. Fifth, countries may receive credit for lowering the emissions in non-Annex I countries through a “clean development” mechanism, such as investing in the building of a new wind power project.

In order to go into effect, the Kyoto Protocol had to be ratified by at least 55 countries, including enough Annex I countries to account for at least 55 percent of that group’s total greenhouse gas emissions. More than 55 countries quickly ratified the protocol, including all the Annex I countries except for Russia, the United States, and Australia. (Russia and Australia ratified the protocol in 2005 and 2007, respectively.) It was not until Russia, under heavy pressure from the EU, ratified the protocol that it became legally binding in February 2005.

The most-developed regional climate-change policy to date has been formulated by the EU, in part to meet its commitments under the Kyoto Protocol. By 2005 the 15 EU countries with a collective commitment under the protocol had reduced their greenhouse gas emissions to 2 percent below their 1990 levels, though it was not certain that they would meet their 8 percent reduction target by 2012. In 2007 the EU set a collective goal for all 27 member states to reduce their greenhouse gas emissions by 20 percent below 1990 levels by the year 2020. As part of its effort to achieve this goal, the EU in 2005 established the world’s first multilateral trading scheme for carbon dioxide emissions, covering more than 11,500 large installations across its member states.

In the United States, by contrast, Pres. George W. Bush and a majority of senators rejected the Kyoto Protocol, citing the lack of compulsory emission reductions for developing countries as a particular grievance. At the same time, U.S. federal policy did not set any mandatory restrictions on greenhouse gas emissions, and U.S. emissions increased over 16 percent between 1990 and 2005. Partly to make up for a lack of direction at the federal level, many individual U.S. states formulated their own action plans to address global warming and climate change and took a host of legal and political initiatives to curb emissions. These initiatives include capping emissions from power plants, establishing renewable portfolio standards that require electricity providers to obtain a minimum percentage of their power from renewable sources, developing vehicle emissions and fuel standards, and adopting “green building” standards.

Future climate-change policy

Each country's adoption status of the Paris Agreement. Convening in Paris in 2015, world leaders and …


Encyclopædia Britannica, Inc./Kenny Chmielewski

Countries differ in opinion on how to proceed with international policy with respect to climate agreements. Long- term goals formulated in Europe and the United States seek to reduce greenhouse gas emissions by up to 80 percent by the middle of the 21st century. Related to these efforts, the EU set a goal of limiting temperature rises to a maximum of 2 °C (3.6 °F) above preindustrial levels. (Many climate scientists and other experts agree that significant economic and ecological damage will result should the global average of near-surface air temperatures rise more than 2 °C [3.6 °F] above preindustrial temperatures in the next century.)

Despite differences in approach, countries launched negotiations on a new treaty, based on an agreement made at the United Nations Climate Change Conference in 2007 in Bali, Indonesia, that would replace the Kyoto Protocol after it expired. At the 17th UNFCCC Conference of the Parties (COP17) held in Durban, South Africa, in 2011, the international community committed to the development of a comprehensive legally binding climate treaty that would replace the Kyoto Protocol by 2015. Such a treaty would require all greenhouse-gas-producing countries—including major carbon emitters not abiding by the Kyoto Protocol (such as China, India, and the United States)—to limit and reduce their emissions of carbon dioxide and other greenhouse gases. This commitment was reaffirmed by the international community at the 18th Conference of the Parties (COP18) held in Doha, Qatar, in 2012. Since the terms of the Kyoto Protocol were set to terminate in 2012, the COP17 and COP18 delegates agreed to extend the Kyoto Protocol to bridge the gap between the original expiration date and the date that the new climate treaty would become legally binding. Consequently, COP18 delegates decided that the Kyoto Protocol would terminate in 2020, the year in which the new climate treaty was expected to come into force. This extension had the added benefit of providing additional time for countries to meet their 2012 emission targets.

Overview of the impact on Earth of an increase in global average temperature.

Encyclopædia Britannica, Inc.

Convening in Paris in 2015, world leaders and other delegates at COP21 signed a global but nonbinding agreement to limit the increase of the world’s average temperature to no more than 2 °C (3.6 °F) above preindustrial levels while at the same time striving to keep this increase to 1.5 °C (2.7 °F) above preindustrial levels. The Paris Agreement was a landmark accord that mandated a progress review every five years and the development of a fund containing $100 billion by 2020—which would be replenished annually—to help developing countries adopt non-greenhouse-gas-producing technologies. The number of parties (signatories) to the convention stood at 197 by 2019, and 185 countries had ratified the agreement. Despite the United States having ratified the agreement in September 2016, the inauguration of Donald J. Trump as president in January 2017 heralded a new era in U.S. climate policy, and on June 1, 2017, Trump signaled his intention to pull the U.S. out of the climate agreement after the formal exiting process concluded, which could happen as early as November 4, 2020.

A growing number of the world’s cities are initiating a multitude of local and subregional efforts to reduce their emissions of greenhouse gases. Many of these municipalities are taking action as members of the International Council for Local Environmental Initiatives and its Cities for Climate Protection program, which outlines principles and steps for taking local-level action. In 2005 the U.S. Conference of Mayors adopted the Climate Protection Agreement, in which cities committed to reduce emissions to 7 percent below 1990 levels by 2012. In addition, many private firms are developing corporate policies to reduce greenhouse gas emissions. One notable example of an effort led by the private sector is the creation of the Chicago Climate Exchange as a means for reducing emissions through a trading process.

As public policies relative to global warming and climate change continue to develop globally, regionally, nationally, and locally, they fall into two major types. The first type, mitigation policy, focuses on different ways to reduce emissions of greenhouse gases. As most emissions come from the burning of fossil fuels for energy and transportation, much of the mitigation policy focuses on switching to less carbon-intensive energy sources (such as wind, solar, and hydropower), improving energy efficiency for vehicles, and supporting the development of new technology. In contrast, the second type, adaptation policy, seeks to improve the ability of various societies to face the challenges of a changing climate. For example, some adaptation policies are devised to encourage groups to change agricultural practices in response to seasonal changes, whereas other policies are designed to prepare cities located in coastal areas for elevated sea levels.

In either case, long-term reductions in greenhouse gas discharges will require the participation of both industrial countries and major developing countries. In particular, the release of greenhouse gases from Chinese and Indian sources is rising quickly in parallel with the rapid industrialization of those countries. In 2006 China overtook the United States as the world’s leading emitter of greenhouse gases in absolute terms (though not in per capita terms), largely because of China’s increased use of coal and other fossil fuels. Indeed, all the world’s countries are faced with the challenge of finding ways to reduce their greenhouse gas emissions while promoting environmentally and socially desirable economic development (known as “sustainable development” or “smart growth”). Whereas some opponents of those calling for corrective action continue to argue that short-term mitigation costs will be too high, a growing number of economists and policy makers argue that it will be less costly, and possibly more profitable, for societies to take early preventive action than to address severe climatic changes in the future. Many of the most harmful effects of a warming climate are likely to take place in developing countries. Combating the harmful effects of global warming in developing countries will be especially difficult, as many of these countries are already struggling and possess a limited capacity to meet challenges from a changing climate.

It is expected that each country will be affected differently by the expanding effort to reduce global greenhouse gas emissions. Countries that are relatively large emitters will face greater reduction demands than will smaller emitters. Similarly, countries experiencing rapid economic growth are expected to face growing demands to control their greenhouse gas emissions as they consume increasing amounts of energy. Differences will also occur across industrial sectors and even between individual companies. For example, producers of oil, coal, and natural gas—which in some cases represent significant portions of national export revenues—may see reduced demand or falling prices for their goods as their clients decrease their use of fossil fuels. In contrast, many producers of new, more climate-friendly technologies and products (such as generators of renewable energy) are likely to see increases in demand.

To address global warming and climate change, societies must find ways to fundamentally change their patterns of energy use in favour of less carbon-intensive energy generation, transportation, and forest and land use management. A growing number of countries have taken on this challenge, and there are many things individuals too can do. For instance, consumers have more options to purchase electricity generated from renewable sources. Additional measures that would reduce personal emissions of greenhouse gases and also conserve energy include the operation of more energy-efficient vehicles, the use of public transportation when available, and the transition to more energy-efficient household products. Individuals might also improve their household insulation, learn to heat and cool their residences more effectively, and purchase and recycle more environmentally sustainable products.

Henrik Selin

Additional Reading

Documentaries

Of the several productions describing the scientific concepts behind the global warming phenomenon, An Inconvenient Truth (2006), produced by LAURIE DAVID, LAWRENCE BENDER, and SCOTT Z. BURNS and narrated by ALBERT GORE, JR., is the most lauded. A feature placing special emphasis on solutions that reduce carbon dioxide production is Global Warming: What You Need to Know (2006), produced by the Discovery Channel, the BBC, and NBC News Productions and narrated by TOM BROKAW. Other noted documentaries on global warming include two originally aired on PBS-TV: What’s Up with the Weather? (2007), produced by JON PALFREMAN; and Global Warming: The Signs and the Science (2005), produced by DAVID KENNARD and narrated by ALANIS MORISSETTE.

Scientific background

An excellent general overview of the factors governing Earth’s climate over all timescales is presented in WILLIAM RUDDIMAN, Earth’s Climate: Past and Future (2000). In addition, RICHARD C.J. SOMERVILLE, The Forgiving Air: Understanding Environmental Change (1996, reissued 1998), is a readable introduction to the science of climate and global environmental change. JOHN HOUGHTON, Global Warming: The Complete Briefing (1997), also offers an accessible treatment of the science of climate change as well as a discussion of the policy and ethical overtones of climate change as an issue confronting society. SPENCER WEART, Discovery of Global Warming (2003), provides a reasoned account of the history of climate change science.

A somewhat more technical introduction to the science of climate change is provided in DAVID ARCHER, Global Warming: Understanding the Forecast (2006). More advanced treatments of the science of global warming and climate change are included in INTERGOVERNMENTAL PANEL ON CLIMATE CHANGE: WORKING GROUP I, Climate Change 2007: The Physical Science Basis: Summary for Policymakers: Fourth Assessment Report (2007); and INTERGOVERNMENTAL PANEL ON CLIMATE CHANGE: WORKING GROUP II, Climate Change 2007: Climate Change Impacts, Adaptations, and Vulnerability: Fourth Assessment Report (2007). Possible solutions to the challenges of global warming and climate change are detailed in INTERGOVERNMENTAL PANEL ON CLIMATE CHANGE: WORKING GROUP III, Climate Change 2007: Mitigation of Climate Change: Fourth Assessment Report (2007).

A number of books present thoughtful discussions of global warming as an environmental and societal issue. Still prescient is an early account provided in BILL MCKIBBEN, The End of Nature (1989). Other good treatments include STEPHEN SCHNEIDER, Laboratory Earth (2001); ALBERT GORE, An Inconvenient Truth (2006); ELIZABETH KOLBERT, Field Notes from a Catastrophe (2006); EUGENE LINDEN, The Winds of Change (2006); TIM FLANNERY, The Weather Makers (2006); and MIKE HULME, Why We Disagree About Climate Change: Understanding Controversy, Inaction and Opportunity (2009). An excellent exposition for younger readers is found in ANDREW REVKIN, The North Pole Was Here (2007).

Public policy background

STEPHEN H. SCHNEIDER, ARMIN ROSENCRANZ, and JOHN O. NILES (eds.), Climate Change Policy: A Survey (2002), is a primer on various aspects of the policy debate that explains alternatives for dealing with climate change. A broad analysis of the climate change debate is imparted in ANDREW E. DESSLER and EDWARD A. PARSON, The Science and Politics of Global Climate Change: A Guide to the Debate (2006). A summary of the quantitative aspects of greenhouse gas emissions designed to assist stakeholders and policy makers is provided in KEVIN A. BAUMERT, TIMOTHY HERZOG, and JONATHAN PERSHING, Navigating the Numbers: Greenhouse Gas Data and International Climate Policy (2005). JOHN T. HOUGHTON, Global Warming: The Complete Briefing, 3rd ed. (2004), offers a perspective on climate change from one of the leading participants in the IPCC process. DANIEL SAREWITZ and ROGER PIELKE, JR., “Breaking the Global-Warming Gridlock,” The Atlantic Monthly, 286(1):55–64 (2000), presents an alternative view on how to make progress on climate policy by focusing on reducing vulnerability to climate impacts.

Thoughtful discussions of the politics underlying the issue of climate change are provided in ROSS GELBSPAN, Boiling Point (2004); MARK LYNAS, High Tide (2004); and ROSS GELBSPAN, The Heat Is On (1998). The social justice implications involved in adapting the human population to changing climatic conditions are presented in W. NEIL ADGER et al. (eds.), Fairness in Adaptation to Climate Change (2006).

Citation (MLA style):

"Global warming." Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 16 Mar. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

population

in human biology, the whole number of inhabitants occupying an area (such as a country or the world) and continually being modified by increases (births and immigrations) and losses (deaths and emigrations). As with any biological population, the size of a human population is limited by the supply of food, the effect of diseases, and other environmental factors. Human populations are further affected by social customs governing reproduction and by the technological developments, especially in medicine and public health, that have reduced mortality and extended the life span.

Graph of the world's estimated human population from 1700 until 2000, with population projections…

Encyclopædia Britannica, Inc.

Few aspects of human societies are as fundamental as the size, composition, and rate of change of their populations. Such factors affect economic prosperity, health, education, family structure, crime patterns, language, culture—indeed, virtually every aspect of human society is touched upon by population trends.

The study of human populations is called demography—a discipline with intellectual origins stretching back to the 18th century, when it was first recognized that human mortality could be examined as a phenomenon with statistical regularities. Demography casts a multidisciplinary net, drawing insights from economics, sociology, statistics, medicine, biology, anthropology, and history. Its chronological sweep is lengthy: limited demographic evidence extends many centuries into the past, and reliable data are available for several hundred years in many regions. The present understanding of demography makes it possible to project (with caution) population changes several decades into the future.

The basic components of population change

At its most basic level, the components of population change are few indeed. A closed population (that is, one in which immigration and emigration do not occur) can change according to the following simple equation: the population (closed) at the end of an interval equals the population at the beginning of the interval, plus births during the interval, minus deaths during the interval. In other words, only addition by births and reduction by deaths can change a closed population.

Populations of nations, regions, continents, islands, or cities, however, are rarely closed in the same way. If the assumption of a closed population is relaxed, in- and out-migration can increase and decrease population size in the same way as do births and deaths; thus, the population (open) at the end of an interval equals the population at the beginning of the interval, plus births during the interval, minus deaths, plus in-migrants, minus out-migrants. Hence the study of demographic change requires knowledge of fertility (births), mortality (deaths), and migration. These, in turn, affect not only population size and growth rates but also the composition of the population in terms of such attributes as sex, age, ethnic or racial composition, and geographic distribution.
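The balancing equation lends itself to a short computational restatement. The following Python sketch is purely illustrative (the function names and all figures are this example's own inventions) and simply encodes the closed and open forms of the equation just described.

```python
# A minimal sketch of the demographic balancing equation
# (illustrative figures only; function names are this example's own).

def closed_population(start, births, deaths):
    """End-of-interval size of a closed population (no migration)."""
    return start + births - deaths

def open_population(start, births, deaths, in_migrants, out_migrants):
    """End-of-interval size of an open population."""
    return start + births - deaths + in_migrants - out_migrants

# A hypothetical region of 1,000,000 people observed over one year:
print(closed_population(1_000_000, 15_000, 9_000))              # 1006000
print(open_population(1_000_000, 15_000, 9_000, 4_000, 2_500))  # 1007500
```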

Fertility

Demographers distinguish between fecundity, the underlying biological potential for reproduction, and fertility, the actual level of achieved reproduction. (Confusingly, these English terms have opposite meanings from their parallel terms in French, where fertilité is the potential and fécondité is the realized; similarly ambiguous usages also prevail in the biological sciences, thereby increasing the chance of misunderstanding.) The difference between biological potential and realized fertility is determined by several intervening factors, including the following: (1) most women do not begin reproducing immediately upon the onset of puberty, which itself does not occur at a fixed age; (2) some women with the potential to reproduce never do so; (3) some women become widowed and do not remarry; (4) various elements of social behaviour restrain fertility; and (5) many human couples choose consciously to restrict their fertility by means of sexual abstinence, contraception, abortion, or sterilization.

The magnitude of the gap between potential and realized fertility can be illustrated by comparing the highest known fertilities with those of typical European and North American women in the late 20th century. A well-studied high-fertility group is the Hutterites of North America, a religious sect that views fertility regulation as sinful and high fertility as a blessing. Hutterite women who married between 1921 and 1930 are known to have averaged 10 children per woman. Meanwhile, women in much of Europe and North America averaged about two children per woman during the 1970s and 1980s—a number 80 percent less than that achieved by the Hutterites. Even the highly fertile populations of developing countries in Africa, Asia, and Latin America produce children at rates far below that of the Hutterites.

The general message from such evidence is clear enough: in much of the world, human fertility is considerably lower than the biological potential. It is strongly constrained by cultural regulations, especially those concerning marriage and sexuality, and by conscious efforts on the part of married couples to limit their childbearing.

Dependable evidence on historical fertility patterns in Europe is available back to the 18th century, and estimates have been made for several earlier centuries. Such data for non-European societies and for earlier human populations are much more fragmentary. The European data indicate that even in the absence of widespread deliberate regulation there were significant variations in fertility among different societies. These differences were heavily affected by socially determined behaviours such as those concerning marriage patterns. Beginning in France and Hungary in the 18th century, a dramatic decline in fertility took shape in the more developed societies of Europe and North America, and in the ensuing two centuries fertility declines of fully 50 percent took place in nearly all of these countries. Since the 1960s fertility has been intentionally diminished in many developing countries, and remarkably rapid reductions have occurred in the most populous, the People’s Republic of China.

There is no dispute as to the fact and magnitudes of such declines, but theoretical explanation of the phenomena has proved elusive. (See below Population theories.)

Biological factors affecting human fertility

Reproduction is a quintessentially biological process, and hence all fertility analyses must consider the effects of biology. Such factors, in rough chronological order, include:

the age of onset of potential fertility (or fecundability in demographic terminology);

the degree of fecundability—i.e., the monthly probability of conceiving in the absence of contraception;

the incidence of spontaneous abortion and stillbirth;

the duration of temporary infecundability following the birth of a child; and

the age of onset of permanent sterility.

The age at which women become fecund apparently declined significantly during the 20th century; as measured by the age of menarche (onset of menstruation), British data suggest a decline from 16–18 years in the mid-19th century to less than 13 years in the late 20th century. This decline is thought to be related to improving standards of nutrition and health. Since the average age of marriage in western Europe has long been far higher than the age of menarche, and since most children are born to married couples, this biological lengthening of the reproductive period is unlikely to have had major effects upon realized fertility in Europe. In settings where early marriage prevails, however, declining age at menarche could increase lifetime fertility.

Fecundability also varies among women past menarche. The monthly probabilities of conception among newlyweds are commonly in the range of 0.15 to 0.25; that is, there is a 15–25-percent chance of conception each month. This fact is understandable when account is taken of the short interval (about two days) within each menstrual cycle during which fertilization can take place. Moreover, there appear to be cycles during which ovulation does not occur. Finally, perhaps one-third or more of fertilized ova fail to implant in the uterus or, even if they do implant, spontaneously abort during the ensuing two weeks, before pregnancy would be recognized. As a result of such factors, women of reproductive age who are not using contraceptive methods can expect to conceive within five to 10 months of becoming sexually active. As is true of all biological phenomena, there is surely a distribution of fecundability around average levels, with some women experiencing conception more readily than others.
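If each month of exposure is treated as an independent trial with a constant probability p of conception (a simplification, not something the article itself asserts), the waiting time to conception follows a geometric distribution. The sketch below uses illustrative values of p and reproduces the five-to-ten-month expectation cited above.

```python
# Waiting time to conception under a simple geometric model
# (assumes independent months at a constant monthly probability p).

def expected_wait_months(p):
    """Mean number of months until conception."""
    return 1 / p

def prob_conceive_within(p, n):
    """Probability of conceiving within n months."""
    return 1 - (1 - p) ** n

for p in (0.15, 0.20, 0.25):
    print(f"p = {p:.2f}: mean wait {expected_wait_months(p):.1f} months; "
          f"P(within 12 months) = {prob_conceive_within(p, 12):.2f}")
```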

Spontaneous abortion of recognized pregnancies and stillbirth also are fairly common, but their incidence is difficult to quantify. Perhaps 20 percent of recognized pregnancies fail spontaneously, most in the earlier months of gestation.

Following the birth of a child, most women experience a period of temporary infecundability, or biological inability to conceive. The length of this period seems to be affected substantially by breast-feeding. In the absence of breast-feeding, the interruption lasts less than two months. With lengthy, frequent breast-feeding it can last one or two years. This effect is thought to be caused by a complex of neural and hormonal factors stimulated by suckling.

A woman’s fecundability typically peaks in her 20s and declines during her 30s; by their early 40s as many as 50 percent of women are affected by their own or their husbands’ sterility. After menopause, essentially all women are sterile. The average age at menopause is in the late 40s, although some women experience it before reaching 40 and others not until nearly 60.

Contraception

Contraceptive practices affect fertility by reducing the probability of conception. Contraceptive methods vary considerably in their theoretical effectiveness and in their actual effectiveness in use (“use-effectiveness”). Modern methods such as oral pills and intrauterine devices (IUDs) have use-effectiveness rates of more than 95 percent. Older methods such as the condom and diaphragm can be more than 90-percent effective when used regularly and correctly, but their average use-effectiveness is lower because of irregular or incorrect use.

The effect upon fertility of contraceptive measures can be dramatic: if fecundability is 0.20 (a 20-percent chance of pregnancy per month of exposure), then a 95-percent effective method will reduce this to 0.01 (a 1-percent chance).
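The arithmetic behind that example is a single multiplication: a method of effectiveness e leaves a fraction (1 - e) of the underlying fecundability. A brief check with the same numbers:

```python
# Residual monthly chance of conception, given method effectiveness.

def residual_fecundability(fecundability, effectiveness):
    return fecundability * (1 - effectiveness)

print(f"{residual_fecundability(0.20, 0.95):.3f}")  # 0.010, a 1-percent chance
```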

Abortion

Induced abortion reduces fertility not by affecting fecundability but by terminating pregnancy. Abortion has long been practiced in human societies and is quite common in some settings. The officially registered fraction of pregnancies terminated by abortion exceeds one-third in some countries, and significant numbers of unregistered abortions probably occur even in countries reporting very low rates.

Sterilization

Complete elimination of fecundability can be brought about by sterilization. The surgical procedures of tubal ligation and vasectomy have become common in diverse nations and cultures. In the United States, for example, voluntary sterilization has become the most prevalent single means of terminating fertility, typically adopted by couples who have achieved their desired family size. In India, sterilization has been encouraged on occasion by various government incentive programs and, for a short period during the 1970s, by quasi-coercive measures.

Mortality

As noted above, the science of demography has its intellectual roots in the realization that human mortality, while consisting of unpredictable individual events, has a statistical regularity when aggregated across a large group. This recognition formed the basis of a wholly new industry—that of life assurance, or insurance. The basis of this industry is the life table, or mortality table, which summarizes the distribution of longevity—observed over a period of years—among members of a population. This statistical device allows the calculation of premiums—the prices to be charged the members of a group of living subscribers with specified characteristics, who by pooling their resources in this statistical sense provide their heirs with financial benefits.

Overall human mortality levels can best be compared by using the life-table measure life expectancy at birth (often abbreviated simply as life expectancy), the number of years of life expected of a newborn baby on the basis of current mortality levels for persons of all ages. Life expectancies of premodern populations, with their poor knowledge of sanitation and health care, may have been as low as 25–30 years. The largest toll of death was that exacted in infancy and childhood: perhaps 20 percent of newborn children died in their first 12 months of life and another 30 percent before they reached five years of age.
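A life table of the kind described above reduces to a short computation. In the sketch below the mortality schedule q is entirely invented (it is not real data), and decedents are credited with half a year of life in their year of death, a common simplifying convention; the result is meant only to show the mechanics.

```python
# A toy life-table calculation of life expectancy at birth.
# q[x] is the (invented) probability of dying between ages x and x + 1.

def life_expectancy_at_birth(q):
    alive = 1.0          # synthetic cohort, normalized to 1
    person_years = 0.0
    for qx in q:
        deaths = alive * qx
        # survivors live the full year; decedents are credited half a year
        person_years += (alive - deaths) + 0.5 * deaths
        alive -= deaths
    return person_years

# Illustrative premodern-style schedule: heavy infant and child mortality,
# moderate adult mortality, and certain death at the end of the table.
q = [0.20, 0.10, 0.05, 0.03] + [0.02] * 46 + [0.10] * 9 + [1.0]  # ages 0-59
print(f"{life_expectancy_at_birth(q):.1f} years")  # roughly 25 years
```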

In the developing countries by the 1980s, average life expectancy lay in the range of 55 to 60 years, with the highest levels in Latin America and the lowest in Africa. In the same period, life expectancy in the developed countries of western Europe and North America approached 75 years, and fewer than 1 percent of newborn children died in their first 12 months.

For reasons that are not well understood, life expectancy of females usually exceeds that of males, and this female advantage has grown as overall life expectancy has increased. In the late 20th century this female advantage was seven years (78 years versus 71 years) in the industrial market economies (comprising western Europe, North America, Japan, Australia, and New Zealand). It was eight years (74 years versus 66 years) in the nonmarket economies of eastern Europe.

The epidemiologic transition

The epidemiologic transition is that process by which the pattern of mortality and disease is transformed from one of high mortality among infants and children and episodic famine and epidemic affecting all age groups to one of degenerative and man-made diseases (such as those attributed to smoking) affecting principally the elderly. It is generally believed that the epidemiologic transitions prior to the 20th century (i.e., those in today’s industrialized countries) were closely associated with rising standards of living, nutrition, and sanitation. In contrast, those occurring in developing countries have been more or less independent of such internal socioeconomic development and more closely tied to organized health care and disease control programs developed and financed internationally. There is no doubt that 20th-century declines in mortality in developing countries have been far more rapid than those that occurred in the 19th century in what are now the industrialized countries.

Infant mortality

Infant mortality is conventionally measured as the number of deaths in the first year of life per 1,000 live births during the same year. Roughly speaking, by this measure worldwide infant mortality approximates 80 per 1,000; that is, about 8 percent of newborn babies die within the first year of life.
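The measure is a simple ratio scaled to 1,000 live births, as the following minimal sketch (with invented counts) shows:

```python
# Infant mortality rate: first-year deaths per 1,000 live births.

def infant_mortality_rate(infant_deaths, live_births):
    return 1_000 * infant_deaths / live_births

print(infant_mortality_rate(8_000, 100_000))  # 80.0 per 1,000, i.e. 8 percent
```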

This global average disguises great differences. In certain countries of Asia and Africa, infant mortality rates exceed 150 and sometimes approach 200 per 1,000 (that is, 15 or 20 percent of children die before reaching the age of one year). Meanwhile, in other countries, such as Japan and Sweden, the rates are well below 10 per 1,000, or 1 percent. Generally, infant mortality is somewhat higher among males than among females.

In developing countries substantial declines in infant mortality have been credited to improved sanitation and nutrition, increased access to modern health care, and improved birth spacing through the use of contraception. In industrialized countries in which infant mortality rates were already low the increased availability of advanced medical technology for newborn—in particular, prematurely born—infants provides a partial explanation.

Infanticide

The deliberate killing of newborn infants has long been practiced in human societies. It seems to have been common in the ancient cultures of Greece, Rome, and China, and it was practiced in Europe until the 19th century. In Europe, infanticide included the practice of “overlaying” (smothering) an infant sharing a bed with its parents and the abandonment of unwanted infants to the custody of foundling hospitals, in which one-third to four-fifths of incumbents failed to survive.

In many societies practicing infanticide, infants were not deemed to be fully human until they underwent a rite of initiation that took place from a few days to several years after birth, and therefore killing before such initiation was socially acceptable. The purposes of infanticide were various: child spacing or fertility control in the absence of effective contraception; elimination of illegitimate, deformed, orphaned, or twin children; or sex preferences.

With the development and spread of the means of effective fertility regulation, infanticide has come to be strongly disapproved in most societies, though it continues to be practiced in some isolated traditional cultures.

Mortality among the elderly

During the 1970s and 1980s in industrialized countries there were unexpectedly large declines in mortality among the elderly, resulting in larger-than-projected numbers of the very old. In the United States, for example, the so-called frail elderly group aged 85 years and older increased nearly fourfold between 1950 and 1980, from 590,000 to 2,461,000. Given the high incidence of health problems among the very old, such increases have important implications for the organization and financing of health care.

Marriage

One of the main factors affecting fertility, and an important contributor to the fertility differences among societies in which conscious fertility control is uncommon, is defined by the patterns of marriage and marital disruption. In many societies in Asia and Africa, for example, marriage occurs soon after the sexual maturation of the woman, around age 17. In contrast, delayed marriage has long been common in Europe, and in some European countries the average age of first marriage approaches 25 years.

In the 20th century dramatic changes have taken place in the patterns of marital dissolution caused by widowhood and divorce. Widowhood has long been common in all societies, but the declines of mortality (as discussed above) have sharply reduced the effects of this source of marital dissolution on fertility. Meanwhile, divorce has been transformed from an uncommon exception to an experience terminating a large proportion (sometimes more than a third) of marriages in some countries. Taken together, these components of marriage patterns can account for the elimination of as little as 20 percent to as much as 50 percent of the potential reproductive years.

Many Western countries have experienced significant increases in the numbers of cohabiting unmarried couples. In the 1970s some 12 percent of all Swedish couples living together aged 16 to 70 were unmarried. When in the United States in 1976 the number of such arrangements approached 1,000,000, the Bureau of the Census formulated a new statistical category—POSSLQ—denoting persons of the opposite sex sharing living quarters. Extramarital fertility as a percentage of overall fertility accordingly has risen in many Western countries, accounting for one in five births in the United States, one in five in Denmark, and one in three in Sweden.

Migration

Since any population that is not closed can be augmented or depleted by in-migration or out-migration, migration patterns must be considered carefully in analyzing population change. The common definition of human migration limits the term to permanent change of residence (conventionally, for at least one year), so as to distinguish it from commuting and other more frequent but temporary movements.

Human migrations have been fundamental to the broad sweep of human history and have themselves changed in basic ways over the epochs. Many of these historical migrations have by no means been the morally uplifting experiences depicted in mythologies of heroic conquerors, explorers, and pioneers; rather they frequently have been characterized by violence, destruction, bondage, mass mortality, and genocide—in other words, by human suffering of profound magnitudes.

Early human migrations

Early humans were almost surely hunters and gatherers who moved continually in search of food supplies. The superior technologies (tools, clothes, language, disciplined cooperation) of these hunting bands allowed them to spread farther and faster than had any other dominant species; humans are thought to have occupied all the continents except Antarctica within a span of about 50,000 years. As the species spread away from the tropical parasites and diseases of its African origins, mortality rates declined and population increased. This increase occurred at microscopically small rates by the standards of the past several centuries, but over thousands of years it resulted in a large absolute growth to a total that could no longer be supported by finding new hunting grounds. There ensued a transition from migratory hunting and gathering to migratory slash-and-burn agriculture. The consequence was the rapid geographical spread of crops, with wheat and barley moving east and west from the Middle East across the whole of Eurasia within only 5,000 years.

About 10,000 years ago a new and more productive way of life, involving sedentary agriculture, became predominant. This allowed greater investment of labour and technology in crop production, resulting in a more substantial and securer food source, but sporadic migrations persisted.

The next pulse of migration, beginning around 4000 to 3000 BCE, was stimulated by the development of seagoing sailing vessels and of pastoral nomadism. The Mediterranean Basin was the centre of the maritime culture, which involved the settlement of offshore islands and led to the development of deep-sea fishing and long-distance trade. Other favoured regions were those of the Indian Ocean and South China Sea. Meanwhile, pastoral nomadism involved biological adaptations both in humans (allowing them to digest milk) and in species of birds and mammals that were domesticated.

Both seafarers and pastoralists were intrinsically migratory. The former were able to colonize previously uninhabited lands or to impose their rule by force over less mobile populations. The pastoralists were able to populate the extensive grassland of the Eurasian Steppe and the African and Middle Eastern savannas, and their superior nutrition and mobility gave them clear military advantages over the sedentary agriculturalists with whom they came into contact. Even as agriculture continued to improve with innovations such as the plow, these mobile elements persisted and provided important networks by which technological innovations could be spread widely and rapidly.

That complex of human organization and behaviour commonly termed Western civilization arose out of such developments. Around 4000 BCE seafaring migrants from the south overwhelmed the local inhabitants of the Tigris–Euphrates floodplain and began to develop a social organization based upon the division of labour into highly skilled occupations, technologies such as irrigation, bronze metallurgy, and wheeled vehicles, and the growth of cities of 20,000–50,000 persons. Political differentiation into ruling classes and ruled masses provided a basis for imposition of taxes and rents that financed the development of professional soldiers and artisans, whose specialized skills far surpassed those of pastoralists and agriculturalists. The military and economic superiority that accompanied such skills allowed advanced communities to expand both by direct conquest and by the adoption of this social form by neighbouring peoples. Thus migration patterns played an important role in creating the early empires and cultures of the ancient world.

By about 2000 BCE such specialized human civilizations occupied much of the then-known world—the Middle East, the eastern Mediterranean, South Asia, and the Far East. Under these circumstances human migration was transformed from unstructured movements across unoccupied territories by nomads and seafarers into quite new forms of interaction among the settled civilizations.

These new forms of human migration produced disorder, suffering, and much mortality. As one population conquered or infiltrated another, the vanquished were usually destroyed, enslaved, or forcibly absorbed. Large numbers of people were captured and transported by slave traders. Constant turmoil accompanied the ebb and flow of populations across the regions of settled agriculture and the Eurasian and African grasslands. Important examples include the Dorian incursions in ancient Greece in the 11th century BCE, the Germanic migrations southward from the Baltic to the Roman Empire in the 4th to 6th century CE, the Norman raids and conquests of Britain between the 8th and 12th centuries CE, and the Bantu migrations in Africa throughout the Christian Era.

Modern mass migrations

Mass migrations over long distances were among the new phenomena produced by the population increase and improved transportation that accompanied the Industrial Revolution. The largest of these was the so-called Great Atlantic Migration from Europe to North America, the first major wave of which began in the late 1840s with mass movements from Ireland and Germany. These were caused by the failure of the potato crop in Ireland and in the lower Rhineland, where millions had become dependent upon this single source of nutrition. These flows eventually subsided, but in the 1880s a second and even larger wave of mass migration developed from eastern and southern Europe, again stimulated in part by agricultural crises and facilitated by improvements in transportation and communication. Between 1880 and 1910 some 17,000,000 Europeans entered the United States; overall, the total amounted to 37,000,000 between 1820 and 1980.

Since World War II equally large long-distance migrations have occurred. In most cases groups from developing countries have moved into the industrialized countries of the West. Some 13,000,000 migrants have become permanent residents of western Europe since the 1960s. More than 10,000,000 permanent immigrants have been admitted legally to the United States since the 1960s, and illegal immigration has almost surely added several millions more.

Forced migrations

Slave migrations and mass expulsions have been part of human history for millennia. The largest slave migrations were probably those compelled by European slave traders operating in Africa from the 16th to the 19th century. During that period perhaps 20,000,000 slaves were consigned to American markets, though substantial numbers died in the appalling conditions of the Atlantic passage.

The largest mass expulsion is probably that imposed by the Nazi government of Germany, which deported 7,000,000–8,000,000 persons, including some 5,000,000 Jews later exterminated in concentration camps. After World War II, 9,000,000–10,000,000 ethnic Germans were more or less forcibly transported into Germany, and perhaps 1,000,000 members of minority groups deemed politically unreliable by the Soviet government were forcibly exiled to Central Asia. Earlier deportations of this type included the movement of 150,000 British convicts to Australia between 1788 and 1867 and the 19th-century exile of 1,000,000 Russians to Siberia.

Forced migrations since World War II have been large indeed. Some 14,000,000 persons fled in one direction or the other at the partition of British India into India and Pakistan. Nearly 10,000,000 left East Pakistan (now Bangladesh) during the fighting in 1971; many of them stayed on in India. An estimated 3,000,000–4,000,000 persons fled from the war in Afghanistan during the early 1980s. More than 1,000,000 refugees have departed Vietnam, Cuba, Israel, and Ethiopia since World War II. Estimates during the 1980s suggested that approximately 10,000,000 refugees had not been resettled and were in need of assistance.

Internal migrations

The largest human migrations today are internal to nation-states; these can be sizable in rapidly increasing populations with large rural-to-urban migratory flows.

Early human movements toward urban areas were devastating in terms of mortality. Cities were loci of intense infection; indeed, many human viral diseases are not propagated unless the population density is far greater than that common under sedentary agriculture or pastoral nomadism. Moreover, cities had to import food and raw materials from the hinterlands, but transport and political disruptions led to erratic patterns of scarcity, famine, and epidemic. The result was that cities until quite recently (the mid-19th century) were demographic sinkholes, incapable of sustaining their own populations.

Urban growth since World War II has been very rapid in much of the world. In developing countries with high overall population growth rates the populations of some cities have been doubling every 10 years or less (see below Population composition).

Natural increase and population growth

Natural increase

Put simply, natural increase is the difference between the numbers of births and deaths in a population; the rate of natural increase is the difference between the birthrate and the death rate. Given the fertility and mortality characteristics of the human species (excluding incidents of catastrophic mortality), the range of possible rates of natural increase is rather narrow. For a nation, it has rarely exceeded 4 percent per year; the highest known rate for a national population—arising from the conjunction of a very high birthrate and a quite low death rate—is that experienced in Kenya during the 1980s, in which the natural increase of the population approximated 4.1 percent per annum. Rates of natural increase in other developing countries generally are lower; these countries averaged about 2.5 percent per annum during the same period. Meanwhile the rates of natural increase in industrialized countries are very low: the highest is approximately 1 percent, most are in the neighbourhood of several tenths of 1 percent, and some are slightly negative (that is, their populations are slowly decreasing).

Population growth

The rate of population growth is the rate of natural increase combined with the effects of migration. Thus a high rate of natural increase can be offset by a large net out-migration, and a low rate of natural increase can be countered by a high level of net in-migration. Generally speaking, however, these migration effects on population growth rates are far smaller than the effects of changes in fertility and mortality.
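Both rates reduce to simple sums, conventionally expressed per 1,000 population per year. The sketch below uses invented figures; the first example deliberately echoes the 4.1 percent Kenyan rate cited above.

```python
# Natural increase and overall growth, in rates per 1,000 per year.

def natural_increase_rate(birth_rate, death_rate):
    return birth_rate - death_rate

def growth_rate(birth_rate, death_rate, net_migration_rate):
    return natural_increase_rate(birth_rate, death_rate) + net_migration_rate

print(natural_increase_rate(50, 9))  # 41 per 1,000 = 4.1 percent per year
print(growth_rate(50, 9, -5))        # 36 per 1,000 once out-migration is counted
```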

Population “momentum”

An important and often misunderstood characteristic of human populations is the tendency of a highly fertile population that has been increasing rapidly in size to continue to do so for decades after the onset of even a substantial decline in fertility. This results from the youthful age structure of such a population, as discussed below. These populations contain large numbers of children who have still to grow into adulthood and the years of reproduction. Thus even a dramatic decline in fertility, which affects only the numbers at age zero, cannot prevent the continuing growth of the number of adults of childbearing age for at least two or three decades.

Eventually, of course, as these large groups pass through the childbearing years to middle and older age, the smaller numbers of children resulting from the fertility decline lead to a moderation in the rate of population growth. But the delays are lengthy, allowing very substantial additional population growth after fertility has declined. This phenomenon gives rise to the term population momentum, which is of great significance to developing countries with rapid population growth and limited natural resources. The nature of population growth means that the metaphor of a “population bomb” used by some lay analysts of population trends in the 1960s was really quite inaccurate. Bombs explode with tremendous force, but such force is rapidly spent. A more appropriate metaphor for rapid population growth is that of a glacier, since a glacier moves at a slow pace but with enormous effects wherever it goes and with a long-term momentum that is unstoppable.
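A deliberately crude projection can make momentum visible. In the sketch below the population is collapsed into three 25-year age groups, and nrr stands for surviving children per adult per generation; every figure is invented, and with such coarse groups a longer run would wobble around its new plateau rather than settle smoothly. The point is only that the total keeps growing after fertility falls to replacement.

```python
# A toy projection in 25-year steps: adults bear nrr children each,
# children age into adulthood, adults into elderhood, and elders die.
# All figures are illustrative.

def project(children, adults, elders, nrr, steps):
    print(f"start: total = {children + adults + elders:,}")
    for step in range(1, steps + 1):
        children, adults, elders = int(nrr * adults), children, adults
        print(f"after {25 * step} years: total = {children + adults + elders:,}")

# A young population (twice as many children as adults) switches abruptly
# to replacement fertility (nrr = 1.0) and still grows for two generations:
project(children=200_000, adults=100_000, elders=50_000, nrr=1.0, steps=2)
# start: 350,000 -> 400,000 after 25 years -> 500,000 after 50 years
```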

Population composition

The most important characteristics of a population—in addition to its size and the rate at which it is expanding or contracting—are the ways in which its members are distributed according to age, sex, ethnic or racial category, and residential status (urban or rural).

Age distribution

Perhaps the most fundamental of these characteristics is the age distribution of a population. Demographers commonly use population pyramids to describe both age and sex distributions of populations. A population pyramid is a bar chart or graph in which the length of each horizontal bar represents the number (or percentage) of persons in an age group; for example, the base of such a chart consists of a bar representing the youngest segment of the population, those persons less than, say, five years old. Each bar is divided into segments corresponding to the numbers (or proportions) of males and females. In most populations the proportion of older persons is much smaller than that of the younger, so the chart narrows toward the top and is more or less triangular, like the cross section of a pyramid; hence the name. Youthful populations are represented by pyramids with a broad base of young children and a narrow apex of older people, while older populations are characterized by more uniform numbers of people in the age categories. Population pyramids reveal markedly different characteristics for three nations: high fertility and rapid population growth (Mexico), low fertility and slow growth (United States), and very low fertility and negative growth (West Germany).
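A population pyramid can be improvised even without a plotting library. The sketch below prints a crude text pyramid from invented counts (in thousands) for a youthful, high-fertility population; each '#' stands for 50,000 people, with males on the left and females on the right.

```python
# A crude text population pyramid (invented counts, in thousands).

groups  = ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29"]
males   = [900, 820, 750, 680, 610, 550]
females = [870, 800, 740, 675, 610, 560]

# Print the oldest group first so the youngest forms the broad base.
for label, m, f in reversed(list(zip(groups, males, females))):
    left = ("#" * round(m / 50)).rjust(20)
    print(f"{left} {label:>5} {'#' * round(f / 50)}")
```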

Contrary to a common belief, the principal factor tending to change the age distribution of a population—and, hence, the general shape of the corresponding pyramid—is not the death or mortality rates, but rather the rate of fertility. A rise or decline in mortality generally affects all age groups in some measure, and hence has only limited effects on the proportion in each age group. A change in fertility, however, affects the number of people in only a single age group—the group of age zero, the newly born. Hence a decline or increase in fertility has a highly concentrated effect at one end of the age distribution and thereby can have a major influence on the overall age structure. This means that youthful age structures correspond to highly fertile populations, typical of developing countries. The older age structures are those of low-fertility populations, such as are common in the industrialized world.

Sex ratio

Factors that favour the production of more male offspring within human populations, which has…

© MinuteEarth

A second important structural aspect of populations is the relative numbers of males and females who compose it. Generally, slightly more males are born than females (a typical ratio would be 105 or 106 males for every 100 females). On the other hand, it is quite common for males to experience higher mortality at virtually all ages after birth. This difference is apparently of biological origin. Exceptions occur in countries such as India, where the mortality of females may be higher than that of males in childhood and at the ages of childbearing because of unequal allocation of resources within the family and the poor quality of maternal health care.

The general rules that more males are born but that females experience lower mortality mean that during childhood males outnumber females of the same age, the difference decreases as the age increases, at some point in the adult life span the numbers of males and females become equal, and as higher ages are reached the number of females becomes disproportionately large. For example, in Europe and North America, among persons more than 70 years of age in 1985, the number of males for every 100 females was only about 61 to 63. (According to the Population Division of the United Nations, the figure for the Soviet Union was only 40, which may be attributable to high male mortality during World War II as well as to possible increases in male mortality during the 1980s.)

The sex ratio within a population has significant implications for marriage patterns. A scarcity of males of a given age depresses the marriage rates of females in the same age group or usually those somewhat younger, and this in turn is likely to reduce their fertility. In many countries, social convention dictates a pattern in which males at marriage are slightly older than their spouses. Thus if there is a dramatic rise in fertility, such as that called the “baby boom” in the period following World War II, a “marriage squeeze” can eventually result; that is, the number of males of the socially correct age for marriage is insufficient for the number of somewhat younger females. This may lead to deferment of marriage of these women, a contraction of the age differential of marrying couples, or both. Similarly, a dramatic fertility decline in such a society is likely to lead eventually to an insufficiency of eligible females for marriage, which may lead to earlier marriage of these women, an expansion of the age gap at marriage, or both. All of these effects are slow to develop; it takes at least 20 to 25 years for even a dramatic fall or rise in fertility to affect marriage patterns in this way.

Ethnic or racial composition

The populations of all nations of the world are more or less diverse with respect to ethnicity or race. (Ethnicity here includes national, cultural, religious, linguistic, or other attributes that are perceived as characteristic of distinct groups.) Such divisions in populations often are regarded as socially important, and statistics by race and ethnic group are therefore commonly available. The categories used for such groups differ from nation to nation, however; for example, a person of Pakistani origin is considered “black” or “coloured” in the United Kingdom but would probably be classified as “white” or “Asian” in the United States. For this reason, international comparisons of ethnic and racial groups are imprecise, and this component of population structure is far less objective as a measure than are the categories of age and sex discussed above.

Geographical distribution and urbanization

It goes without saying that populations are scattered across space. The typical measure of population in relation to land area, that of population density, is often a meaningless one, since different areas vary considerably in their value for agricultural or other human purposes. Moreover, a high population density in an agrarian society, dependent upon agriculture for its sustenance, is likely to be a more severe constraint upon human welfare than would the same density in a highly industrialized society, in which the bulk of national product is not of agricultural origin.

Also of significance in terms of geographical distribution is the division between rural and urban areas. For many decades there has been a nearly universal flow of populations from rural into urban areas. While definitions of urban areas differ from country to country and region to region, the most highly urbanized societies in the world are those of western and northern Europe, Australia, New Zealand, temperate South America, and North America; in all of these the fraction of the population living in urban areas exceeds 75 percent, and it has reached 85 percent in West Germany. An intermediate stage of urbanization exists in the countries making up much of tropical Latin America, where 50 to 65 percent of the population lives in cities. Finally, in many of the developing countries of Asia and Africa the urbanization process has only recently begun, and it is not uncommon to find less than one-third of the population living in urban areas.

The rapidity of urbanization in some countries is quite astonishing. The population of Mexico City in 1960 was around 5,000,000; it was estimated to be about 17,000,000 in 1985 and was projected to reach 26,000,000 to 31,000,000 by 2000. A rule of thumb for much of the developing world is that the rate of growth of urban areas is twice that of the population as a whole. Thus in a population growing 3 percent annually (doubling in about 23.1 years), it is likely that the urban growth rate is at least 6 percent annually (doubling in about 11.6 years).
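The doubling times quoted above follow from the standard rule for continuous exponential growth: a quantity growing at annual rate r doubles in ln 2 / r years, as a two-line check confirms.

```python
# Doubling time under continuous exponential growth.
import math

def doubling_time(annual_rate):
    return math.log(2) / annual_rate

print(f"{doubling_time(0.03):.1f} years at 3 percent")  # 23.1
print(f"{doubling_time(0.06):.1f} years at 6 percent")  # 11.6
```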

Population theories

Population size and change play such a fundamental role in human societies that they have been the subject of theorizing for millennia. Most religious traditions have had something to say on these matters, as did many of the leading figures of the ancient world.

In modern times the subject of demographic change has played a central role in the development of the politico-economic theory of mercantilism; the classical economics of Adam Smith, David Ricardo, and others; the cornucopian images of utopians such as the Marquis de Condorcet; the contrasting views of Malthus as to the natural limits imposed on human population; the sociopolitical theories of Marx, Engels, and their followers; the scientific revolutions engendered by Darwin and his followers; and so on through the pantheon of human thought. Most of these theoretical viewpoints have incorporated demographic components as elements of far grander schemes. Only in a few cases have demographic concepts played a central role, as in the case of the theory of the demographic transition that evolved during the 1930s as a counter to biological explanations of fertility declines that were then current.

Population theories in antiquity

The survival of ancient human societies despite high and unpredictable mortality implies that all societies that persisted were successful in maintaining high fertility. They did so in part by stressing the duties of marriage and procreation and by stigmatizing persons who failed to produce children. Many of these pronatalist motives were incorporated into religious dogma and mythology, as in the biblical injunction to “be fruitful and multiply, and populate the earth,” the Hindu laws of Manu, and the writings of Zoroaster.

The ancient Greeks were interested in population size, and Plato’s Republic incorporated the concept of an optimal population size of 5,040 citizens, among whom fertility was restrained by conscious birth control. The leaders of imperial Rome, however, advocated maximizing population size in the interest of power, and explicitly pronatalist laws were adopted during the reign of Augustus to encourage marriage and fertility.

The traditions of Christianity on this topic are mixed. The pronatalism of the Old Testament and the Roman Empire was embraced with some ambivalence by a church that sanctified celibacy among the priesthood. Later, during the time of Thomas Aquinas, the church moved toward more forceful support of high fertility and opposition to birth control.

Islamic writings on fertility were equally mixed. The 14th-century Arab historian Ibn Khaldūn incorporated demographic factors into his grand theory of the rise and fall of empires. According to his analysis, the decline of an empire’s population necessitates the importation of foreign mercenaries to administer and defend its territories, resulting in rising taxes, political intrigue, and general decadence. The hold of the empire on its hinterland and on its own populace weakens, making it a tempting target for a vigorous challenger. Thus Ibn Khaldūn saw the growth of dense human populations as generally favourable to the maintenance and increase of imperial power.

On the other hand, contraception was acceptable practice in Islam from the days of the Prophet, and extensive attention was given to contraceptive methods by the great physicians of the Islamic world during the Middle Ages. Moreover, under Islamic law the fetus is not considered a human being until its form is distinctly human, and hence early abortion was not forbidden.

Mercantilism and the idea of progress

The wholesale mortality caused by the Black Death during the 14th century contributed in fundamental ways to the development of mercantilism, the school of thought that dominated Europe from the 16th through the 18th century. Mercantilists and the absolute rulers who dominated many states of Europe saw each nation’s population as a form of national wealth: the larger the population, the richer the nation. Large populations provided a larger labour supply, larger markets, and larger (and hence more powerful) armies for defense and for foreign expansion. Moreover, since growth in the number of wage earners tended to depress wages, the wealth of the monarch could be increased by capturing this surplus. In the words of Frederick II the Great of Prussia, “the number of the people makes the wealth of states.” Similar views were held by mercantilists in Germany, France, Italy, and Spain. For the mercantilists, accelerating the growth of the population by encouraging fertility and discouraging emigration was consistent with increasing the power of the nation or the king. Most mercantilists, confident that any number of people would be able to produce their own subsistence, had no worries about harmful effects of population growth. (To this day similar optimism continues to be expressed by diverse schools of thought, from traditional Marxists on the left to “cornucopians” on the right.)


Physiocrats and the origins of demography

By the 18th century the Physiocrats were challenging the intensive state intervention that characterized the mercantilist system, urging instead the policy of laissez-faire. Their targets included the pronatalist strategies of governments; Physiocrats such as François Quesnay argued that human multiplication should not be encouraged to a point beyond that sustainable without widespread poverty. For the Physiocrats, economic surplus was attributable to land, and population growth could therefore not increase wealth. In their analysis of this subject matter the Physiocrats drew upon the techniques developed in England by John Graunt, Edmond Halley, Sir William Petty, and Gregory King, which for the first time made possible the quantitative assessment of population size, the rate of growth, and rates of mortality.

The Physiocrats had broad and important effects upon the thinking of the classical economists such as Adam Smith, especially with respect to the role of free markets unregulated by the state. As a group, however, the classical economists expressed little interest in the issue of population growth, and when they did they tended to see it as an effect rather than as a cause of economic prosperity.

Utopian views

In another 18th-century development, the optimism of mercantilists was incorporated into a very different set of ideas, those of the so-called utopians. Their views, based upon the idea of human progress and perfectibility, led to the conclusion that once perfected, mankind would have no need of coercive institutions such as police, criminal law, property ownership, and the family. In a properly organized society, in their view, progress was consistent with any level of population, since population size was the principal factor determining the amount of resources. Such resources should be held in common by all persons, and if there were any limits on population growth, they would be established automatically by the normal functioning of the perfected human society. Principal proponents of such views included Condorcet, William Godwin, and Daniel Malthus, the father of the Reverend Thomas Robert Malthus. Through his father the younger Malthus was introduced to such ideas relating human welfare to population dynamics, which stimulated him to undertake his own collection and analysis of data; these eventually made him the central figure in the population debates of the 19th and 20th centuries.

Malthus and his successors

In 1798 Malthus published An Essay on the Principle of Population as It Affects the Future Improvement of Society, with Remarks on the Speculations of Mr. Godwin, M. Condorcet, and Other Writers. This hastily written pamphlet had as its principal object the refutation of the views of the utopians. In Malthus’ view, the perfection of a human society free of coercive restraints was a mirage, because the threat of population growth would always be present. In this, Malthus echoed the much earlier arguments of Robert Wallace in his Various Prospects of Mankind, Nature, and Providence (1761), which posited that the perfection of society carried with it the seeds of its own destruction, in the stimulation of population growth such that “the earth would at last be overstocked, and become unable to support its numerous inhabitants.”

Not many copies of Malthus’ essay, his first, were published, but it nonetheless became the subject of discussion and attack. The essay was cryptic and poorly supported by empirical evidence. Malthus’ arguments were easy to misrepresent, and his critics did so routinely.

The criticism had the salutary effect of stimulating Malthus to pursue the data and other evidence lacking in his first essay. He collected information on one country that had plentiful land (the United States) and estimated that its population was doubling in less than 25 years. He attributed the far lower rates of European population growth to “preventive checks,” giving special emphasis to the characteristic late marriage pattern of western Europe, which he called “moral restraint.” The other preventive checks to which he alluded were birth control, abortion, adultery, and homosexuality, all of which as an Anglican minister he considered immoral.

In one sense, Malthus reversed the arguments of the mercantilists that the number of people determined the nation’s resources, adopting the contrary argument of the Physiocrats that the resource base determined the numbers of people. From this he derived an entire theory of society and human history, leading inevitably to a set of provocative prescriptions for public policy. Those societies that ignored the imperative for moral restraint— delayed marriage and celibacy for adults until they were economically able to support their children—would suffer the deplorable “positive checks” of war, famine, and epidemic, the avoidance of which should be every society’s goal. From this humane concern about the sufferings from positive checks arose Malthus’ admonition that poor laws (i.e., legal measures that provided relief to the poor) and charity must not cause their beneficiaries to relax their moral restraint or increase their fertility, lest such humanitarian gestures become perversely counterproductive.

Having stated his position, Malthus was denounced as a reactionary, although he favoured free medical assistance for the poor, universal education at a time that this was a radical idea, and democratic institutions at a time of elitist alarums about the French Revolution. Malthus was accused of blasphemy by the conventionally religious. The strongest denunciations of all came from Marx and his followers (see below). Meanwhile, the ideas of Malthus had important effects upon public policy (such as reforms in the English Poor Laws) and upon the ideas of the classical and neoclassical economists, demographers, and evolutionary biologists, led by Charles Darwin. Moreover, the evidence and analyses produced by Malthus dominated scientific discussion of population during his lifetime; indeed, he was the invited author of the article “Population” for the supplement (1824) to the fourth, fifth, and sixth editions of the Encyclopædia Britannica. Though many of Malthus’ gloomy predictions have proved to be misdirected, that article introduced analytical methods that clearly anticipated demographic techniques developed more than 100 years later.

The latter-day followers of Malthusian analysis deviated significantly from the prescriptions offered by Malthus. While these “neo-Malthusians” accepted Malthus’ core propositions regarding the links between unrestrained fertility and poverty, they rejected his advocacy of delayed marriage and his opposition to birth control. Moreover, leading neo-Malthusians such as Charles Bradlaugh and Annie Besant could hardly be described as reactionary defenders of the established church and social order. To the contrary, they were political and religious radicals who saw the extension of knowledge of birth control to the lower classes as an important instrument favouring social equality. Their efforts were opposed by the full force of the establishment, and both spent considerable time on trial and in jail for their efforts to publish materials—condemned as obscene—about contraception.

Marx, Lenin, and their followers

While both Karl Marx and Malthus accepted many of the views of the classical economists, Marx was harshly and implacably critical of Malthus and his ideas. The vehemence of the assault was remarkable. Marx reviled Malthus as a “miserable parson” guilty of spreading a “vile and infamous doctrine, this repulsive blasphemy against man and nature.” For Marx, only under capitalism does Malthus’ dilemma of resource limits arise. Though differing in many respects from the utopians who had provoked Malthus’ rejoinder, Marx shared with them the view that any number of people could be supported by a properly organized society. Under the socialism favoured by Marx, the surplus product of labour, previously appropriated by the capitalists, would be returned to its rightful owners, the workers, thereby eliminating the cause of poverty. Thus Malthus and Marx shared a strong concern about the plight of the poor, but they differed sharply as to how it should be improved. For Malthus the solution was individual responsibility as to marriage and childbearing; for Marx the solution was a revolutionary assault upon the organization of society, leading to a collective structure called socialism.

The strident nature of Marx’s attack upon Malthus’ ideas may have arisen from his realization that they constituted a potentially fatal critique of his own analysis. “If [Malthus’] theory of population is correct,” Marx wrote in 1875 in his Critique of the Gotha Programme (published by Engels in 1891), “then I cannot abolish this [iron law of wages] even if I abolish wage-labor a hundred times, because this law is not only paramount over the system of wage-labor but also over every social system.”

The anti-Malthusian views of Marx were continued and extended by Marxians who followed him. For example, although in 1920 Lenin legalized abortion in the revolutionary Soviet Union as the right of every woman “to control her own body,” he opposed the practice of contraception or abortion for purposes of regulating population growth. Lenin’s successor, Joseph Stalin, adopted a pronatalist argument verging on the mercantilist, in which population growth was seen as a stimulant to economic progress. As the threat of war intensified in Europe in the 1930s, Stalin promulgated coercive measures to increase Soviet population growth, including the banning of abortion despite its status as a woman’s basic right. Although contraception is now accepted and practiced widely in most Marxist-Leninist states, some traditional ideologists continue to characterize its encouragement in Third-World countries as shabby Malthusianism.

The Darwinian tradition

Charles Darwin, whose scientific insights revolutionized 19th-century biology, acknowledged an important intellectual debt to Malthus in the development of his theory of natural selection. Darwin himself was not much involved in debates about human populations, but many who followed in his name as “social Darwinists” and “eugenicists” expressed a passionate if narrowly defined interest in the subject.

In Darwinian theory the engine of evolution is differential reproduction of different genetic stocks. The concern of many social Darwinists and eugenicists was that fertility among those they considered the superior human stocks was far lower than among the poorer—and, in their view, biologically inferior—groups, resulting in a gradual but inexorable decline in the quality of the overall population. While some attributed this lower fertility to deliberate efforts of people who needed to be informed of the dysgenic effects of their behaviour, others saw the fertility decline itself as evidence of biological deterioration of the superior stocks. Such simplistic biological explanations attracted attention to the socioeconomic and cultural factors that might explain the phenomenon and contributed to the development of the theory of the demographic transition.

Theory of the demographic transition

The classic explanation of European fertility declines arose in the period following World War I and came to be known as demographic transition theory. (Formally, transition theory is a historical generalization and not truly a scientific theory offering predictive and testable hypotheses.) The theory arose in part as a reaction to crude biological explanations of fertility declines; it rationalized them in solely socioeconomic terms, as consequences of widespread desire for fewer children caused by industrialization, urbanization, increased literacy, and declining infant mortality.

The factory system and urbanization led to a diminution in the role of the family in industrial production and a reduction of the economic value of children. Meanwhile, the costs of raising children rose, especially in urban settings, and universal primary education postponed their entry into the work force. Finally, the lessening of infant mortality reduced the number of births needed to achieve a given family size. In some versions of transition theory, a fertility decline is triggered when one or more of these socioeconomic factors reach certain threshold values.

Until the 1970s transition theory was widely accepted as an explanation of European fertility declines, although conclusions based on it had never been tested empirically. More recently careful research on the European historical experience has forced reappraisal and refinement of demographic transition theory. In particular, distinctions based upon cultural attributes such as language and religion, coupled with the spread of ideas such as those of the nuclear family and the social acceptability of deliberate fertility control, appear to have played more important roles than were recognized by transition theorists.

Trends in world population

Before considering modern population trends separately for developing and industrialized countries, it is useful to present an overview of older trends. It is generally agreed that only 5,000,000–10,000,000 humans (i.e., one one-thousandth of the present world population) were supportable before the agricultural revolution of about 10,000 years ago. By the beginning of the Christian era, 8,000 years later, the human population approximated 300,000,000, and there was apparently little increase in the ensuing millennium up to the year 1000 CE. Subsequent population growth was slow and fitful, especially given the plague epidemics and other catastrophes of the Middle Ages. By 1750, conventionally the beginning of the Industrial Revolution in Britain, world population may have been as high as 800,000,000. This means that in the 750 years from 1000 to 1750, the annual population growth rate averaged only about one-tenth of 1 percent.
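That one-tenth-of-1-percent figure can be checked against the population estimates just given. A sketch assuming continuous compounding, which is adequate at rates this low:

import math

# Average annual growth rate implied by growth from P0 to P1 over t years,
# assuming continuous compounding: r = ln(P1 / P0) / t.
P0, P1, t = 300_000_000, 800_000_000, 750   # roughly 1000 to 1750 CE
r = math.log(P1 / P0) / t
print(f"{100 * r:.2f} percent per year")    # about 0.13 percent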

The reasons for such slow growth are well known. In the absence of what is now considered basic knowledge of sanitation and health (the role of bacteria in disease, for example, was unknown until the 19th century), mortality rates were very high, especially for infants and children. Only about half of newborn babies survived to the age of five years. Fertility was also very high, as it had to be to sustain the existence of any population under such conditions of mortality. Modest population growth might occur for a time in these circumstances, but recurring famines, epidemics, and wars kept long-term growth close to zero.

From 1750 onward population growth accelerated. In some measure this was a consequence of rising standards of living, coupled with improved transport and communication, which mitigated the effects of localized crop failures that previously would have resulted in catastrophic mortality. Occasional famines did occur, however, and it was not until the 19th century that a sustained decline in mortality took place, stimulated by the improving economic conditions of the Industrial Revolution and the growing understanding of the need for sanitation and public health measures.

The world population, which did not reach its first 1,000,000,000 until about 1800, added another 1,000,000,000 persons by 1930. (To anticipate further discussion below, the third was added by 1960, the fourth by 1974, and the fifth before 1990.) The most rapid growth in the 19th century occurred in Europe and North America, which experienced gradual but eventually dramatic declines in mortality. Meanwhile, mortality and fertility remained high in Asia, Africa, and Latin America.

Beginning in the 1930s and accelerating rapidly after World War II, mortality went into decline in much of Asia and Latin America, giving rise to a new spurt of population growth that reached rates far higher than any previously experienced in Europe. The rapidity of this growth, which some described as the “population explosion,” was due to the sharpness of the falls in mortality, which in turn were the result of improvements in public health, sanitation, and nutrition that were mostly imported from the developed countries. The external origins and the speed of the declines in mortality meant that there was little chance that they would be accompanied by the onset of a decline in fertility. In addition, the marriage patterns of Asia and Latin America were (and continue to be) quite different from those in Europe; marriage in Asia and Latin America is early and nearly universal, while that in Europe is usually late and significant percentages of people never marry.

These high growth rates occurred in populations already of very large size, meaning that global population growth became very rapid both in absolute and in relative terms. The peak rate of increase was reached in the early 1960s, when each year the world population grew by about 2 percent, or about 68,000,000 people. Since that time both mortality and fertility rates have decreased, and the annual growth rate has fallen moderately, to about 1.7 percent. But even this lower rate, because it applies to a larger population base, means that the number of people added each year has risen from about 68,000,000 to 80,000,000.
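The arithmetic behind the rising increment is simply rate times base. The sketch below recovers the base populations implied by the article’s own figures; the rounded results, about 3.4 billion and 4.7 billion, are consistent with the early 1960s and the late 1980s:

# The annual increment equals the growth rate times the base population,
# so the base sizes implied by the figures in the text can be recovered.
for rate, added in [(0.02, 68_000_000), (0.017, 80_000_000)]:
    print(f"at {100 * rate:.1f} percent: base of {added / rate / 1e9:.1f} billion")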

The developing countries since 1950

After World War II there was a rapid decline in mortality in much of the developing world. In part this resulted from wartime efforts to maintain the health of armed forces from industrialized countries fighting in tropical areas. Since all people and governments welcome proven techniques to reduce the incidence of disease and death, these efforts were readily accepted in much of the developing world, but they were not accompanied by the kinds of social and cultural changes that had occurred earlier and had led to fertility declines in industrialized countries.

The reduction in mortality, unaccompanied by a reduction in fertility, had a simple and predictable outcome: accelerating population growth. By 1960 many developing countries had rates of increase as high as 3 percent a year, exceeding by two- or threefold the highest rates ever experienced by European populations. Since a population increasing at this rate will double in only 23 years, the populations of such countries expanded dramatically. In the 25 years between 1950 and 1975, the population of Mexico increased from 27,000,000 to 60,000,000; Iran from 14,000,000 to 33,000,000; Brazil from 53,000,000 to 108,000,000; and China from 554,000,000 to 933,000,000.

The greatest population growth rates were reached in Latin America and in Asia during the mid- to late 1960s. Since then, these regions have experienced variable but sometimes substantial fertility declines along with continuing mortality declines, resulting in usually moderate and occasionally large declines in population growth. The most dramatic declines have been those of the People’s Republic of China, where the growth rate was estimated to have declined from well over 2 percent per year in the 1960s to about half that in the 1980s, following official adoption of a concerted policy to delay marriage and limit childbearing within marriage. The predominance of the Chinese population in East Asia means that this region has experienced the most dramatic declines in population growth of any of the developing regions.

Over the same period population growth rates have declined only modestly—and in some cases have actually risen—in other developing regions. In South Asia the rate has declined only from 2.4 to 2.0 percent; in Latin America, from about 2.7 to about 2.3 percent. Meanwhile, in Africa population growth has accelerated from 2.6 percent to more than 3 percent over the same period, following belated significant declines in mortality not accompanied by similar reductions in fertility.

The industrialized countries since 1950

For many industrialized countries, the period after World War II was marked by a “baby boom.” One group of four countries in particular—the United States, Canada, Australia, and New Zealand—experienced sustained and substantial rises in fertility from the depressed levels of the prewar period. In the United States, for example, fertility rose by two-thirds, reaching levels in the 1950s not seen since 1910.


A second group of industrialized countries, including most of western Europe and some eastern European countries (notably Czechoslovakia and East Germany), experienced what might be termed “baby boomlets.” For a few years after the war, fertility increased as a result of marriages and births deferred during wartime. These increases were modest and relatively short-lived, however, when compared with those of the true baby-boom countries mentioned above. In many of these European countries fertility had been very low in the 1930s; their postwar baby boomlets appeared as three- to four-year “spikes” in the graph of their fertility rates, followed by two full decades of stable fertility levels. Beginning in the mid-1960s, fertility levels in these countries began to move lower again and, in many cases, fell to levels comparable to or lower than those of the 1930s.

A third group of industrialized countries, consisting of most of eastern Europe along with Japan, showed quite different fertility patterns. Most did not register low fertility in the 1930s but underwent substantial declines in the 1950s after a short-lived baby boomlet. In many of these countries the decline persisted into the 1960s, but in some it was reversed in response to governmental incentives.

Overview of the characteristics of the millennial generation—those born between 1981 and 1997—in the …

© CCTV America

By the 1980s the fertility levels in most industrialized countries were very low, at or below those needed to maintain stable populations. There are two reasons for this phenomenon: the postponement of marriage and childbearing by many younger women who entered the labour force, and a reduction in the numbers of children born to married women.

Population projections

Demographic change is inherently a long-term phenomenon. Unlike populations of insects, human populations have rarely been subject to “explosion” or “collapse” in numbers. Moreover, the powerful long-term momentum that is built into the human age structure means that the effects of fertility changes become apparent only in the far future. For these and other reasons, it is by now conventional practice to employ the technology of population projection as a means of better understanding the implications of trends.

Population projections represent simply the playing out into the future of a set of assumptions about future fertility, mortality, and migration rates. It cannot be stated too strongly that such projections are not predictions, though they are misinterpreted as such frequently enough. A projection is a “what-if” exercise based on explicit assumptions that may or may not themselves be correct. As long as the arithmetic of a projection is done correctly, its utility is determined by the plausibility of its central assumptions. If the assumptions embody plausible future trends, then the projection’s outputs may be plausible and useful. If the assumptions are implausible, then so is the projection. Because the course of demographic trends is hard to anticipate very far into the future, most demographers calculate a set of alternative projections that, taken together, are expected to define a range of plausible futures, rather than to predict or forecast any single future. Because demographic trends sometimes change in unexpected ways, it is important that all demographic projections be updated on a regular basis to incorporate new trends and newly developed data.
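A deliberately crude illustration of this “what-if” character appears below. The base population and the three growth-rate variants are invented for the example, and genuine projections work cohort by cohort with separate fertility, mortality, and migration assumptions rather than a single rate:

# Constant-rate variants played forward: a toy stand-in for the low,
# medium, and high variants of a real projection exercise.
def project(population, annual_rate, years):
    for _ in range(years):
        population *= 1 + annual_rate
    return population

base = 5.0e9   # illustrative starting population, not an official figure
for label, rate in [("low", 0.012), ("medium", 0.017), ("high", 0.022)]:
    print(f"{label}: {project(base, rate, 25) / 1e9:.2f} billion after 25 years")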

Projected population of the world, less developed countries, and more developed countries (high,…

Encyclopædia Britannica, Inc.

A standard set of projections for the world and for its constituent countries is prepared every two years by the Population Division of the United Nations. These projections include a low, medium, and high variant for each country and region.


Citation (MLA style):

"Population." Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 24 Mar. 2020. packs-preview.eb.com. Accessed 10 Aug. 2021.



solid-waste management

the collecting, treating, and disposing of solid material that is discarded because it has served its purpose or is no longer useful. Improper disposal of municipal solid waste can create unsanitary conditions, and these conditions in turn can lead to pollution of the environment and to outbreaks of vector-borne disease—that is, diseases spread by rodents and insects. The tasks of solid-waste management present complex technical challenges. They also pose a wide variety of administrative, economic, and social problems that must be managed and solved.

Bulldozers working on a sanitary landfill.

© SergeyZavalnyuk—iStock/Getty Images

Historical background

Early waste disposal

In ancient cities, wastes were thrown onto unpaved streets and roadways, where they were left to accumulate. It was not until 320 BCE in Athens that the first known law forbidding this practice was established. At that time a system for waste removal began to evolve in Greece and in the Greek-dominated cities of the eastern Mediterranean. In ancient Rome, property owners were responsible for cleaning the streets fronting their property. But organized waste collection was associated only with state-sponsored events such as parades. Disposal methods were very crude, involving open pits located just outside the city walls. As populations increased, efforts were made to transport waste farther out from the cities.

After the fall of Rome, waste collection and municipal sanitation began a decline that lasted throughout the Middle Ages. Near the end of the 14th century, scavengers were given the task of carting waste to dumps outside city walls. But this was not the case in smaller towns, where most people still threw waste into the streets. It was not until 1714 that every city in England was required to have an official scavenger. Toward the end of the 18th century in America, municipal collection of garbage was begun in Boston, New York City, and Philadelphia. Waste disposal methods were still very crude, however. Garbage collected in Philadelphia, for example, was simply dumped into the Delaware River downstream from the city.

Developments in waste management

A technological approach to solid-waste management began to develop in the latter part of the 19th century. Watertight garbage cans were first introduced in the United States, and sturdier vehicles were used to collect and transport wastes. A significant development in solid-waste treatment and disposal practices was marked by the construction of the first refuse incinerator in England in 1874. By the beginning of the 20th century, 15 percent of major American cities were incinerating solid waste. Even then, however, most of the largest cities were still using primitive disposal methods such as open dumping on land or in water.

Technological advances continued during the first half of the 20th century, including the development of garbage grinders, compaction trucks, and pneumatic collection systems. By mid-century, however, it had become evident that open dumping and improper incineration of solid waste were causing problems of pollution and jeopardizing public health. As a result, sanitary landfills were developed to replace the practice of open dumping and to reduce the reliance on waste incineration. In many countries waste was divided into two categories, hazardous and nonhazardous, and separate regulations were developed for their disposal. Landfills were designed and operated in a manner that minimized risks to public health and the environment. New refuse incinerators were designed to recover heat energy from the waste and were provided with extensive air pollution control devices to satisfy stringent standards of air quality. Modern solid-waste management plants in most developed countries now emphasize the practice of recycling and waste reduction at the source rather than incineration and land disposal.

Solid-waste characteristics

Composition and properties

The sources of solid waste include residential, commercial, institutional, and industrial activities. Certain types of wastes that cause immediate danger to exposed individuals or environments are classified as hazardous; these are discussed in the article hazardous-waste management. All nonhazardous solid waste from a community that requires collection and transport to a processing or disposal site is called refuse or municipal solid waste (MSW). Refuse includes garbage and rubbish. Garbage is mostly decomposable food waste; rubbish is mostly dry material such as glass, paper, cloth, or wood. Garbage is highly putrescible or decomposable, whereas rubbish is not. Trash is rubbish that includes bulky items such as old refrigerators, couches, or large tree stumps. Trash requires special collection and handling.

Construction and demolition (C&D) waste (or debris) is a significant component of total solid waste quantities (about 20 percent in the United States), although it is not considered to be part of the MSW stream. However, because C&D waste is inert and nonhazardous, it is usually disposed of in municipal sanitary landfills (see below).


Electronic waste in a garbage dump.

© Clarence Alford/Fotolia

Another type of solid waste, perhaps the fastest-growing component in many developed countries, is electronic waste, or e-waste, which includes discarded computer equipment, televisions, telephones, and a variety of other electronic devices. Concern over this type of waste is escalating. Lead, mercury, and cadmium are among the materials of concern in electronic devices, and governmental policies may be required to regulate their recycling and disposal.

Solid-waste characteristics vary considerably among communities and nations. American refuse is usually lighter, for example, than European or Japanese refuse. In the United States paper and paperboard products make up close to 40 percent of the total weight of MSW; food waste accounts for less than 10 percent. The rest is a mixture of yard trimmings, wood, glass, metal, plastic, leather, cloth, and other miscellaneous materials. In a loose or uncompacted state, MSW of this type weighs approximately 120 kg per cubic metre (200 pounds per cubic yard). These figures vary with geographic location, economic conditions, season of the year, and many other factors. Waste characteristics from each community must be studied carefully before any treatment or disposal facility is designed and built.

Generation and storage

Rates of solid-waste generation vary widely. In the United States, for example, municipal refuse is generated at an average rate of approximately 2 kg (4.5 pounds) per person per day. Japan generates roughly half this amount, yet in Canada the rate is 2.7 kg (almost 6 pounds) per person per day. In some developing countries the average rate can be lower than 0.5 kg (1 pound) per person per day. These data include refuse from commercial, institutional, and industrial as well as residential sources. The actual rates of refuse generation must be carefully determined when a community plans a solid-waste management project.

Most communities require household refuse to be stored in durable, easily cleaned containers with tight-fitting covers in order to minimize rodent or insect infestation and offensive odours. Galvanized metal or plastic containers of about 115-litre (30-gallon) capacity are commonly used, although some communities employ larger containers that can be mechanically lifted and emptied into collection trucks. Plastic bags are frequently used as liners or as disposable containers for curbside collection. Where large quantities of refuse are generated—such as at shopping centres, hotels, or apartment buildings—dumpsters may be used for temporary storage until the waste is collected. Some office and commercial buildings use on-site compactors to reduce the waste volume.


Solid-waste collection

Collecting and transporting

Proper solid-waste collection is important for the protection of public health, safety, and environmental quality. It is a labour-intensive activity, accounting for approximately three-quarters of the total cost of solid-waste management. Public employees are often assigned to the task, but sometimes it is more economical for private companies to do the work under contract to the municipality or for private collectors to be paid by individual home owners. A driver and one or two loaders serve each collection vehicle. These are typically trucks of the enclosed, compacting type, with capacities up to 30 cubic metres (40 cubic yards). Loading can be done from the front, rear, or side. Compaction reduces the volume of refuse in the truck to less than half of its loose volume.
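The figures above permit a rough sizing exercise. In the sketch below, the generation rate, loose density, truck volume, and compaction factor restate values given in this article, while the population served is an assumed example:

# Rough sizing of a collection workload from per capita figures.
population = 50_000
kg_per_person_per_day = 2.0   # roughly the U.S. generation rate cited above
loose_density = 120           # kg per cubic metre, uncompacted refuse
truck_volume = 30             # cubic metres, enclosed compactor truck
compaction = 0.5              # compaction at least halves the loose volume

daily_mass = population * kg_per_person_per_day          # kg per day
compacted_volume = daily_mass / loose_density * compaction
print(f"{daily_mass / 1000:.0f} tonnes/day, about "
      f"{compacted_volume / truck_volume:.0f} truckloads/day")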

The task of selecting an optimal collection route is a complex problem, especially for large and densely populated cities. An optimal route is one that results in the most efficient use of labour and equipment, and selecting such a route requires the application of computer analyses that account for all the many design variables in a large and complex network. Variables include frequency of collection, haulage distance, type of service, and climate. Collection of refuse in rural areas can present a special problem, since the population densities are low, leading to high unit costs.
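One simple heuristic sometimes used to seed such computer analyses is nearest-neighbour ordering: always drive to the closest unserved stop. The toy sketch below conveys only the flavour of the problem; real route optimization must also weigh the variables listed above:

import math

# Nearest-neighbour sequencing of collection stops (coordinates are
# arbitrary illustrative points on a plane).
def order_stops(stops, depot=(0.0, 0.0)):
    remaining, path, here = list(stops), [], depot
    while remaining:
        nearest = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nearest)
        path.append(nearest)
        here = nearest
    return path

print(order_stops([(2, 3), (5, 1), (1, 1), (4, 4)]))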

Refuse collection usually occurs at least once per week because of the rapid decomposition of food waste. The amount of garbage in the refuse of an individual home can be reduced by garbage grinders, or garbage disposals. Ground garbage puts an extra load on sewerage systems, but this can usually be accommodated. Many communities now conduct source separation and recycling programs, in which homeowners and businesses separate recyclable materials from garbage and place them in separate containers for collection. In addition, some communities have drop-off centres where residents can bring recyclables.

Transfer stations

If the final destination of the refuse is not near the community in which it is generated, one or more transfer stations may be necessary. A transfer station is a central facility where refuse from many collection vehicles is combined into a larger vehicle, such as a tractor-trailer unit. Open-top trailers are designed to carry about 76 cubic metres (100 cubic yards) of uncompacted waste to a regional processing or disposal location. Closed compactor-type trailers are also available, but they must be equipped with ejector mechanisms. In a direct discharge type of station, several collection trucks empty directly into the transport vehicle. In a storage discharge type of station, refuse is first emptied into a storage pit or onto a platform, and then machinery is used to hoist or push the solid waste into the transport vehicle. Large transfer stations can handle more than 500 tons of refuse per day.
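The trailer volume and daily tonnage just cited imply a trip count, sketched below; the loose-density figure is the roughly 120 kg per cubic metre quoted earlier, and tons are treated as metric for simplicity:

# Trailer trips implied by a 500-ton/day station and 76 m3 trailers.
tons_per_day = 500
trailer_volume = 76           # cubic metres, open-top trailer
loose_density = 120           # kg per cubic metre, uncompacted refuse

tonnes_per_trailer = trailer_volume * loose_density / 1000   # about 9 tonnes
print(f"about {tons_per_day / tonnes_per_trailer:.0f} trailer trips per day")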

Solid-waste treatment and disposal

Once collected, municipal solid waste may be treated in order to reduce the total volume and weight of material that requires final disposal. Treatment changes the form of the waste and makes it easier to handle. It can also serve to recover certain materials, as well as heat energy, for recycling or reuse.


Incineration

Furnace operation

Burning is a very effective method of reducing the volume and weight of solid waste, though it is a source of greenhouse gas emissions. In modern incinerators the waste is burned inside a properly designed furnace under very carefully controlled conditions. The combustible portion of the waste combines with oxygen, releasing mostly carbon dioxide, water vapour, and heat. Incineration can reduce the volume of uncompacted waste by more than 90 percent, leaving an inert residue of ash, glass, metal, and other solid materials called bottom ash. The gaseous by-products of incomplete combustion, along with finely divided particulate material called fly ash, are carried along in the incinerator airstream. Fly ash includes cinders, dust, and soot. In order to remove fly ash and gaseous by-products before they are exhausted into the atmosphere, modern incinerators must be equipped with extensive emission control devices. Such devices include fabric baghouse filters, acid gas scrubbers, and electrostatic precipitators. (See also air pollution control.) Bottom ash and fly ash are usually combined and disposed of in a landfill. If the ash is found to contain toxic metals, it must be managed as a hazardous waste.

Municipal solid-waste incinerators are designed to receive and burn a continuous supply of refuse. A deep refuse storage pit, or tipping area, provides enough space for about one day of waste storage. The refuse is lifted from the pit by a crane equipped with a bucket or grapple device. It is then deposited into a hopper and chute above the furnace and released onto a charging grate or stoker. The grate shakes and moves waste through the furnace, allowing air to circulate around the burning material. Modern incinerators are usually built with a rectangular furnace, although rotary kiln furnaces and vertical circular furnaces are available. Furnaces are constructed of refractory bricks that can withstand the high combustion temperatures.

Combustion in a furnace occurs in two stages: primary and secondary. In primary combustion, moisture is driven off, and the waste is ignited and volatilized. In secondary combustion, the remaining unburned gases and particulates are oxidized, eliminating odours and reducing the amount of fly ash in the exhaust. When the refuse is very moist, auxiliary gas or fuel oil is sometimes burned to start the primary combustion.

In order to provide enough oxygen for both primary and secondary combustion, air must be thoroughly mixed with the burning refuse. Air is supplied from openings beneath the grates or is admitted to the area above. The relative amounts of this underfire air and overfire air must be determined by the plant operator to achieve good combustion efficiency. A continuous flow of air can be maintained by a natural draft in a tall chimney or by mechanical forced-draft fans.

Energy recovery

The energy value of refuse can be as much as one-third that of coal, depending on the paper content, and the heat given off during incineration can be recovered by the use of a refractory-lined furnace coupled to a boiler. Boilers convert the heat of combustion into steam or hot water, thus allowing the energy content of the refuse to be recycled. Incinerators that recycle heat energy in this way are called waste-to-energy plants. Instead of a separate furnace and boiler, a water-tube wall furnace may also be used for energy recovery. Such a furnace is lined with vertical steel tubes spaced closely enough to form continuous sections of wall. The walls are insulated on the outside in order to reduce heat loss. Water circulating through the tubes absorbs heat to produce steam, and it also helps to control combustion temperatures without the need for excessive air, thus lowering air pollution control costs.
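The “one-third of coal” rule of thumb permits a back-of-envelope estimate of recoverable energy. In the sketch below, the coal heating value (about 24 MJ/kg), the boiler efficiency, and the plant throughput are all assumed figures:

# Rough daily energy recovery for a waste-to-energy plant.
refuse_heating_value = 24.0 / 3        # MJ/kg, one-third that of coal (assumed)
boiler_efficiency = 0.70               # fraction captured as steam (assumed)
throughput = 500_000                   # kg of refuse burned per day (assumed)

steam_energy = throughput * refuse_heating_value * boiler_efficiency  # MJ/day
print(f"about {steam_energy / 1e6:.1f} TJ of steam energy per day")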


Waste-to-energy plants operate as either mass burn or refuse-derived fuel systems. A mass burn system uses all the refuse, without prior treatment or preparation. A refuse-derived fuel system separates combustible wastes from noncombustibles such as glass and metal before burning. If a turbine is installed at the plant, both steam and electricity can be produced in a process called cogeneration.

Waste-to-energy systems are more expensive to build and operate than plain incinerators because of the need for special equipment and controls, highly skilled technical personnel, and auxiliary fuel systems. On the other hand, the sale of generated steam or electricity offsets much of the extra cost, and recovery of heat energy from refuse is a viable solid-waste management option from both an engineering and an economic point of view. About 80 percent of municipal refuse incinerators in the United States are waste-to-energy facilities.

Composting

Another method of treating municipal solid waste is composting, a biological process in which the organic portion of refuse is allowed to decompose under carefully controlled conditions. Microbes metabolize the organic waste material and reduce its volume by as much as 50 percent. The stabilized product is called compost or humus. It resembles potting soil in texture and odour and may be used as a soil conditioner or mulch.

Composting offers a method of processing and recycling both garbage and sewage sludge in one operation. As more stringent environmental rules and siting constraints limit the use of solid-waste incineration and landfill options, the application of composting is likely to increase. The steps involved in the process include sorting and separating, size reduction, and digestion of the refuse.

Sorting and shredding

The decomposable materials in refuse are isolated from glass, metal, and other inorganic items through sorting and separating operations. These are carried out mechanically, using differences in such physical characteristics of the refuse as size, density, and magnetic properties. Shredding or pulverizing reduces the size of the waste articles, resulting in a uniform mass of material. It is accomplished with hammer mills and rotary shredders.

Digesting and processing

Pulverized waste is ready for composting either by the open windrow method or in an enclosed mechanical facility. Windrows are long, low mounds of refuse. They are turned or mixed every few days to provide air for the microbes digesting the organics. Depending on moisture conditions, it may take five to eight weeks for complete digestion of the waste. Because of the metabolic action of aerobic bacteria, temperatures in an active compost pile reach about 65 °C (150 °F), killing pathogenic organisms that may be in the waste material.

Open windrow composting requires relatively large land areas. Enclosed mechanical composting facilities can reduce land requirements by about 85 percent. Mechanical composting systems employ one or more closed tanks or digesters equipped with rotating vanes that mix and aerate the shredded waste. Complete digestion of the waste takes about one week.
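A small comparison sketch in Python makes the trade-off explicit. The 85 percent land reduction and the digestion times come from the text; the windrow footprint per tonne of daily throughput is an assumed placeholder.

```python
# Compare land requirements for open-windrow vs. enclosed mechanical composting.
# The 85% land reduction and digestion times come from the text; the base
# windrow footprint per tonne/day is an assumed illustrative figure.

WINDROW_HECTARES_PER_TPD = 0.05        # assumed: hectares per tonne/day throughput
ENCLOSED_LAND_REDUCTION = 0.85         # per the text
WINDROW_WEEKS = (5, 8)                 # per the text
ENCLOSED_WEEKS = 1                     # per the text

def land_needed(tonnes_per_day: float, enclosed: bool) -> float:
    """Hectares required for a composting operation of the given throughput."""
    base = tonnes_per_day * WINDROW_HECTARES_PER_TPD
    return base * (1 - ENCLOSED_LAND_REDUCTION) if enclosed else base

tpd = 100  # hypothetical plant size
print(f"Windrow:  {land_needed(tpd, enclosed=False):.1f} ha, "
      f"{WINDROW_WEEKS[0]}-{WINDROW_WEEKS[1]} weeks digestion")
print(f"Enclosed: {land_needed(tpd, enclosed=True):.1f} ha, "
      f"~{ENCLOSED_WEEKS} week digestion")
```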

Digested compost must be processed before it can be used as a mulch or soil conditioner. Processing includes drying, screening, and granulating or pelletizing. These steps improve the market value of the compost; low market value is the most serious constraint on the success of composting as a waste management option. Agricultural demand for digested compost is usually low because of the high cost of transporting it and because of competition with inorganic chemical fertilizers.


Sanitary landfill

Construction of a sanitary landfill.

Encyclopædia Britannica, Inc.

Land disposal is the most common management strategy for municipal solid waste. Refuse can be safely deposited in a sanitary landfill, a disposal site that is carefully selected, designed, constructed, and operated to protect the environment and public health. One of the most important factors relating to landfilling is that the buried waste never comes in contact with surface water or groundwater. Engineering design requirements include a minimum distance between the bottom of the landfill and the seasonally high groundwater table. Most new landfills are required to have an impermeable liner or barrier at the bottom, as well as a system of groundwater-monitoring wells. Completed landfill sections must be capped with an impermeable cover to keep precipitation or surface runoff away from the buried waste. Bottom and cap liners may be made of flexible plastic membranes, layers of clay soil, or a combination of both.

Constructing the landfill

Two methods of constructing a sanitary landfill. (The top and bottom liners and the leachate…

Encyclopædia Britannica, Inc.

The basic element of a sanitary landfill is the refuse cell. This is a confined portion of the site in which refuse is spread and compacted in thin layers. Several layers may be compacted on top of one another to a maximum depth of about 3 metres (10 feet). The compacted refuse occupies about one-quarter of its original loose volume. At the end of each day’s operation, the refuse is covered with a layer of soil to eliminate windblown litter, odours, and insect or rodent problems. One refuse cell thus contains the daily volume of compacted refuse and soil cover. Several adjacent refuse cells make up a lift, and eventually a landfill may comprise two or more lifts stacked one on top of the other. The final cap for a completed landfill may also be covered with a layer of topsoil that can support vegetative growth.
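The cell arithmetic can be illustrated with a short Python sketch. The four-to-one compaction ratio is from the text; the loose refuse density and the soil-cover allowance are assumed values for illustration.

```python
# Estimate daily refuse-cell volume and site life for a sanitary landfill.
# Compaction to ~1/4 of loose volume is from the text; the density and
# soil-cover fraction are assumed illustrative values.

LOOSE_DENSITY_KG_M3 = 150        # assumed loose refuse density
COMPACTION_RATIO = 0.25          # compacted volume / loose volume (per the text)
COVER_SOIL_FRACTION = 0.20       # assumed: soil cover adds ~20% to cell volume

def daily_cell_volume_m3(tonnes_per_day: float) -> float:
    """Volume of one day's refuse cell (compacted refuse plus soil cover), m^3."""
    loose_m3 = tonnes_per_day * 1000 / LOOSE_DENSITY_KG_M3
    compacted_m3 = loose_m3 * COMPACTION_RATIO
    return compacted_m3 * (1 + COVER_SOIL_FRACTION)

def site_life_years(site_capacity_m3: float, tonnes_per_day: float) -> float:
    """Years until a site of the given airspace capacity is filled."""
    return site_capacity_m3 / (daily_cell_volume_m3(tonnes_per_day) * 365)

# Example: a hypothetical 200-tonne/day community with 2 million m^3 of airspace.
print(f"Daily cell: {daily_cell_volume_m3(200):,.0f} m^3")
print(f"Site life:  {site_life_years(2_000_000, 200):.1f} years")
```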

Daily cover soil may be available on-site, or it may be hauled in and stockpiled from off-site sources. Various types of heavy machinery, such as crawler tractors or rubber-tired dozers, are used to spread and compact the refuse and soil. Heavy steel-wheeled compactors may also be employed to achieve high-density compaction of the refuse.

The area and depth of a new landfill are carefully staked out, and the base is prepared for construction of any required liner and leachate-collection system. Where a plastic liner is used, at least 30 cm (12 inches) of sand is carefully spread over it to provide protection from landfill vehicles. At sites where excavations can be made below grade, the trench method of construction may be followed. Where this is not feasible because of topography or groundwater conditions, the area method may be practiced, resulting in a mound or hill rising above the original ground. Since no ground is excavated in the area method, soil usually must be hauled to the site from some other location. Variations of the area method may be employed where a landfill site is located on sloping ground, in a valley, or in a ravine. The completed landfill eventually blends in with the landscape.

Controlling by-products

Organic material buried in a landfill decomposes by anaerobic microbial action. Complete decomposition usually takes more than 20 years. One of the by-products of this decomposition is methane gas. Methane is explosive when mixed with air in certain proportions, can displace oxygen in confined spaces, and is a potent greenhouse gas. It can also flow long distances through porous layers of soil, and, if it is allowed to collect in basements or other confined areas, dangerous conditions may arise. In modern landfills, methane movement is controlled by impermeable barriers and by gas-venting systems. In some landfills the methane gas is collected and recovered for use as a fuel, either directly or as a component of biogas.
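Engineers often estimate landfill-gas generation with a first-order decay model. The Python sketch below uses the same functional form as models such as the U.S. EPA's LandGEM; the rate constant and methane generation potential are chosen here purely for illustration.

```python
import math

# Simplified first-order decay estimate of annual methane generation from a
# single batch of buried waste. Parameter values are assumed for illustration.

K = 0.05      # decay rate constant, 1/yr (assumed)
L0 = 100.0    # methane generation potential, m^3 CH4 per tonne of waste (assumed)

def methane_m3_per_year(tonnes_buried: float, years_since_burial: float) -> float:
    """Methane generated (m^3/yr) by one batch of waste t years after burial."""
    return K * L0 * tonnes_buried * math.exp(-K * years_since_burial)

# One year's burial of 50,000 tonnes, tracked over time: generation peaks
# early and tails off slowly, consistent with decades-long decomposition.
for t in (1, 10, 20):
    print(f"Year {t:2d}: {methane_m3_per_year(50_000, t):,.0f} m^3 CH4/yr")
```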

A highly contaminated liquid called leachate is another by-product of decomposition in sanitary landfills. Most leachate is the result of runoff that infiltrates the refuse cells and comes in contact with decomposing garbage. If leachate reaches the groundwater or seeps out onto the ground surface, serious environmental pollution problems can occur, including the possible contamination of drinking-water supplies. Methods of controlling leachate include the interception of surface water in order to prevent it from entering the landfill and the use of impermeable liners or barriers between the waste and the groundwater. New landfill sites should also be provided with groundwater-monitoring wells and leachate-collection and treatment systems.

Importance in waste management

In communities where appropriate sites are available, sanitary landfills usually provide the most economical option for disposal of nonrecyclable refuse. However, it is becoming increasingly difficult to find sites that offer adequate capacity, accessibility, and environmental conditions. Nevertheless, landfills will always play a key role in solid-waste management. It is not possible to recycle all components of solid waste, and there will always be residues from incineration and other treatment processes that will eventually require disposal underground. In addition, landfills can actually improve poor-quality land. In some communities properly completed landfills are converted into recreational parks, playgrounds, or golf courses.


Recycling

The role of recycling in solid-waste disposal.

Encyclopædia Britannica, Inc.

Learn how automobiles are recycled, including the uses of various parts.

Contunico © ZDF Enterprises GmbH, Mainz

Learn why garbage is a valuable resource.

Contunico © ZDF Enterprises GmbH, Mainz

Separating, recovering, and reusing components of solid waste that may still have economic value is called recycling. One type of recycling is the recovery and reuse of heat energy, a practice discussed above in the section on incineration. Composting can also be considered a recycling process, since it reclaims the organic parts of solid waste for reuse as mulch or soil conditioner. Still other waste materials have potential for reuse. These include paper, metal, glass, plastic, and rubber, and their recovery is discussed here.

Separation

Before any material can be recycled, it must be separated from the raw waste and sorted. Separation can be accomplished at the source of the waste or at a central processing facility. Source separation, also called curbside separation, is done by individual citizens who collect newspapers, bottles, cans, and garbage separately and place them at the curb for collection. Many communities allow “commingling” of nonpaper recyclables (glass, metal, and plastic). In either case, municipal collection of source-separated refuse is more expensive than ordinary refuse collection.

In lieu of source separation, recyclable materials can be separated from garbage at centralized mechanical processing plants. Experience has shown that the quality of recyclables recovered from such facilities is lowered by contamination with moist garbage and broken glass. The best practice, as now recognized, is to have citizens separate refuse into a limited number of categories, including newspaper; magazines and other wastepaper; commingled metals, glass, and plastics; and garbage and other nonrecyclables. The newspaper, other paper wastes, and commingled recyclables are collected separately from the other refuse and are processed at a centralized material recycling facility, or MRF (pronounced “murf” in waste-management jargon). A modern MRF can process about 300 tons of recyclable wastes per day.

At a typical MRF, commingled recyclables are loaded onto a conveyor. Steel cans (“tin” cans are actually steel with only a thin coating of tin) are removed by an electromagnetic separator, and the remaining material passes over a vibrating screen in order to remove broken glass. Next, the conveyor passes through an air classifier, which separates aluminum and plastic containers from heavier glass containers. Glass is manually sorted by colour, and aluminum cans are separated from plastics by an eddy-current separator, which repels the aluminum from the conveyor belt.
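The separation sequence can be pictured as a pipeline in which each stage removes one material class. The toy Python model below mirrors the order described above; the item labels and the list-based stream are invented for illustration.

```python
# Toy model of the MRF separation sequence described in the text, in order:
# electromagnet (steel cans) -> vibrating screen (broken glass) ->
# air classifier (light containers vs. heavy glass) -> eddy-current
# separator (aluminum vs. plastic). Item labels are illustrative assumptions.

def sort_commingled(stream: list[str]) -> dict[str, list[str]]:
    bins: dict[str, list[str]] = {
        "steel": [], "broken_glass": [], "glass": [], "aluminum": [], "plastic": []
    }

    # Stage 1: electromagnetic separator pulls out steel cans.
    remaining = []
    for item in stream:
        (bins["steel"] if item == "steel_can" else remaining).append(item)

    # Stage 2: vibrating screen drops broken glass out of the stream.
    screened = []
    for item in remaining:
        (bins["broken_glass"] if item == "glass_shard" else screened).append(item)

    # Stage 3: air classifier separates light containers from heavy glass.
    light = []
    for item in screened:
        (bins["glass"] if item == "glass_bottle" else light).append(item)

    # Stage 4: eddy-current separator repels aluminum away from plastics.
    for item in light:
        bins["aluminum" if item == "aluminum_can" else "plastic"].append(item)

    return bins

stream = ["steel_can", "glass_bottle", "aluminum_can", "plastic_bottle",
          "glass_shard", "steel_can"]
print({material: len(items) for material, items in sort_commingled(stream).items()})
```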

Reuse

Recovered broken glass can be crushed and used in asphalt pavement. Colour-sorted glass is crushed and sold to glass manufacturers as cullet, an essential ingredient in glassmaking. Steel cans are baled and shipped to steel mills as scrap, and aluminum is baled or compacted for reuse by smelters. Aluminum is one of the smallest components of municipal solid waste, but it has the highest value as a recyclable material. Recycling of plastic is a challenge, mostly because of the many different polymeric materials used in its production. Mixed thermoplastics can be used only to make lower-quality products, such as “plastic lumber.”

In the paper stream, old newspapers are sorted by hand on a conveyor belt in order to remove corrugated materials and mixed papers. They are then baled or loose-loaded into trailers for shipment to paper mills, where they are reused in the making of more newspaper. Mixed paper is separated from corrugated paper for sale to tissue mills. Although the processes of pulping, de-inking, and screening wastepaper are generally more expensive than making paper from virgin wood fibres, the market for recycled paper has grown with the establishment of more processing plants.

Rubber is sometimes reclaimed from solid waste and shredded, reformed, and remolded in a process called revulcanization, but it is usually not as strong as the original material. Shredded rubber can be used as an additive in asphalt pavements and artificial turf and is also sold directly as an outdoor mulch. Discarded tires may be employed as swings and other recreational structures for use by children in “tire playgrounds.”

In general, the most difficult problem associated with the recycling of any solid-waste material is finding applications and suitable markets. Recycling by itself will not solve the growing problem of solid-waste management and disposal. There will always be some unusable and completely valueless solid residue requiring final disposal.

Jerry A. Nathanson

Citation (MLA style):


"Solid-waste management." Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 10 Nov. 2020. packs-preview.eb.com. Accessed 10 Aug. 2021.


satellite imagery of deforestation

Colour-coded Landsat satellite images of Brazil's Carajás mining area, documenting extensive deforestation between 1986 (left) and 1992 (right). Areas of cleared land appear bluish green.

NASA Landsat Pathfinder/Tropical Rainforest Information Center

Citation (MLA style):

Satellite imagery of deforestation. Image. Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.



projected changes in mean surface temperatures

Projected changes in mean surface temperatures by the late 21st century according to the A1B climate change scenario. All values for the period 2090–99 are shown relative to the mean temperature values for the period 1980–99.

Encyclopædia Britannica, Inc.

Citation (MLA style):

Projected changes in mean surface temperatures. Image. Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.



acid rain

U.S. emissions of SO₂, NOₓ, and NH₃, 1970–85 (five-year intervals) and 1990–2008 (one-year intervals).

Encyclopædia Britannica, Inc.

Citation (MLA style):

Acid rain. Image. Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.



tree: effects of acid rain

Spruce trees damaged by acid rain in Karkonosze National Park, Poland.

Simon Fraser—Science Photo Library/Photo Researchers, Inc.

Citation (MLA style):

Tree: effects of acid rain. Image. Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.


Grinnell Glacier shrinkage


A series of photographs of the Grinnell Glacier taken from the summit of Mount Gould in Glacier National Park, Montana, in (from left) 1938, 1981, 1998, and 2006. In 1938 the Grinnell Glacier filled the entire area at the bottom of the image. By 2006 it had largely disappeared from this view.

1938: T.J. Hileman/Glacier National Park Archives; 1981: Carl Key/USGS; 1998: Dan Fagre/USGS; 2006: Karen Holzer/USGS

Citation (MLA style):

Grinnell Glacier shrinkage. Image. Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.


Brazil

The coastal forest of Rio de Janeiro state, Brazil, badly fragmented as portions were cleared for cattle grazing.

Courtesy, Stuart L. Pimm

Citation (MLA style):

Brazil. Image. Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.



Study the effect of increasing concentrations of carbon dioxide on Earth's atmosphere and plant life

Video Transcript

NARRATOR: How do industry, agriculture, and forestry affect weather and climate on the Earth? Today people worry whether the atmosphere has too much carbon dioxide. They actively release carbon dioxide into the air by burning fossil fuels. Plants remove carbon dioxide from the air as they grow and give off oxygen as a by-product of photosynthesis. Photosynthetic plankton in the sea do the same. But as forests are harvested and oceans become polluted, the plants and plankton--as well as their ability to restore the air--are removed from the ecosystem, and the balance of the air's chemistry may change. As the concentration of carbon dioxide increases in the air, Earth's natural greenhouse effect is enhanced. Most gases in the air do not slow the emission of heat back into space from the Earth. However, carbon dioxide in the air holds some of this heat near Earth's surface. Thus, as carbon dioxide increases in the atmosphere, its ability to retain heat also increases, so air near Earth's surface becomes warmer. Most scientists fear that increases in carbon dioxide concentrations have been responsible for increases in global average temperatures. Analyses of climate data show that some formerly stable glaciers and ice shelves have begun to melt away. As this water runs off the land, it enters the oceans, causing sea levels to rise, and cities located along coastlines are put at greater risk of flooding. A warmer atmosphere may also bring about changes in wind and rainfall. Such changes can affect crop production: some areas may receive too much rainfall, while others may receive too little. Some historically productive growing areas are expected to decline, while new areas formerly unsuited for agriculture might become productive. The chain of consequences that result from modified climate and agriculture may alter Earth's economy and politics.
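The narrator's point that added carbon dioxide retains progressively more heat is often summarized with a simplified logarithmic radiative-forcing expression. The Python sketch below uses the widely cited approximation of Myhre et al. (1998); it illustrates only the logarithmic relationship, not a full climate model.

```python
import math

# Simplified radiative forcing from CO2: dF = 5.35 * ln(C / C0) W/m^2
# (Myhre et al., 1998). C0 is a pre-industrial baseline concentration.

def co2_forcing_w_m2(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Approximate radiative forcing from raising CO2 from c0_ppm to c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling CO2 from 280 to 560 ppm yields about +3.7 W/m^2 of forcing.
for c in (280, 400, 560):
    print(f"{c} ppm: {co2_forcing_w_m2(c):+.2f} W/m^2 relative to 280 ppm")
```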

An overview of the role greenhouse gases play in modifying Earth's climate.

Encyclopædia Britannica, Inc.

Citation (MLA style):


Study the effect of increasing concentrations of carbon dioxide on Earth's atmosphere and plant life. Video. Britannica LaunchPacks: Human Populations and Their Environment, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.

