
Britannica LaunchPacks | Louis Pasteur

For Grades 6-8

This Pack contains:

5 ARTICLES 4 IMAGES 1 VIDEO


Louis Pasteur

(1822–95). The French chemist Louis Pasteur devoted his life to solving practical problems of industry, agriculture, and medicine. His discoveries have saved countless lives and created new wealth for the world. Among his discoveries are the pasteurization process and ways of preventing silkworm diseases, anthrax, chicken cholera, and rabies.

Louis Pasteur.

Archives Photographiques, Paris

Pasteur sought no profits from his discoveries, and he supported his family on his professor’s salary or on a modest government allowance. In the laboratory he was a calm and exact worker; but once sure of his findings, he vigorously defended them. Pasteur was an ardent patriot, zealous in his ambition to make France great through science.

Scholar and Scientist

Louis Pasteur was born on Dec. 27, 1822, in Dole, France. His father was a tanner. In 1827 the family moved to nearby Arbois, where Louis went to school. He was a hard-working pupil but not an especially brilliant one.

When he was 17 he received a degree of bachelor of letters at the Collège Royal de Besançon. For the next three years he tutored younger students and prepared for the École Normale Supérieure, a noted teacher-training college in Paris. As part of his studies he investigated the crystallographic, chemical, and optical properties of various forms of tartaric acid. His work laid the foundations for later study of the geometry of chemical bonds. Pasteur’s investigations soon brought him recognition and also an appointment as assistant to a professor of chemistry.

Pasteur received a doctor of science degree in 1847 and was appointed professor of chemistry at the University of Strasbourg. Here he met Marie Laurent, daughter of the rector of the university. They were married in 1849. Pasteur’s wife shared his love for science. They had five children; three died in childhood.

Research in Fermentation and Souring

In 1854 Pasteur became professor of chemistry and dean of the school of science (Faculté des Sciences) at the University of Lille. Hearing of Pasteur’s ability, a local distiller came to him for help in controlling the process of making alcohol by fermenting beet sugar. Pasteur saw that fermentation was not a simple chemical reaction but took place only in the presence of living organisms. He learned that fermentation, putrefaction, infection, and souring are caused by germs, or microbes.

Pasteur published his first paper on the formation of lactic acid and its function in souring in 1857. Further studies developed the valuable technique of pasteurization (see Dairy Industry). The same year he was appointed manager and director of scientific studies at his old school, the École Normale Supérieure. During the next several years he extended his studies into the germ theory of disease. He spent much time proving to doubting scientists that germs do not originate spontaneously in matter but enter from the outside.

Developing Cures for Agricultural Diseases

In 1865 Pasteur was asked to help the French silk industry, which was near ruin as a result of a mysterious disease that attacked the silkworms. After intensive research, he discovered that two diseases were involved, both caused by bacteria on the mulberry leaves that provided food for the worms. The diseases were transmitted through the eggs to the next generation of worms. Pasteur showed the silkworm breeders how to identify healthy eggs under the microscope, how to destroy diseased eggs and worms, and how to prevent formation of disease bacteria on the mulberry leaves.

At 45 Pasteur was struck by paralysis. For a time recovery was uncertain, and he was confined to bed for months. The attack left its mark; for the rest of his life, one foot dragged a little as he walked.

In 1877 Pasteur began to seek a cure for anthrax, a disease that killed cattle, sheep, and other farm animals. He drew on research he was conducting on another animal disease, chicken cholera. When he inoculated healthy chickens with weakened cultures of the cholera microbes, the chickens suffered only a mild sickness and were thereafter immune to the disease. Pasteur successfully applied this technique of immunization to the prevention of anthrax.

Many scientists challenged Pasteur’s anthrax prevention claims, and Pasteur agreed to a dramatic test. Forty-eight sheep and a few cows and goats were gathered in a pasture near the town of Melun. Half the animals were first immunized with cultures of weakened anthrax microbes; then all were injected with strong cultures.

Within a few days, the untreated animals were dead; but the immunized animals showed no effect of the disease. The test verified Pasteur’s results beyond all doubt. Later he proposed that all such weakened cultures be called vaccines and the inoculating technique, vaccination (see Vaccines).


Treatment for Rabies

Human beings contract rabies (or hydrophobia) when they are bitten by a dog or another animal that is suffering from the disease. Rabies slowly destroys the central nervous system by attacking the spinal cord.

Pasteur reasoned that it might be possible to immunize people after they had been bitten but before destruction of the spinal cord began. He took spinal cord tissues of animals that had died of rabies and dried them for varying periods of time. He then made cultures of the tissues and injected them into another stricken animal. The first inoculation was from the driest, weakest culture, and each successive inoculation was stronger. After repeated failures, he finally succeeded in halting the development of rabies in an infected dog. The treatment required 14 inoculations.

Pasteur hesitated to try the remedy on humans. The decision was forced on him in 1885 when the mother of 9-year-old Joseph Meister begged Pasteur to save her son. The boy had been bitten 14 times by a rabid dog. Pasteur treated the child. The wounds healed and no trace of rabies appeared. Thus Joseph became the first person saved by Pasteur’s treatment.

Pasteur had won many honors for his previous discoveries; now the world united to do him special homage. Thousands of people contributed funds to establish a great laboratory, the Pasteur Institute, where scientists conduct research on various diseases. Pasteur died near St-Cloud on Sept. 28, 1895.

Citation (MLA style):

"Louis Pasteur." Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 19 Feb. 2021. packs-preview. eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

chemistry

Britannica Note:

Content on this topic is available in the sections "The Rise of Organic Chemistry" and "More New Fields of Investigation."

The science of chemistry is the study of matter and the chemical changes that matter undergoes. Research in chemistry not only answers basic questions about nature but also affects people’s lives. Chemistry has been used, for example, to make stronger metals, to enrich soil for growing crops, to destroy harmful bacteria, and to measure levels of pollutants in the environment. It has also made possible the development of such new substances as plastics, fibers, and new medicines.


Chemistry students conduct an experiment with their teacher.

© Benis Arapovic/Shutterstock.com

Knowledge of chemistry is important in such fields as biology, medicine, agriculture, archaeology, and engineering. Chemists themselves are employed by educational and research institutions, government, and industry.

The function of the chemical industry is to create new chemicals and to supply the products of chemical research to the world. Using chemical reactions, the industry converts raw materials such as water, salt, metals and minerals, petroleum, coal, natural gas, plant cellulose and starch, and atmospheric gases into other products. Some of these products are used by manufacturers. They include the chemicals needed to make pulp and paper products; wallboard, pipe, insulation, and other construction materials; polymers for plastic toys, bottles, films, paints, and other items; and fibers for carpets and fabrics. Other chemical products that are used by consumers more directly include medicines, dyes, paints, fertilizers, shampoos, detergents and waxes, perfumes, cosmetics, and flavorings.

Chemistry Aids in Understanding the World

James Watson and Francis Crick determined the structure of deoxyribonucleic acid, or DNA, in 1953.

Encyclopædia Britannica, Inc.

The work of chemistry is generally described as analysis and synthesis. Chemists analyze substances by taking them apart to find out what they are made of. Chemists synthesize substances by putting them together in different, possibly more useful combinations. Assisted by specialized instruments and computers, chemists study materials as small as single atoms and as large and complex as DNA (deoxyribonucleic acid), which contains millions of atoms. (See also chemical analysis.)

The chemist wants to understand how the universe is put together and how the substances in it can be changed to the advantage of humans. Chemists and physicists have made great advances in these efforts. They have discovered that all things in the universe can be classified as matter or energy and that matter and energy can be converted from one to the other.


The behavior of particles differs in each of the four states of matter.

Encyclopædia Britannica, Inc.

Matter itself can be classified according to its physical state: solid, liquid, gas, or plasma. A study of the physical states has led to the conclusion that their characteristics can be explained by assuming that matter consists of particles in motion. This moving-particle theory, called the kinetic-molecular theory of matter, explains many common phenomena such as the evaporation of liquids and the diffusion of gases. Matter can also be grouped according to the nature of its composition: element, compound, or mixture. (See also heat; physical chemistry.)

Atomic Theory

The study of matter has helped chemists discover that the whole universe is made up of chemical building blocks called elements. More than 90 elements are known to exist in nature, and scientists have made about 20 additional elements artificially. The science of chemistry involves a study of these elements and of the compounds that are formed when different elements combine.

Chemists also now know that elements are made of atoms. Some early Greek philosophers had suggested that matter was made of atoms, but not until the beginning of the 19th century did physicists and chemists begin to collect the evidence needed to prove the theory and to understand the nature of atoms.

The atomic theory can be summarized as follows: (1) Ordinary matter is made of small particles called atoms. (2) Atoms of the same element have the same average masses, and atoms of different elements have different average masses. (3) Chemical reactions take place between atoms or groups of atoms. (See also atomic particles; nuclear energy.)

Chemical Change

In a chemical reaction, the atoms of the starting substances, or reactants, are rearranged to form…

Encyclopædia Britannica, Inc.


When gasoline is burned, bread is eaten, or plastics are produced from petroleum, chemical changes occur. When a substance undergoes a chemical change, mass is conserved—that is, the amount of mass in the substance is the same before and after the chemical change takes place. The atoms keep their original identity and their number stays the same, but they are rearranged into new combinations. However, the chemical properties of the starting materials vanish, and the properties of the new materials appear. The process by which substances are changed into other substances is called a chemical reaction.

In addition to chemical change, matter can also undergo two other kinds of changes—physical and nuclear. The boiling of water, the melting of ice, and the dissolving of table sugar in tea are all examples of physical changes. In all these changes the chemical composition of the substances involved remains the same. As with chemical changes, mass is conserved when matter undergoes a physical change. The only change is in physical form. (See also conservation of mass.)

The phenomenon of radioactivity, the fission of uranium-235, and the fusion of hydrogen atoms (to form helium atoms), with the resultant release of atomic energy, are examples of nuclear changes. In each case the atomic nucleus changes and one kind of atom is transformed into another. One element may be changed, or transmuted, into one or more other elements, and in the process a small amount of mass is converted into energy. (See also nuclear energy.)

Elements, Compounds, and Mixtures

Sodium bicarbonate (NaHCO3), more commonly known as baking soda, is a chemical compound. It is…

Encyclopædia Britannica, Inc.

A sample of a pure element contains atoms that are chemically the same, but different from those of all other elements. Pure copper contains only copper atoms, pure oxygen only oxygen atoms. The tendency of atoms of different elements to combine makes possible a great variety and number of compounds. A compound is made up of two or more elements that have undergone a chemical change. Water, sugar, baking soda, and salt are examples of familiar compounds.


The synthesis of water is a chemical reaction in which two molecules of hydrogen (H2) combine with…

Encyclopædia Britannica, Inc.

Compounds are formed by a process called a chemical reaction. Elements combine, or react, to form compounds that have physical and chemical properties different from those of the original elements. For example, when atoms of hydrogen gas and oxygen gas are combined at room temperature, they undergo a chemical reaction and form water. Water is a compound that is liquid at room temperature and is unlike either hydrogen or oxygen. Methane gas, made from carbon and hydrogen, and table sugar, made from carbon, hydrogen, and oxygen, are other examples of compounds.

A water molecule is composed of two atoms of hydrogen (H) and one atom of oxygen (O). Although the…

Encyclopædia Britannica, Inc.

A compound has a definite composition—its constituent elements always occur in the same proportion, or formula. Water, for example, always contains two hydrogen atoms for every oxygen atom. The number of known compounds, both natural and artificial, is in the millions. Countless others remain to be discovered or synthesized in the laboratory.

Substances such as milk, paint, ink, air, and muddy water are mixtures, as are rocks and metal alloys. The atoms and other particles in a mixture are intermingled, but they have not combined with each other and undergone chemical changes. Unlike compounds, mixtures do not have definite composition; this is why their composition cannot be described by a fixed formula. (See also colloid; solution.)

Mixtures can usually be separated by physical methods, such as chromatography, evaporation, distillation, or filtration. The choice of method depends largely on the nature of the mixture and the type of substances it contains.

Chromatography

Chromatography consists of a collection of methods that can be used to separate the substances dissolved in a liquid or gas mixture. The process has two main parts, or phases—a stationary phase and a moving, or mobile, phase. The stationary phase is usually a liquid or solid medium; the mobile phase is generally a liquid or gas solvent. The mixture being analyzed is applied to the medium (stationary phase), which is then placed in contact with the solvent (mobile phase). The solvent moves through the medium by capillary action, carrying the mixture substances with it. The individual substances in the mixture move at different rates based on certain properties—such as molecular size. Each substance in the mixture separates out from the moving solvent and adsorbs to the medium at a different point, enabling identification.

This photograph shows the results of a chromatography process that was used to separate the pigments …

Natrij

The different methods of chromatography include gas chromatography, thin-layer chromatography, and paper chromatography. Paper chromatography is especially useful for separating pigmented mixtures, such as inks and plant material, and identifying their components by color. A small amount of the mixture being studied is applied near the edge of a special filter paper. The paper is hung over a container of a solvent such as water or alcohol, so that the edge of the paper is in contact with the solvent. The solvent travels up the paper, carrying the mixture with it. Different pigments in the mixture separate out and adhere to the paper at different points, allowing their identification by color. (See also chemical analysis.)
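
In paper chromatography, each separated substance is often characterized by its retention factor (Rf), the ratio of the distance the spot travels to the distance the solvent front travels. The short Python sketch below shows that calculation; the pigment names and distances are hypothetical values used only for illustration and are not taken from the article.

# Retention factor (Rf) = distance moved by substance / distance moved by solvent front.
# The pigment names and distances below are made-up example values.

def retention_factor(substance_distance_cm, solvent_front_distance_cm):
    """Return the Rf value for one spot on a paper chromatogram."""
    return substance_distance_cm / solvent_front_distance_cm

solvent_front_cm = 10.0            # how far the solvent traveled up the paper
pigment_distances_cm = {           # how far each separated pigment traveled
    "yellow pigment": 8.5,
    "green pigment": 4.2,
    "blue pigment": 1.9,
}

for name, distance in pigment_distances_cm.items():
    rf = retention_factor(distance, solvent_front_cm)
    print(f"{name}: Rf = {rf:.2f}")   # Rf is always between 0 and 1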

Evaporation

Evaporation is used to separate a soluble solid from a liquid mixture. In the lab, an open container of the mixture is heated. As the temperature of the mixture rises, the liquid part evaporates, or vaporizes, into the air. The solid part of the mixture is left behind in the container, where it can be collected and analyzed. Evaporation can also occur without heat: if a container of a liquid mixture is left open, the liquid portion will gradually evaporate into the air, leaving the solid component behind. This is the basis of the method used commercially to harvest sea salt from seawater.

Distillation

Distillation is used to separate the liquid from a solution. The liquid is heated to its boiling point, converting it to a vapor, or gas state, which is then cooled and condensed back to its liquid form in a separate apparatus. Distillation methods include simple distillation and fractional distillation.

Simple distillation is used to separate one liquid from a solution. The solution is heated to its boiling point so that the liquid evaporates. The rising vapor is captured in a separate apparatus, where it is cooled and condensed into a liquid state again. The solid material that was in the solution is left behind. Simple distillation can be used to separate and collect the water in a saltwater solution; the water evaporates, and the vapor is cooled and condensed to form liquid water in a separate container, leaving the salt behind.


Fractional distillation is similar to simple distillation but entails more steps. It is used to separate two or more liquids that have different boiling points. The solution is raised first to the lowest of the boiling points, allowing the liquid to vaporize and then condense separately into liquid form. The remaining solution is then heated to the next highest boiling point, allowing vaporization and condensation to take place. Fractional distillation would be used, for example, to separate a solution containing ethanol and water. The solution would first be heated to 173 °F (78 °C), the boiling point of ethanol. The ethanol would vaporize, and its vapor would be cooled and condensed back to liquid ethanol in a separate container. The remaining solution would then be heated to 212 °F (100 °C), the boiling point of water. The water vapor would then be cooled and condensed back to its liquid form. Any other liquids, as well as any solid material, would be left behind.
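
The logic of fractional distillation can be summarized as collecting components in order of increasing boiling point. The Python sketch below is a simplified picture of that ordering, assuming complete, ideal separation at each step (real mixtures such as ethanol and water separate less cleanly); the component list and temperatures are illustrative.

# Simplified model of fractional distillation: components boil off, and are
# collected, in order of increasing boiling point. Real separations are less
# clean than this idealized sketch suggests.

components = [
    # (name, boiling point in degrees Celsius) -- illustrative values
    ("water", 100.0),
    ("ethanol", 78.4),
]

for name, bp_celsius in sorted(components, key=lambda item: item[1]):
    bp_fahrenheit = bp_celsius * 9 / 5 + 32
    print(f"Heat the mixture to about {bp_celsius:.0f} C ({bp_fahrenheit:.0f} F); "
          f"collect the {name} vapor and condense it in a separate container.")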

Filtration

Filtration is a useful method for separating an insoluble solid from a liquid mixture, such as a mixture of sand, pebbles, and water. The mixture is poured through a filtering device such as a sieve or a funnel lined with filter paper; even a simple coffee filter can be used. The liquid portion of the mixture passes through the filter paper. The solid sand and pebbles will not pass through the paper, however, and will be “trapped” and retained in the filter.

Molecules and Chemical Formulas

An atom of sodium (Na) donates one of its electrons to an atom of chlorine (Cl) in a chemical…

Encyclopædia Britannica, Inc.

Elements vary from those that are highly active to those that are almost completely inert. The tendency of an atom to combine is a chemical property of the atom, and it is controlled by electrical forces at the atomic level. Those forces forge the links, called chemical bonds, that hold two or more atoms together. The groups of atoms formed by chemical bonding are known as molecules. Molecules are the smallest units of a compound that can exist while still keeping the substance’s composition and chemical properties. (See also inorganic chemistry; organic chemistry.)


The chemical symbol for the element iron is Fe.

Encyclopædia Britannica, Inc.

The periodic table arranges the chemical elements in order of increasing atomic number—the total…

Encyclopædia Britannica, Inc.

To make elements and compounds easier to describe in written form, chemists commonly indicate each element with an abbreviation called a chemical symbol. This is usually the initial or the first two letters of the common name or the Latin name of the element. Chemists use such shorthand to represent not only the name of an element but also one atom of that element. The symbols for single atoms of hydrogen, carbon, and iron, for example, are H, C, and Fe. (See also periodic table.)

Atoms of one element can combine with each other to form molecules. For example, atoms of oxygen (O), hydrogen (H), nitrogen (N), and chlorine (Cl) tend to react with their own kind to form two-atom molecules. These molecules are represented by the abbreviations O2, H2, N2, and Cl2, respectively. The subscript 2 shows that two atoms of the element make up one molecule of the element. Oxygen also can form a three-atom molecule called ozone that has the abbreviation O3. Abbreviations of molecules written in this way are known as chemical formulas. They show both the kinds of atoms and the number of each kind in the molecule.
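
Because a chemical formula is just element symbols followed by subscript counts, a small program can recover the atom counts from the written formula. The Python sketch below is a minimal parser for simple formulas such as H2O or O3 (no parentheses, hydrates, or charges); it is an illustration added here, not part of the article.

import re

# Parse simple chemical formulas such as "H2O", "O3", or "NH4OH" into atom counts.
# This minimal sketch handles an element symbol followed by an optional count only.

def atom_counts(formula):
    counts = {}
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] = counts.get(symbol, 0) + (int(number) if number else 1)
    return counts

print(atom_counts("H2O"))    # {'H': 2, 'O': 1} -- two hydrogen atoms, one oxygen atom
print(atom_counts("O3"))     # {'O': 3} -- ozone
print(atom_counts("NH4OH"))  # {'N': 1, 'H': 5, 'O': 1}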

Six gaseous elements—helium, neon, argon, krypton, xenon, and radon—were thought to be completely inactive chemically when they were first discovered and so were named inert gases. All are present in the air as single, uncombined atoms; thus in their gaseous forms their symbols (for example, He for helium) are written without subscripts. Since the early 1960s chemists have found that under proper conditions at least some of them can be made to form compounds. Today the preferred name of this family of elements is noble gases.


The reaction of methane (CH4) and chlorine (Cl2), forming the products chloromethane (CH3Cl) and…

Encyclopædia Britannica, Inc.

Chemical formulas are also used to represent compounds and their molecular compositions. One atom each of hydrogen and chlorine react to form the compound hydrogen chloride (see hydrochloric acid). The chemical formula for hydrogen chloride is HCl. Hydrogen atoms combined with carbon atoms in a proportion of four to one form the flammable gas methane, which is represented as CH4. One atom of aluminum (Al) combines with three atoms of chlorine to form aluminum chloride (AlCl3), which in a hydrated form (combined with molecules of water) is used in antiperspirants.

Iron reacts with the oxygen and water in moist air to form iron oxide, a powdery substance commonly…

Paulnasca

Even in the form of molecules, atoms of elements may still react with other elements. Oxygen molecules in the air (O2) undergo a complex reaction with atoms of iron (Fe) when the iron is exposed to moist air. They combine with the iron to form rust, which is a chemical compound called ferric oxide (Fe2O3). In the same way, molecules of compounds can react with elements and with other compounds. A water molecule and an ammonia molecule (NH3), for example, can combine into a new molecule called ammonium hydroxide (NH4OH).


Two models display the three-dimensional shape of a methane molecule: a ball-and-stick model (A) and …

Encyclopædia Britannica, Inc.

In reading chemical formulas, it is important to keep in mind what they do not show. Formulas usually say little about the chemical bonding of the atoms. In addition, they are necessarily written “flat” on paper or a chalkboard, although molecules actually exist in three dimensions in space. For example, the formula for methane, CH4, does not show that all four hydrogen atoms are attached only to the carbon atom, or that the hydrogen atoms are arrayed in space around the carbon atom at equal distances and angles. Thus the molecule resembles a tetrahedron, or four-sided pyramid (including the base), with the carbon atom at the center and the hydrogen atoms at the corners. In solving chemical problems—such as designing a new plastic, for example—chemists often must consider the geometric structures of the molecules that will react and the molecules that will be produced. (See also polymer.)

Chemical Equations

To express the reactions that form and break down molecules, chemists write chemical equations. These are shorthand versions of ordinary word description, and they make use of symbols and formulas for elements and compounds. For example:

Word description: Two molecules of hydrogen react with one molecule of oxygen to form two molecules of water.

Chemical equation: 2H2 + O2 → 2H2O

In this chemical reaction, two molecules of ammonia (NH3) are decomposed, or broken down, into one…

Encyclopædia Britannica, Inc.

Word description: Two molecules of ammonia react with each other to form one molecule of nitrogen and three molecules of hydrogen.

Chemical equation: 2NH3 → N2 + 3H2
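
A quick way to check that equations like the two above are balanced is to count the atoms of each element on both sides of the arrow. The Python sketch below does this for the water and ammonia equations; it handles only plain formulas without parentheses and is an illustration added here, not part of the article.

import re
from collections import Counter

# Count atoms on each side of a simple equation, e.g. 2H2 + O2 -> 2H2O.
# Handles plain formulas (no parentheses) with an optional leading coefficient.

def parse_side(side):
    totals = Counter()
    for term in side.split("+"):
        term = term.strip()
        match = re.match(r"(\d*)(.*)", term)
        coefficient = int(match.group(1)) if match.group(1) else 1
        for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", match.group(2)):
            totals[symbol] += coefficient * (int(number) if number else 1)
    return totals

def is_balanced(equation):
    left, right = equation.split("->")
    return parse_side(left) == parse_side(right)

print(is_balanced("2H2 + O2 -> 2H2O"))   # True: 4 H and 2 O on each side
print(is_balanced("2NH3 -> N2 + 3H2"))   # True: 2 N and 6 H on each side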


In chemical equations, the substances on the left side of the arrow—those undergoing the chemical change—are called reactants. The substances on the right side—the result of the reaction—are called products. The arrow can be read as “give,” “form,” or “yield.”

History of Chemistry

Modern chemistry is only about two centuries old. The earlier history of chemistry may be divided into three periods: magic; alchemy; and “primitive modern,” a period of transition between alchemy and truly modern chemistry.

The Period of Magic

The period of magic extended from prehistoric times to about the beginning of the 1st century AD. Most people believed that natural processes were controlled by spirits, and they relied on magic to persuade the spirits to help while they conducted practical operations. Very little progress was made toward understanding how the universe is made, but much practical knowledge was gathered. Perhaps 9,000 years ago, people devised reliable techniques for making and sustaining fire. Gradually they learned to use fire to harden pottery, extract metals from ores, make alloys, and develop materials such as glass. Certain elements that occur naturally in a pure state, such as gold, copper, and sulfur, were recognized and valued for their properties. This was the period of the Sumerian, Babylonian, Egyptian, and Greek cultures.

About 400 BC the Greek philosopher Democritus theorized that all matter was made up of tiny, indivisible units he called atoms, but his idea was not based on scientific evidence. Other Greek philosophers, including Thales and Aristotle, also speculated on the nature of matter, though their theories, too, had little in common with modern chemical knowledge. They believed that earth, air, fire, and water (some imagined a fifth substance called “quintessence”) were the basic elements of all matter. They speculated on the possibility of removing such qualities as hardness, heat or cold, and color from common materials and combining them to make rarer or more valuable substances. They knew that iron could be drawn from a dirty, brown earthen rock and that bronze was made by combining copper and tin. Therefore it seemed possible that if yellowness, hardness, and other qualities could be properly combined, the product would be gold. Such speculations gave rise to alchemy.

The Sterile Period of Alchemy

The span of time from about the beginning of the 1st century AD to about the 17th century is considered the period of alchemy. The alchemists believed that metals could be converted into gold with the aid of a marvelous mineral called the philosopher’s stone, which they never succeeded in finding or making. They did discover new elements, and they invented basic laboratory equipment and techniques that are still used by chemists. However, the alchemists learned very little that was worthwhile concerning the fundamental nature of matter or of chemical behavior. They failed because their basic theories had almost nothing to do with what actually happens in chemical reactions.

In the 13th century such men as Roger Bacon, Albertus Magnus, and Raymond Lully began to realize how futile it was to search for the philosopher’s stone. They suggested that alchemists might rather seek to help the world with useful new products and methods.

In the 16th century, another important leader in the new trend was Bombastus von Hohenheim, an aggressive, talented Swiss who used the Latin name Paracelsus. He insisted that the object of alchemy should be the cure of the sick. The elements, he said, were salt, sulfur, and mercury (long connected with the “elixir of life,” another nonexistent alchemical substance), and they would give health if present in the body in the proper proportions. On this basis he practiced medicine and attracted many followers. Thus began iatrochemistry, or chemistry applied to the study of medicine and the treatment of disease.

An illustration demonstrates Boyle's Law, showing that for a given mass, at constant temperature,…

Encyclopædia Britannica, Inc.

One of the first scientific chemists was Robert Boyle. In 1660 he helped found one of the first scientific organizations in Europe, the Royal Society of London. In a book called The Sceptical Chymist (1661) he rejected previous theories of the composition of matter and compiled the first list of the elements that are recognized today. He also discovered the relationship between the volume and the pressure of a gas.
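
The pressure-volume relationship Boyle found is now written as Boyle's law: at constant temperature, pressure times volume stays constant for a fixed amount of gas. The article does not state the formula itself; the short Python sketch below applies it to a hypothetical gas sample for illustration.

# Boyle's law: at constant temperature, P1 * V1 = P2 * V2 for a fixed amount of gas.
# The starting pressure and volumes below are made-up illustrative values.

def pressure_after_compression(p1_atm, v1_liters, v2_liters):
    """Return the new pressure when the gas is squeezed from v1 to v2 at constant temperature."""
    return p1_atm * v1_liters / v2_liters

p1, v1 = 1.0, 6.0          # 1 atmosphere, 6 liters
v2 = 2.0                   # compressed to 2 liters
print(pressure_after_compression(p1, v1, v2), "atm")  # 3.0 atm: one third the volume, triple the pressure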

The Beginnings of Modern Chemistry

For about two centuries after Boyle, scientists continued to make useful discoveries but made little progress in understanding the true nature of matter or chemical behavior. Perhaps the greatest source of confusion and defeat in these centuries was a theory of burning (combustion) called the phlogiston theory. It was originated by the German chemists Johann Joachim Becher and Georg Ernst Stahl in the late 1600s. According to this theory, phlogiston, an “essence” like yellowness or hardness in the theories of the ancient philosophers, escaped from substances during the burning process. By this time, chemists were learning to gain knowledge the modern way: by testing theories with experiments (see science). But such tests failed to confirm the existence of phlogiston.

The first clue to a more useful theory came when an English chemist, Joseph Priestley, discovered in 1774 that a gas (now known as oxygen) was essential to the burning process. (Oxygen was also discovered by the Swedish chemist Carl Wilhelm Scheele at about the same time.) A few years earlier another English scientist, Henry Cavendish, had identified hydrogen as an element. The French chemist Antoine-Laurent Lavoisier used the discoveries of Priestley and Cavendish in a series of experiments from which he formulated the presently accepted theory of combustion. He also showed that burning, the rusting of metals, and the breathing of animals are all processes in which oxygen combines chemically with other substances. Lavoisier’s most significant finding was that the products of a chemical reaction have the same total mass as the reactants, no matter how much the substances are changed. This means that, even when chemical changes take place, something essential stays the same. These contributions are often considered to mark the beginning of modern chemistry.


Dalton’s Theory of Atoms

This video helps explain how John Dalton developed the atomic theory of matter.

Encyclopædia Britannica, Inc.

Lavoisier’s results gave chemists their first sound understanding concerning the nature of chemical reactions. The next milestone was the atomic theory, advanced in 1805 by an English schoolteacher, John Dalton. This theory states that matter is made up of small particles called atoms, that each chemical element has its own kind of atoms (in contrast to earlier ideas that atoms are essentially alike), and that chemical changes take place between atoms or groups of atoms. To support his theory Dalton set about calculating the relative weights of the atoms of several elements. The Swedish chemist Jöns Jacob Berzelius greatly expanded this work in a long series of experiments in which he found accurate atomic weights for about 40 elements. He also found chemical formulas for most of the inorganic compounds known at that time.

Learn about Avogadro's number and the ideal gas law.

Encyclopædia Britannica, Inc.

Equipped at last with sound views about the nature of matter and of chemical reactions, chemistry made rapid advances. About 1811 came the hypothesis of Amedeo Avogadro, an Italian chemist, about the number of molecules in a given volume of gas. To Dalton’s theory that the atoms of a single element all have the same weight, Avogadro added the following notions: that equal volumes of different gases at the same temperature and pressure contain the same number of molecules, and that some of the gaseous elements are found in two-atom molecules rather than as independent atoms. These theories explained why only half a volume of oxygen was needed to combine with a volume of carbon monoxide (CO) to form carbon dioxide (CO2). Oxygen gas is made of two-atom molecules, and one oxygen molecule reacts with two molecules of carbon monoxide: O2 + 2CO → 2CO2.
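
Avogadro's hypothesis means that, for gases at the same temperature and pressure, volume ratios follow the molecule (mole) ratios in the equation. The Python sketch below uses that idea with the coefficients of the carbon monoxide reaction above; the starting volume is a made-up value used only for illustration.

# Avogadro's hypothesis: equal volumes of gases (same temperature and pressure)
# contain equal numbers of molecules, so gas volumes follow the equation's coefficients.
# Reaction: O2 + 2CO -> 2CO2   (coefficients 1 : 2 : 2)

co_volume_liters = 10.0                       # hypothetical starting volume of carbon monoxide
o2_needed = co_volume_liters * (1 / 2)        # 1 volume of O2 per 2 volumes of CO
co2_formed = co_volume_liters * (2 / 2)       # 2 volumes of CO2 per 2 volumes of CO

print(f"{o2_needed} L of oxygen reacts with {co_volume_liters} L of carbon monoxide")
print(f"to give {co2_formed} L of carbon dioxide (at the same temperature and pressure).")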

The Rise of Organic Chemistry

The vast new field of organic chemistry was opened in 1828 with Friedrich Wöhler’s synthesis of urea, a compound present in certain body fluids of mammals, from inorganic materials in his laboratory. This disproved the assumption that such compounds could be formed only through the operation of a “life force” present in animals and plants. About this time chemists also began to realize that a molecule’s geometric structure had a great effect on its chemical and biological properties. This applied especially to the structure of organic compounds, in which carbon atoms provide the basic framework. Chemists found, for example, that two organic compounds can have the same kinds and numbers of atoms but different properties because their molecules are put together differently. Such compounds are called isomers. The French chemist Louis Pasteur opened the way for the study of carbon’s special three-dimensional bonding geometry in his investigation of the isomers of tartaric acid, an organic compound found in grapes.

The German chemist Friedrich Kekulé showed in 1858 that carbon atoms can combine with four other atoms and link with each other to form long chains. He also proposed the cyclic (ring) structure of benzene about this time. Another important field, electrochemistry, was born in the 1830s when Michael Faraday formulated its laws.

More New Fields of Investigation

By the middle of the 19th century, about 60 elements were known. A few chemists had noticed that certain elements were much alike in their properties and saw a pattern emerge when elements were arranged according to atomic weight. The work of these men enabled the Russian chemist Dmitri Mendeleev, in 1869, to publish the first periodic table. Mendeleev predicted correctly that the gaps left in his table would be filled by as yet undiscovered elements with properties that he also predicted. This table became the foundation of theoretical chemistry.

To this period belongs Robert Bunsen, whose most important contribution to chemistry was the organization of the field of spectroscopy, based on observations that each element when heated emits light having a characteristic color, or wavelength. To aid his research, he invented many instruments, including the spectroscope and the Bunsen burner.

In the second half of the 19th century, Pasteur, who had done his research on isomers as a young man, became interested in how yeasts and bacteria are involved in chemical reactions and in causing illness. He invented the process of pasteurization to prevent foods and beverages from spoiling, and he proved that microorganisms are responsible for the chemical changes called fermentation in beer and wine and for several diseases of humans and other animals. He was also the first to use vaccines against anthrax, chicken cholera, and rabies. His achievements helped lay the groundwork for both biochemistry and microbiology.

The Development of Physical Chemistry

The area where physics and chemistry overlap, which had been explored by Avogadro in his investigations of gas volumes, became prominent in the last quarter of the 19th century. Wilhelm Ostwald and Jacobus van’t Hoff used physical principles to describe the energy changes accompanying reactions in solution that go on simultaneously in opposite directions (reactants to products and products to reactants). Svante August Arrhenius proposed the idea that compounds such as acids and bases form ions (electrically charged particles) when they dissolve in water, and that these ions allow the solutions to conduct electricity.

The American chemist Willard Gibbs developed the “phase rule,” a mathematical formula showing how temperature and pressure affect the capacity of substances to exist in equilibrium in as many as three different states of matter, or phases (solid, liquid, and gas). Accurate atomic weight determinations were the work of Theodore Richards. Between 1894 and 1898, the efforts of the chemist William Ramsay and the physicist Lord Rayleigh (John William Strutt) resulted in the isolation of helium on Earth (the element previously had been detected in the sun) and the discovery of the other noble gases.
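
Gibbs's phase rule is usually written F = C − P + 2, where C is the number of components, P is the number of phases in equilibrium, and F is the degrees of freedom (how many variables such as temperature and pressure can be varied independently). The formula itself is not stated in the article; the minimal Python sketch below applies it to water as an illustration.

# Gibbs phase rule: F = C - P + 2
# C = number of chemical components, P = number of phases in equilibrium,
# F = degrees of freedom (independently adjustable variables such as temperature and pressure).

def degrees_of_freedom(components, phases):
    return components - phases + 2

print(degrees_of_freedom(1, 1))  # 2: liquid water alone -- temperature and pressure can both vary
print(degrees_of_freedom(1, 2))  # 1: water boiling -- fixing the temperature fixes the pressure
print(degrees_of_freedom(1, 3))  # 0: the triple point -- only one temperature and pressure possible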


After 1900, chemists began receiving invaluable aid from discoveries made in physics about the electrical nature of the atom. Henry Moseley, working with X-rays emitted from atoms of the different elements, reorganized the elements using atomic number—a quantity equal to the positive charge of the atomic nucleus—rather than atomic weight. Max von Laue and Sir William Bragg and his son Lawrence Bragg laid the basis for determining the atomic structure of substances in crystal form by means of X-rays. Francis W. Aston developed the mass spectrograph, a device that separates atoms or molecular fragments of different mass, and used it to discover isotopes of many elements. Later Harold C. Urey isolated deuterium, an isotope of hydrogen with one neutron. Deuterium has become important as a chemical tracer, in thermonuclear weapons, and in fusion power research. (See also nuclear energy; radioactivity.)

Nuclear Chemistry and Atomic Structure

In 1896 Henri Becquerel, with Marie and Pierre Curie, discovered the phenomenon of radioactivity. This showed other scientists that atoms were not permanent and changeless, and it laid the basis for nuclear chemistry and nuclear physics. Having found that atoms sometimes transmuted into other elements on their own, scientists attempted to do the same in the laboratory. In 1919 Ernest Rutherford became the first to succeed, using natural radioactivity to transmute nitrogen atoms into atoms of oxygen and hydrogen. In 1934 Frédéric and Irène Joliot-Curie made radioactive isotopes of elements that were not normally radioactive. Five years later Otto Hahn, Fritz Strassmann, and Lise Meitner discovered that the uranium nucleus could be made to fission, or split into the nuclei of lighter elements, by bombarding it with uncharged particles called neutrons.

By the early 1940s nuclear reactions had been used to make radioactive isotopes of all elements; Glenn Seaborg contributed much to this work. In the 1940s and ’50s Seaborg and his co-workers also made several elements not known to exist in nature. The new elements had atomic numbers greater than 92, the atomic number of uranium. By the early 21st century nuclear scientists were adding new elements to the periodic table with atomic numbers higher than 110.

Rutherford’s discovery in 1911 that the atom has a tiny massive nucleus at its center allowed chemists and physicists such as Gilbert Lewis, Irving Langmuir, and Niels Bohr over the next 20 years to explain chemical bonding and atomic structure in terms of the behavior of electrons orbiting the nucleus. In the late 1920s and early 1930s, Linus Pauling contributed much to knowledge of the nature of the chemical bond and of the relationship between the structure of atoms and molecules and their properties.

New Synthetic Materials

Some of the most notable achievements in modern chemistry have come from efforts to create whole new classes of materials. Early plastics such as celluloid, invented in the late 1860s, relied on large molecules found in natural substances. In 1909, however, the Belgian-born inventor Leo H. Baekeland took out a United States patent for a hard, chemically resistant, electrically nonconductive plastic that he called Bakelite. Made from the chemical combination of synthetic compounds called formaldehydes and phenols, Bakelite proved to be exceptionally useful as an electrical insulator and as a structural material for such consumer goods as radio cabinets, telephone housings, and even jewelry.

The commercial success of Bakelite sparked great interest and investment in the plastics industry, in the study of coal-tar products and other organic compounds, and in the theoretical understanding of complex molecules. These research activities led not only to new dyes, drugs, and detergents but also to the successful manipulation of molecules to produce dozens of materials with particular qualities such as hardness, flexibility, or transparency.


Techniques were developed, often requiring catalysts and elaborate equipment, to make these polymers—that is, complex molecules built up from simpler structures. From the field of polymer chemistry, the synthetic rubber and synthetic fiber industries have grown. Synthetic fibers are used in fabrics, carpets, rope, and brush bristles and for producing synthetic rubber. (See also coal-tar product; man-made fiber; plastics; natural and synthetic rubber.)

Another dramatic result of the growth in chemical knowledge has been the expansion of the modern pharmaceutical industry. Notable early achievements include the development of the synthetic drugs acetylsalicylic acid (aspirin) in 1897, Salvarsan (for treating the bacterial disease syphilis) in 1910, and Prontosil (the first sulfa drug for treating bacterial infections) in 1932, as well as the discovery of the antibiotic penicillin (produced naturally by a mold) in 1928.

Since the late 20th century the rapid growth in the understanding of chemical processes in general, and of organic and biochemical reactions in particular, has revolutionized the treatment of disease. Most drugs available today do not occur naturally but are made in the laboratory from elements and inorganic and organic compounds. Others are derived from animals, plants, microorganisms, and minerals by pharmaceutical researchers who often use chemical reactions to modify molecular structures in order to make drugs that are more effective and have fewer harmful side effects.

The Merging of Chemistry and Other Disciplines

Graphene is a form of carbon that can form an almost perfect barrier against many substances and…

© Institute of Technology

Technological advances in chemistry, physics, and biology in the past few decades have resulted in the breakdown of distinctions between these formerly separate disciplines. Biochemists and biophysicists, for example, have made great advances in understanding photosynthesis, the chemical process by which plants use the energy in sunlight to make food, and in unraveling the role of the complex molecules DNA and RNA in heredity and the synthesis of proteins. Biochemists and molecular biologists have also learned how to transfer DNA from one kind of organism into another kind and modify the code that it carries for guiding the synthesis of proteins. Microorganisms, plants, and animals that have been given segments of human DNA are employed as living “drug factories” to make large quantities of valuable human proteins such as insulin. Researchers are also investigating how to insert DNA molecules into human cells or to chemically modify the DNA present in order to cure diseases in people who have missing or defective DNA.

Some Chemical Terms

atom. The smallest part of an element that retains the element’s properties. An atom cannot be broken down by ordinary chemical means.

chemical bond. Any of the forces involving electrons that hold atoms together in molecules, crystals, and other stable forms of matter.

compound. A substance containing two or more kinds of atoms in chemical combination. The atoms in a compound can be separated by chemical means.

Debye-Hückel equation. A mathematical expression derived to elucidate certain properties of solutions of electrolytes, that is, substances present in the solutions in the form of charged particles (ions). Such solutions often behave as if the number of dissolved particles were greater or less than the number actually present; the Debye-Hückel equation takes into account the interactions between the various ions, which are the principal cause of the discrepancies between the properties of dilute solutions of electrolytes and those of so-called ideal solutions.

element. A substance composed entirely of atoms of one kind. It cannot be broken down chemically.

Henry’s law. A statement that the weight of a gas dissolved by a liquid is proportional to the pressure of the gas upon the liquid. The law, which was first formulated in 1803 by the English physician and chemist William Henry, holds only for dilute solutions and low gas pressures.

mass. The quantity of matter in an object or body.

matter. Anything that has mass and takes up space.

mixture. A substance composed of two or more elements or compounds, each of which retains its individual properties.

molecule. The smallest part of a compound that retains that compound’s properties. Also, two or more atoms of the same element that are chemically combined. The atoms in a molecule can be separated by chemical means.

physical change. A change in form (solid, liquid, gas, or plasma) but not in chemical composition.

properties. The characteristics of a substance. The physical properties (mass, melting point, hardness, etc.) can be measured and expressed in numbers. The chemical properties (valence, ionization, etc.) determine how a substance reacts chemically.

reaction (or chemical change). A change in the chemical composition of a substance.

Some Fields of Chemistry

The University of California Berkeley Center for Green Chemistry is transforming the way chemicals…

Displayed by permission of The Regents of the University of California. All rights reserved.

The broad divisions such as inorganic, organic, physical, and analytical chemistry emerged early in the history of the subject. Fields such as polymer and environmental chemistry developed during the 20th century, and new ones continue to appear. Major fields of specialization include those below.

analytical chemistry. A field concerned with determining the identity (kind) and quantity of each element or compound present in a sample. Qualitative analysis is concerned with identifying the kinds of elements or compounds in a sample. Quantitative analysis indicates the amounts of the elements present. The tools and techniques of analytical chemistry include the spectroscope, the mass spectrograph, X-ray bombardment, ultraviolet fluorescence, and radioisotopes.

biochemistry. The study of the chemical composition of living matter and of the chemical processes that occur in living organisms. This field is particularly important in agriculture, biology, pharmacology, medicine, and dentistry.

chemical engineering. A combination of chemistry and engineering that develops or improves industrial processes for making commercial amounts of desirable chemicals that have been produced only in small quantities or in the laboratory.

colloid chemistry. The study of the behavior of particles of matter that are larger than ordinary molecules but much too small to be seen with the unaided eye. Particles in this size range have many unique properties. Colloid chemistry is concerned with substances such as rubber, plastics, fabrics, chromosomes, and gases. Tools used in this field include the ultracentrifuge, the ultramicroscope, and the electron microscope.

electrochemistry. The study of chemical reactions that are produced by or that produce an electric current. Also studied are the electrical conductivity of solutions and the phenomena that occur at electrodes. Electrochemistry provides methods for chemical analysis and the production of chemicals by electrical means.

environmental chemistry. The study of the natural chemical processes that occur in Earth’s water, soil, and atmosphere; their relationship to living things; and the ways human beings can affect them. Chemists in this field often apply their knowledge to monitoring and controlling the effects of human activities that may harm the environment.

femtochemistry. Egyptian-born chemist Ahmed H. Zewail developed a rapid laser technique that enabled scientists to study the action of atoms during chemical reactions. The breakthrough created a new field of physical chemistry.

green chemistry. Also called sustainable chemistry, the emerging field of green chemistry seeks to prevent or reduce pollution through the design of chemical products and processes that minimize the use of hazardous substances.

inorganic chemistry. The study of compounds that (with a few exceptions) do not contain carbon and the elements from which they are formed.

nuclear chemistry. The study of radioactivity, the atomic nucleus, and nuclear reactions, and the development of applications for radioactive isotopes in medicine and industry.

organic chemistry. The study of carbon and its compounds. Carbon compounds account for the great majority of all known compounds. Among compounds studied by organic chemists are those found in plant and animal tissues, petroleum, carbohydrates, proteins, plastics, and rubber.

physical chemistry. The application of physical methods to the study of chemical problems. Included in this field are atomic and molecular structure; theory of reaction rates; mechanism of reactions; chemical equilibriums; energy changes in reactions; theories of solids, liquids, gases, plasmas, and solutions; electrochemistry; and radioactivity.

polymer chemistry. The study of very large molecules, called macromolecules, that are formed by linking many simpler chemical units into chains and networks. Polymer chemists are interested in both natural and synthetic materials formed of macromolecules, which include proteins, cellulose, and nucleic acids; mineral crystals; and materials such as glass, plastics, and rubbers.

stereochemistry. The study of the three-dimensional arrangements of atoms within molecules and how the physical and chemical properties of substances are affected by differences in those arrangements. Two important concepts are chirality and stereoisomerism. A molecule is chiral if it cannot be superimposed on its mirror image; such a molecule has left- and right-handed forms like a pair of gloves. Stereoisomers are molecules with the same atom-to-atom bonding but different arrangements in space.

Alfred B. Garrett and Eds.

Additional Reading

Breed, Allen, and others. Through the Molecular Maze (Bioventure Associates, 1992).

Evernden, Margery. The Experimenters: Twelve Great Chemists (Avisson, 2001).

Faraday, Michael. Experimental Researches in Chemistry and Physics (Taylor and Francis, 1991).

Feigl, Dorothy, and others. Foundations of Life, 3rd ed. (Macmillan, 1991).

Gardner, Robert. Science Project Ideas About Kitchen Chemistry, rev. ed. (Enslow, 2002).

Hill, J.W., and others. Chemistry and Life: An Introduction to General, Organic, and Biological Chemistry, 6th ed. (Prentice, 2000).

Keinan, Ehud, and Schechter, Israel. Chemistry for the 21st Century (Wiley-VCH, 2001).

McQuarrie, D.A., and Rock, P.A. General Chemistry, 3rd ed. (W.H. Freeman, 1991).

Malone, L.J. Basic Concepts of Chemistry, 7th ed. (Wiley, 2003).

Miller, G.T., and Lygre, D.G. Chemistry: A Contemporary Approach (Wadsworth, 1991).

Newton, D.E. Consumer Chemistry Projects for Young Scientists (Watts, 1991).

Richards, Jon. Chemicals and Reactions (PowerKids, 2008).

Wertheim, Jane, and others. The Usborne Illustrated Dictionary of Chemistry (EDC, 2006).

Wood, R.W. Science for Kids: 39 Easy Chemistry Experiments (Tab Books, 1991).


Citation (MLA style):

"Chemistry." Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 13 Mar. 2020. packs-preview.eb. com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

fermentation

A chemical change in animal and vegetable matter brought about by microscopic yeasts, bacteria, and molds is called fermentation. Examples of fermentation are the souring of milk, the rising of bread dough, and the conversion of sugars and starches to alcohol. Many industrial chemicals and a number of drugs used in modern medicine are produced by fermentation under controlled conditions.

Scientist Max Li develops new biofuels—forms of renewable energy from plant or animal materials—at…

PRNewsFoto/DuPont/AP Images

The result of fermentation is usually that a substance is broken up into simpler compounds. In some cases fermentation is used to change a material in a way that would be difficult or very costly if ordinary chemical methods were chosen. Fermentation is always initiated by enzymes formed in the cells of living organisms. An enzyme is a natural catalyst that brings about a chemical change without being affected itself.


Controlled spoilage has played a role in the development of many foods and drinks.

© MinuteEarth

Common yeast is a fungus composed of tiny plantlike cells related to the bacteria. Its enzymes invertase and zymase break up sugar into alcohol and carbon dioxide. They leaven bread and change grape juice to wine. Bacteria sour milk by producing lactic and butyric acids. Cells of the human body produce digestive enzymes, such as pepsin and rennin, which break down food into a soluble form.
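
The overall change that invertase and zymase bring about is often summarized by the standard alcoholic fermentation equation shown below. This textbook summary is added here for illustration and does not appear in the original article.

Word description: One molecule of glucose (a simple sugar) is broken down into two molecules of ethyl alcohol and two molecules of carbon dioxide.

Chemical equation: C6H12O6 → 2C2H5OH + 2CO2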

The products of fermentation have been used since earliest times. Cave dwellers discovered that aged meat has a more pleasing flavor than freshly killed meat. Wine, beer, and leavened bread are as old as agriculture. Cheese, which involves the fermentation of milk or cream, is another ancient food. The medicinal value of fermented products has been known for a long time. The Chinese used moldy soybean curd to cure skin infections 3,000 years ago. Central American Indians treated infected wounds with fungi.

The true cause of fermentation, however, was not understood until the 19th century. The French scientist Louis Pasteur, while studying problems of the brewers and winegrowers of France, found that one kind of yeast produces good wine, but a second kind causes it to become sour. This discovery led to Pasteur’s germ theory of fermentation.

Fermentation chemistry is a new science. It is the basis of manufacturing processes that convert raw materials— such as grains, sugars, and industrial by-products—into many different synthetic products. Carefully selected strains of yeasts, bacteria, and molds are used.

Penicillin is an antibiotic that destroys many disease-causing bacteria. It is derived from a mold that grows in a fermenting mixture of substances carefully selected for this purpose. Manufacturing penicillin and many other antibiotics has become an important branch of the drug industry.

Citric acid is one of many chemicals produced by microorganisms. It is used in metal cleaners and as a preservative and flavoring agent in foods. Citric acid is responsible for the tart flavor of citrus fruits. It could be obtained from them, but it would take many thousands of fruit trees to produce the amount of citric acid now made by the fermentation of molasses with the mold Aspergillus niger.

A fermentation product, Terramycin, is added to cattle and poultry feed to speed the animals’ growth and to protect them from disease. Certain vitamins are made by mold fermentation; and enzymes themselves, extracted from various microorganisms, have many uses in the manufacture of foods and drugs. (See also alcohol; bacteria; beer and brewing; enzymes; liquor industry; wine and winemaking; yeast.)

Citation (MLA style):

"Fermentation." Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 19 Feb. 2021. packs-preview. eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

medicine

Britannica Note:

Content on this topic is available in the section "History of Medicine in Brief: 19th-Century Medicine."

The practice of medicine—the science and art of preventing, alleviating, and curing disease—is one of the oldest professional callings. Since ancient times, healers with varying degrees of knowledge and skills have sought to restore the health or relieve the distress of the sick and injured. Often, that meant doing little more than offering sympathy to the patient while nature took its course. Today, however, practitioners of medicine have several millennia of medical advances on which to base their care of patients.

In a physician's office, a routine checkup includes an examination of the mouth, throat, and…

© Tom Wang/Fotolia

Physicians and Their Training

A doctor in India gives drops of polio vaccine to a child.

Drawat123

In most developed countries, physicians are the key health care providers. Through intensive schooling and training, they become highly skilled. Those who want to practice medicine must meet educational, professional, and ethical standards before their state, province, or country awards them a license to treat patients. Approaches to medical education, licensing, and practice vary throughout the world. This article focuses on physicians’ training and practice within the United States.

Medical School

A physical therapist assists a senior citizen with rehabilitation exercises at a physical therapy…

© aldomurillo—E+/Getty Images

In 2008 there were 129 accredited medical schools granting doctor of medicine (M.D.) degrees and 25 colleges granting doctor of osteopathic medicine (D.O.) degrees in the United States. M.D.s, also known as allopathic physicians, treat disease mainly with drugs, surgery, and other well-established remedies. D.O.s, like their M.D. colleagues, are fully licensed medical physicians and surgeons who practice in most clinical specialties. Osteopathic medicine emphasizes the interrelationships of bodily structures and functions, and its practitioners receive special training in manipulative treatments of the musculoskeletal system.

All medical schools require students to take the Medical College Admissions Test (MCAT), a four-and-one-half-hour, computer-based exam that assesses problem-solving ability, critical thinking, writing skills, and knowledge of scientific concepts. In addition to scoring well on the MCAT, medical school applicants generally must have completed undergraduate courses in biology, physics, chemistry (including organic chemistry), and English and earned a bachelor’s degree.

In 2007 about 42,000 students applied to allopathic medical schools; just under 18,000 were accepted. Approximately 12,000 students applied for some 4,400 spots in osteopathic schools. Medical-school opportunities for women improved significantly in the late 20th century. In 2006, 48.8 percent of allopathic medical school graduates were women (up from 5.5 percent in 1962), and women constituted 47.5 percent of about 2,800 osteopathic graduates in 2006 (up from less than 2 percent of 427 total graduates in 1968).

Although programs aimed at increasing the diversity of the medical school student body have made some inroads, enrollment of minorities other than Asians remains far from optimal. In 2006, approximately 7.2 percent of students entering allopathic schools were black, 7.5 percent Hispanic, and 20 percent Asian; the percentages of osteopathic students were 4 percent, 4 percent, and 18 percent, respectively. Because racial and ethnic minorities would constitute half of the U.S. population by 2050, and research had shown that greater diversity within the physician workforce contributes to greater access to health care for the underserved, the Association of American Medical Colleges (AAMC) and the American Medical Association (AMA) were committed to increasing the number of minority physicians and raising the general cultural competence of all physicians.

During the first two years of a typical four-year curriculum, medical students concentrate on the sciences basic to medicine—anatomy, biochemistry, physiology, microbiology, pathology, pharmacology, and the behavioral sciences. They learn about normal structure and function of the human body and how those are altered by disease. They gain firsthand knowledge of the intricate human anatomy by dissecting dead bodies (cadavers). First- and second-year students also spend some of their time in a hospital observing experienced doctors and learning the fundamentals of taking a medical history and examining patients. In the third year, they gain extensive experience with patients in hospital, clinic, and office settings in internal medicine, family medicine, pediatrics, obstetrics and gynecology, surgery, and psychiatry. Fourth-year students spend most of their time in a clinical setting; they also take elective courses. The osteopathic curriculum is virtually the same as the allopathic, except that osteopathic students also learn muscle and joint manipulation.

Medical education is expensive. In 2007 the average cost of tuition, fees, and health insurance for a first-year student at a private allopathic medical school was about $40,000 and for a state resident attending a tax-supported public medical school about $22,000. Annual tuition and fees for D.O. programs ranged from $20,000 to $35,000. According to AAMC statistics, approximately 85 percent of M.D.s were graduating with significant debt (between $120,000 and $160,000). Even though physicians’ eventual earnings are among the highest of any occupation, their debt was increasing at a much higher annual rate than their incomes. A 2007 AAMC report concluded that without drastic changes, significant numbers of young physicians could be faced with unmanageable debt.

Graduate Medical Education

Newly graduated doctors must complete a residency in the specialty they have chosen before they can be licensed to practice. Most find a residency through the computer-coordinated National Resident Matching Program (NRMP). In 2008 about 50,000 applicants sought about 25,000 residency positions through the match.

The residency served by a new doctor is a fast-paced medical apprenticeship. For M.D.s, this paid, on-the-job training lasts from three to seven years, depending on the specialty. Residencies in the surgical specialties are the longest—five or more years. Those in internal medicine, family medicine, pediatrics, and psychiatry are generally three years. Doctors who choose to specialize further enter post-residency fellowships (the NRMP conducts matches for fellowship positions in 34 subspecialties).

Most graduates of osteopathic colleges serve a 12-month internship followed by a two- to six-year residency in one of nearly 700 American Osteopathic Association (AOA)–approved programs. Graduates of medical schools outside the United States and Canada (both U.S. citizens and non–U.S. citizens) may apply for residencies after they have met certain qualifications and been certified by the Educational Commission for Foreign Medical Graduates. Between 2004 and 2008 about 4,300 international medical graduates per year found residencies through the NRMP.

Licensing of Physicians

Each of the 50 states, the District of Columbia, and the U.S. territories has its own requirements for obtaining a medical license, its own board that grants the license, and its own laws governing the practice of medicine. Because of the wide variations in the licensing process among states, few generalizations can be made about it. Since 1994, however, all states have required M.D.s to pass the United States Medical Licensing Examination (USMLE). The first two steps of the three-step test are typically taken during the second and fourth years of medical school and the third step during residency. Most state licensing boards for osteopaths require applicants to pass the Comprehensive Osteopathic Medical Licensing Examination, a three-part test comparable to the USMLE.

Certification of Specialists

Medical specialists are not certified by their state but by the profession itself. Each major allopathic specialty has its own certification board. The American Board of Medical Specialties (ABMS) oversees 24 of those boards and assists them in developing standards to certify doctors in 146 specialties and subspecialties. Certification by an ABMS member board involves rigorous testing and peer evaluation; about 85 percent of licensed M.D.s are certified by at least one ABMS board. D.O.s gain certification from one of 18 specialty boards overseen by the AOA.

What Doctors Do

Diagnosis

Physicians do several things in the course of serving a patient. The first is to make a diagnosis—that is, identify the exact nature of the problem. They obtain a medical history by asking pertinent questions about a patient’s current and past health and illnesses, personal habits, lifestyle, family, and job or school. The history will indicate how and when the patient became sick and whether or not he or she ever previously had the same or a similar condition.

The history will reveal symptoms (abnormal body changes that patients detect) such as pain, fatigue, loss of appetite, and nausea. These are clues that help doctors decide what signs (abnormal body changes that physicians detect) they should look for. By simple inspection doctors may observe a rash or swelling of the ankles. They may discover a cyst or tumor by palpation (feeling the body tissue); detect a heart murmur by listening to the chest with a stethoscope (auscultation); or determine the presence of infection in the body through blood chemistry studies. A case of appendicitis would be diagnosed on the basis of a patient’s complaints of abdominal pain, appetite loss, nausea, and vomiting (symptoms), which would prompt the physician to check for fever, tenderness in the lower right portion of the abdomen, and a high number of white cells in the blood (signs), which together would lead to a diagnosis of probable appendicitis.

Treatment

A well-trained physician will select the best forms of therapy that will correct a particular illness in a given patient. The clear choice for the patient with appendicitis would be surgery to remove the appendix. A delay in surgery could result in rupture of the appendix and serious infection. Because of the risk of infection during and after the operation, the physician is also likely to prescribe antibiotic therapy. Increasingly doctors are practicing “evidence-based medicine”—an approach that involves conscientious use of results from well-controlled clinical trials in making treatment decisions. Proponents of evidence-based medicine consider it the most objective way to ensure that patients receive optimal care. In recent years, many specialties have issued detailed “clinical practice guidelines” based on the best evidence available. These compilations enable clinicians to readily choose treatments backed by solid science.

Before a physician proceeds with a treatment, he or she must (by law) convey the details of the planned procedure (including all the risks) to the patient in language the patient comprehends. The patient then must sign an “informed consent” document acknowledging that he or she is going ahead with the treatment with full awareness of the potential outcomes.

Prognosis

The prognosis is the doctor’s prediction of the probable outcome of a case based on his or her knowledge of the disease, previous experiences treating it, and the health status of the patient. When appendicitis strikes an otherwise healthy child, the doctor can usually make an optimistic prognosis. If, however, an obese adult with high blood pressure develops appendicitis that is not diagnosed promptly, the potential for rupture of the appendix and infection is high. Moreover, the patient’s excess weight and elevated blood pressure increase the risk of surgical complications. In this case the prognosis would be uncertain, as the doctor would not be able to predict how the patient would endure surgery, how extensive infection might be, or how long the healing process would take.

Types of Practice

Some doctors choose to go into private practice in which they set up individual offices and cater to the medical needs of as many people as they can. Private practitioners are paid on a fee-for-service basis (patients’ health insurance reimburses them for each service provided). Such solo practices give physicians a great deal of independence and allow close rapport with patients. At the same time, they tend to work extremely long hours and are virtually always “on call.”

More commonly, recently licensed doctors enter a group practice in which they share support staff, patients, and certain medical responsibilities with other doctors. Group practices tend to be able to afford expensive equipment that private practitioners generally would not have. Doctors in a group may be paid on a traditional fee-for-service basis, or they may be salaried.

New doctors may take a salaried position within a health maintenance organization (HMO). HMOs provide medical treatment on a prepaid basis; enrolled patients pay a fixed monthly fee regardless of whether they need medical care that month. HMO members must choose a primary care physician, who acts as the “gatekeeper” for the organization by determining what tests and treatments patients do and do not need and when they should see a specialist. For doctors, the advantages of such a practice include regular hours of work, steady income, availability of sophisticated diagnostic equipment, and easy access to group consultation.

Levels of Care and Coverage

In the United States and many other countries physicians provide three levels of care. The primary level is the broadest and meets the complete medical needs of the vast majority of people. Primary care is most often provided by general practitioners such as family practice doctors, internists, or pediatricians, who are patients’ first contact with the medical system. Secondary care is provided by a physician who acts as a consultant at the request of the primary physician; the provider of this level of care might be a radiologist, a cardiologist, urologist, or dermatologist. The third level, tertiary care, is usually based at a teaching hospital or center that has personnel, facilities, and technology for special investigations and treatments; comprehensive cancer care is an example.

The levels of care patients receive depend on their insurance coverage. Until recently, the United States was the only industrialized country in the world without some form of universal health care (coverage that extends to all citizens). In 2007, 84.7 percent of the population was covered by private or government insurance; the rest either had no health insurance or were underinsured. The latter group included middle- and higher-income people, half of whom went without needed care due to the cost. To address these circumstances, U.S. President Barack Obama made health care reform a top priority of his administration. The historic Patient Protection and Affordable Care Act, signed into law in 2010, was designed to extend health care to most uninsured Americans.

Too Few Physicians?

From the late 1970s to the mid-1990s, several expert panels forecast a glut of physicians in the United States in the new millennium. Consequently, medical schools capped enrollment, and Congress limited its funding of medical residencies. The experts, however, miscalculated. Between 2000 and 2007, 15 states and 16 medical specialty boards issued reports detailing how looming or already-evident physician shortfalls would affect them. Three national reports projected that by 2020 there would be as many as 200,000 too few physicians to meet the needs of the aging population. To remedy the shortfall, medical schools began increasing enrollment. In 2007 allopathic medical schools admitted 17,800 students, the largest class ever, while first-year enrollment in osteopathic medical schools reached nearly 4,500 students (up from just under 3,000 students in 2000).

In addition to the existing and anticipated physician shortages, there was also a serious misdistribution of physicians. The ratios of physicians to population were highest in the New England and the Middle Atlantic states (Massachusetts had 443 doctors per 100,000 population) and lowest in the South Central and Mountain states (Mississippi had 182 doctors per 100,000). In 2005 nearly 60 million Americans lived in government-designated health professional shortage areas (HPSAs). The National Health Service Corps (NHSC), now a part of the Health Resources and Services Administration, was created in 1970 to address the physician distribution problem. NHSC offers primary care physicians incentives such as help with repayment of loans to serve in HPSAs. From its inception through 2007, 27,000 NHSC-recruited doctors had served up to five million people who were uninsured or underinsured or faced language, geographic, or cultural barriers to care.

Changing Face of Medicine

Americans were not receiving their medical care from allopathic and osteopathic physicians only; since the early 1990s, surveys had shown that 30 percent to 60 percent of adult Americans were using complementary and alternative medicine (CAM). Complementary treatments are used in conjunction with conventional medicine; alternative treatments are used in place of it. Many of the unconventional therapies had gained such wide acceptance that, according to an editorial in the Journal of the American Medical Association in 2008, they had become “part of the conventional medicine armamentarium.”

A man measures ingredients in a traditional Chinese pharmacy. Traditional Chinese medicine uses an…

© Dragon Images/Shutterstock.com

The number of practitioners of CAM is unknown, and the quality and quantity of their training varies widely. They include chiropractors, naprapaths, massage therapists, Chinese herbalists, energy healers, homeopaths, acupuncturists, yoga therapists, and others; they treat back problems, allergies, arthritis, insomnia, headaches, high blood pressure, depression, and even cancer. In 1992 the U.S. National Institutes of Health, the country’s premier medical research establishment, created the Office of Alternative Medicine to study therapies outside the realm of conventional medicine. The office was expanded to the National Center for Complementary and Alternative Medicine (NCCAM) in 1998. In its first decade NCCAM sponsored more than 1,200 clinical trials assessing the validity of widely used but unproven therapies.

The Internet and other communications technologies have profoundly altered the practice of medicine. They allow rapid dissemination of information, which has the potential to increase physicians’ efficiency in caring for patients as well as enhance the quality of care provided. In the burgeoning field of telemedicine, physicians provide consultative services and exchange medical information via electronic communications. Doctors can send live video and high-resolution images to distant locations and examine patients who are hundreds or thousands of miles away. Telemedicine has the potential to alleviate regional inequalities in medicine and bring the advantages of urban medical centers to those with little or no access to care.

The Internet has also empowered patients, who have virtually unlimited access to medical information. While many sites provide valuable information, others may disseminate unreliable or misleading information. The National Institutes of Health recommends that consumers ask questions about the sites they visit: Who runs the site? Who pays for it? What is the basis of the information? How current is it? The National Library of Medicine offers MEDLINEplus, a resource that links the general public to sites that have been carefully reviewed.

History of Medicine in Brief

A surgeon performs a trepanning, or trephining, procedure on a patient by drilling into his skull.…

Wellcome Collection, London (CC BY 4.0)

Evidence of attempts to care for the sick and injured predates written records. Skulls found in Europe and South America dating as far back as 10,000 BC have shown that the practice of trepanning, or trephining (removal of a portion of the skull bone), was not uncommon. This operation, performed by many early peoples, including American Indians, was probably done to release evil spirits that were thought to be the source of illness; yet, in many cases, it proved to be the medically correct thing to do. Opening the skull can relieve pressure and pain caused by brain tumors and head injuries.

Indeed, much of early medicine was closely identified with pagan religions and superstitions. Illness was attributed to angry gods or evil spirits; prayers, incantations, and other rituals were used to appease the gods or ward off demons—and thereby drive off disease. Nonetheless, the ancients did not entirely lack valid medical knowledge. In fact, through observation and experience, they acquired considerable wisdom about sickness and its prevention and relief. (See also folk medicine.)

The book of Leviticus in the Old Testament described regulations and sanitary practices that were used to prevent the spread of leprosy and plague. The ancient Romans realized the importance of sanitation to health and built sewers, systems that drained waste water from public baths, and aqueducts that provided clean water.

Egyptian Medicine

The ancient Egyptians were among the first to use certain herbs and drugs, including castor oil, senna, and opium. They also set and splinted fractured bones using techniques remarkably similar to those of modern medicine. Egyptians were reputed to be skilled diagnosticians; a medical papyrus from about 1600 BC, which is believed to be a copy of a text from about 3000 BC, described 58 sick patients, of whom 42 were given specific diagnoses. Although the Egyptians practiced mummification, which involved removing and dehydrating most of the internal organs of the dead, they apparently did not study those organs, as their anatomical knowledge was quite limited.

Greek and Roman Medicine

Hippocrates (460–375 BC), known as the “father of Western medicine,” was an admired physician and teacher who rejected the notion that disease was punishment sent by the gods; rather, he believed it had natural causes. Hippocrates put forth a doctrine that attributed health and disease to four bodily humors, or fluids—blood, black bile, yellow bile, and phlegm. He believed that the humors were well balanced in a healthy person, but various disturbances or imbalances in them caused disease. At that time, his humoral theory seemed highly scientific. In fact, doctors diagnosed and treated illnesses based on the four humors well into the 19th century.

Knowing that he could not cure most diseases, Hippocrates tended to recommend conservative measures such as exercise, rest, and proper diet. By contrast, for fever, which he thought was caused by an excess of blood in the body, he recommended the drastic measure of bloodletting. The practice of bloodletting (or bleeding), which was thought to have many therapeutic effects, was used for more than two thousand years and undoubtedly hastened the deaths of countless patients who might otherwise have recovered.

Hippocrates is best known today for his ethical code (Hippocratic Oath), which continues to be used by the medical profession as a guide to appropriate conduct. The oath is a pledge doctors make to always use their knowledge and best judgment for the benefit of their patients and to never harm or injure those in their care.

For a brief period after Hippocrates’ death, two Greek physician-scholars living in Alexandria, Herophilus and Erasistratus, performed the first known systematic dissections of human bodies. They dissected virtually every organ, including the brain, and recorded what they learned. Despite their dedication to the science of anatomy, these pioneers had little influence on the subsequent practice of medicine. By 150 BC, dissection of human cadavers was banned throughout the Hellenistic world, and any writings they left behind were lost when Alexandria’s library was destroyed in the 3rd century AD.

One of those trained in Hippocratic medicine was Galen (129–216? AD), a Greek who traveled widely and became the most renowned physician in Rome. Although Galen accepted and embellished the four-humors doctrine, he also made important discoveries. He performed systematic experiments on animals (including apes, monkeys, dogs, pigs, snakes, and lions), which involved both dissection and vivisection (live dissection). He treated gladiators and took advantage of the opportunity to study the internal organs and muscles of the wounded. Galen recognized connections between bodily structures and functions; for example, he demonstrated that a severed spinal cord led to paralysis. He recognized that the heart circulated blood through the arteries but did not understand that it circulated in only one direction. Galen produced a prodigious body of medical scholarship that was adhered to by medical practitioners for 1,600 years. Unfortunately, his erroneous beliefs as well as his accurate insights were perpetuated.

Arabian Medicine

After the breakup of the Roman Empire, the tradition of Greek medicine continued in the universities of the Arab world. The Persian physician Rhazes, or Al-Razi (865?–923?), is credited with being the first to distinguish between the highly contagious viral diseases smallpox and measles. He also recognized the need for sanitation in hospitals. Probably the most important physician at the beginning of the second millennium was Avicenna (Ibn Sina). His monumental Canon of Medicine, a five-volume encyclopedia of case histories and therapeutic instructions, was long considered an absolute medical authority in both Eastern and Western traditions.

Medicine in Medieval and Renaissance Europe

At about the same time that Arabian medicine flourished, the first medical school in Europe was established at Salerno, in southern Italy. Although the school produced no brilliant genius and no startling discovery, it was the outstanding medical institution of its time. In about 1200 Salerno yielded its place as the premier medical school of Europe to Montpellier, in France. Other great medieval medical schools were founded at Paris, France, and at Bologna and Padua, in Italy.

Midwives assist in a birth while astrologers in the background consult sky charts, in a woodcut …

National Library of Medicine, Bethesda, Maryland

Even with the presence of these institutions, medicine progressed very slowly in Europe during the Middle Ages. Medieval physicians continued to rely upon ancient medical theories, including that of the humors. They analyzed symptoms, examined waste matter, and made their diagnoses. Then they might prescribe diet, rest, sleep, exercise, or baths, or they could administer emetics (something to cause vomiting) and laxatives or bleed the patient. Surgeons could treat fractures and dislocations, repair hernias, and perform amputations and a few other operations. Some of them prescribed opium or alcohol to deaden pain. Childbirth was left to midwives, who relied on folklore and tradition.

The Christian church also influenced European medicine during the Middle Ages. It is sometimes said that the early church had a negative effect on medical progress. Disease was regarded as a punishment for sin, and healing required only prayer and repentance. A number of saints became associated with miraculous cures of certain diseases, such as St. Vitus for chorea (or St. Vitus’s dance) and St. Anthony for erysipelas (or St. Anthony’s fire). In addition, the human body was held sacred and dissection was forbidden. Nevertheless, the medieval church played an important role in caring for the sick. Great hospitals were established during the Middle Ages by religious foundations, and infirmaries were attached to abbeys, monasteries, priories, and convents. Doctors and nurses in these institutions were members of religious orders and combined spiritual with physical healing.

It was not until the Renaissance that Europeans began to seek a truly scientific basis for medical knowledge instead of relying on the teachings of Galen and other ancient physicians. The Flemish physician Andreas Vesalius discovered many new principles of anatomy through dissections, which he compiled in his profusely illustrated Seven Books on the Structure of the Human Body, published in 1543.

Ambroise Paré (1510–90), a Frenchman, practiced as an army surgeon and became an expert at treating battlefield wounds. He proved that tying blood vessels was a better method of stopping profuse bleeding than cauterizing them with hot oil or a hot iron—a discovery that spared countless soldiers terrible pain and suffering.

17th- and 18th-Century Medicine

Based on painstaking observations of his own veins and study of the blood vessels of sheep, English physician William Harvey determined that blood was pumped away from the heart via the arteries and was returned to it by way of the veins. Groundbreaking as this discovery was, Harvey could not explain how blood passed from the arteries to the veins. Four years after Harvey’s death in 1657, the Italian researcher Marcello Malpighi, with the aid of a microscope, identified and described the pulmonary and capillary network that connected small arteries with small veins. (See also circulatory system.)

The art of surgery developed in 17th-century England at a time when elsewhere in Europe operations were being performed mainly by barbers. William Cheselden, a surgeon and anatomist, was known for his swift and skillful operations; it was reported that he could perform a lithotomy (removal of a stone from the urinary bladder) in 54 seconds.

Edward Jenner, an English country physician, noticed that women who milked cows often caught cowpox (a relatively mild illness) but never got the much more virulent human disease smallpox. Based on that observation, he began developing the world’s first vaccine against a virulent infectious disease. In 1796 Jenner inoculated an eight-year-old boy who had never had smallpox with material taken from cowpox lesions on the hands of a dairymaid. Several weeks later, he exposed the boy to smallpox, and the boy remained healthy.

19th-Century Medicine

Before the mid-1800s surgery had to be performed without anesthesia. Patients may have been given a blow on the head, a dose of opium, or a swig of whiskey or rum—at best, minimally effective means of reducing pain. The best surgeons, therefore, were those who completed their work in the least amount of time. Early in the century British and American scientists began experimenting with two pain-numbing substances, the gas nitrous oxide and the liquid solvent ether. In 1846, before a large group of doctors at Massachusetts General Hospital in Boston, William Morton (who did not have a medical degree but had apprenticed with a dentist) demonstrated ether anesthesia in a patient undergoing surgery to remove a tumor from his neck. The resoundingly successful operation was painless for the patient. Word of this achievement spread quickly, and soon dentists and surgeons on both sides of the Atlantic were using anesthesia. In 1847 chloroform was introduced and became the anesthetic of choice.

Certainly one of the most important advances of the 19th century was the development and acceptance of the “germ theory of disease.” In the 1840s Ignaz Semmelweis, a young physician working in a hospital in Vienna, recognized that doctors who performed autopsies and then delivered babies were responsible for spreading puerperal (childbed) fever, an often deadly infection of the reproductive organs. After Semmelweis ordered doctors to wash their hands with a chlorinated lime solution before entering the maternity ward, deaths from puerperal fever plummeted.

French chemist and microbiologist Louis Pasteur first learned about germs by studying the fermentation of beer, wine, and milk. He went on to explore infectious diseases in farm animals and develop vaccines against anthrax in sheep, erysipelas in swine, and chicken cholera in poultry. Finally Pasteur turned his attention to rabies in humans. His crowning achievement was the development of a vaccine against the always-fatal viral infection caused by bites of rabid animals. In 1885 Pasteur was urged by doctors to give his experimental vaccine, which had only been tested in dogs, to a young boy who had been bitten more than a dozen times by a rabid dog. He administered a series of 13 daily injections of increasingly virulent material obtained from the spinal cords of rabid rabbits. The child endured the prolonged and painful treatment and made a full recovery.

German physician Robert Koch discovered that dormant anthrax spores could remain in the blood of sheep for years and, under the right conditions, develop into the infectious organisms that caused deadly anthrax outbreaks. In 1876, when he presented his findings on the anthrax disease cycle to doctors in Breslau, Germany, an eminent pathologist commented: “I regard it as the greatest discovery ever made with bacteria and I believe that this is not the last time that this young Robert Koch will surprise and shame us by the brilliance of his investigations.” He was right. Koch went on to discover the bacteria responsible for tuberculosis (TB; 1882) and human cholera (1883) and to do groundbreaking research on leprosy, plague, and malaria.

The French microbiologist Charles Laveran discovered the disease-causing protozoan Plasmodium, the mosquito-borne parasite responsible for malaria. At the turn of the century, the American army doctor Walter Reed headed a team of physicians who proved that yellow fever was also transmitted by mosquitoes.

The first serious studies of mental disease were conducted during the 19th century. Jean Charcot used hypnosis as a tool to search the troubled minds of mental patients. His student Sigmund Freud developed the psychoanalytic technique for treating mental illness.

20th- and 21st-Century Medicine

In 1900 the average life expectancy of persons born in the United States was 47 years; by the end of the century it was 77 years. The U.S. Centers for Disease Control and Prevention (CDC) attributed 25 of those 30 additional years of life that Americans had gained to 10 momentous 20th-century public health achievements: control of infectious diseases, immunizations, the decline in deaths from heart disease and stroke, safer and healthier foods, healthier mothers and babies, increased safety of motor vehicles, safer workplaces, family planning, fluoridation of drinking water, and the recognition of tobacco use as a health hazard.

Paul Ehrlich’s discovery in 1910 of Salvarsan, the first drug effective against syphilis, inaugurated the era of modern drug therapy. The sulfa drugs, which provided strong protection against streptococci and other bacteria, were introduced in the 1930s. In 1938 Ernst Chain and Howard Florey succeeded in isolating and purifying penicillin from the Penicillium mold that Alexander Fleming had discovered 10 years earlier; their product, the broad-spectrum antibiotic, is still widely used today. In 1948 Selman Waksman discovered streptomycin, a powerful antibiotic that led to the control of TB.

In the early 1920s researchers Frederick Banting and Charles Best isolated the hormone insulin, which they used to save the lives of young people with diabetes mellitus. At the time, diabetes mainly affected children and adolescents; because their bodies did not produce insulin, which the body needs to convert food into energy, they died. Shortly after this triumph, the pharmaceutical manufacturer Eli Lilly and Company began large-scale production of cow and pig insulin, which helped turn diabetes (the type now known as Type 1) from a fatal into a manageable disease that allowed young people to live into adulthood. By the end of the century, Type 2 diabetes (in which the body is unable to properly utilize the insulin it produces) had become a public health threat of epidemic proportions; 3.8 million people worldwide died from its complications each year.

The eradication of smallpox, one of the most deadly and debilitating scourges the world had ever known, represents one of the greatest accomplishments in modern medicine, science, and public health. Thanks to widespread vaccination, smallpox was eliminated from Europe, North America, Australia, and New Zealand by 1950 and from most of South and Central America by 1959. In 1967 the World Health Organization (WHO) launched a campaign to eradicate the disease that still infected up to 15 million people annually in 31 countries. Ten years later, the last case of smallpox was diagnosed in a young man in Somalia, and in 1980 WHO declared smallpox eradicated from the planet. Because humans were the only natural reservoir of the smallpox virus, once it was gone, people no longer needed to be vaccinated against it. The only remaining specimens of the virus were retained in high-security laboratories in the United States and Russia.

A global polio eradication initiative was begun in 1988, at which time about 350,000 children in 125 countries on five continents were crippled each year by the highly contagious viral disease that attacks the nervous system. By 1999 the number of cases had been reduced by 99 percent, and by the end of 2006, only four countries— India, Nigeria, Pakistan, and Afghanistan—still had endemic polio (uninterrupted transmission of the wild polio virus). Continuing efforts reduced the number of cases to just 222 in 2012, and the campaign’s sponsors (WHO, the United Nations Children’s Fund, CDC, and Rotary International) expressed confidence that a polio-free world could be achieved by 2018.

The early 1980s saw the emergence of the deadly new disease acquired immunodeficiency syndrome (AIDS), caused by the human immunodeficiency virus (HIV), which rapidly grew into a global pandemic. Thanks to the development of life-prolonging drugs, by the mid-1990s HIV/AIDS was no longer a death sentence in wealthy countries. In poor countries, however, the pandemic continued to wreak havoc. In 2001 more than 28 million people in sub-Saharan Africa were living with HIV/AIDS, but fewer than 40,000 had access to drug treatment. At the same time, much of Africa and many developing countries were profoundly affected by malaria and TB. The three pandemic diseases killed more than six million people every year.

In 2002 the Global Fund to Fight AIDS, Tuberculosis and Malaria was created to dramatically increase resources to combat the trio of devastating diseases. By mid-2008 the fund had distributed $6.2 billion to 136 countries. At least 1.75 million people were receiving drug treatment for HIV/AIDS, 3.9 million were receiving TB treatment, and 59 million insecticide-treated mosquito nets had been distributed to families in malaria-ridden countries. The program estimated it had saved more than 1.5 million lives.

Late in the 20th century, antimicrobial drugs employed to treat common infections were becoming increasingly ineffective, allowing the return of previously conquered diseases and the emergence of virulent new infections. By 2007 about five percent of the nine million new cases of TB in the world each year were resistant to at least two of the four standard drugs; treatment with other drugs was 200 times more expensive and had lower cure rates. In 2007 the CDC reported that the bacterium methicillin-resistant Staphylococcus aureus (MRSA) was responsible for more than 90,000 serious infections and 19,000 deaths in the United States annually. For years MRSA had been a problem in health care institutions such as hospitals, nursing homes, and dialysis centers, where it had grown increasingly resistant to commonly used antibiotics. Formerly it primarily infected people with weakened immune systems; by 2007, however, 13 percent of cases were occurring in healthy people living in the community.

In 1953 British graduate student Francis Crick and American research fellow James Watson identified the double-helix structure of DNA, a discovery that helped explain how genetic information is passed along. Exactly 50 years later, the Human Genome Project was completed. The 13-year international collaboration of more than 2,800 researchers, one of the boldest scientific undertakings in history, identified all human genes (about 22,000) and determined the sequences of the 3 billion chemical base pairs that make up human DNA. The genetic information provided by the project has enabled researchers to pinpoint errors in genes that cause or contribute to disease. In the future, having the tools to know the precise genetic make-up of individuals will enable clinicians to deliver truly personalized medicine.

As an increasing number of genetic tests have become commercially available—some of which can be lifesaving—new ethical questions have been raised about the best ways to deliver them and how the genetic information they provide should be used by insurers, employers, courts, schools, adoption agencies, and the military. In response to those concerns, in 2008 the U.S. Congress passed and President George W. Bush signed into law the Genetic Information Nondiscrimination Act, which prohibits insurance companies and employers from discriminating on the basis of information derived from genetic tests.

Ellen Bernstein, Ed.

Additional Reading

ADLER, ROBERT. Medical Firsts (John Wiley & Sons, 2004).
GROOPMAN, JEROME. How Doctors Think (Houghton Mifflin, 2007).
HARDING, ANNE S. Milestones in Health and Medicine (Oryx Press, 2000).
JAUHAR, SANDEEP. Intern: A Doctor’s Initiation (Farrar, Straus and Giroux, 2008).
NULAND, SHERWIN B. The Uncertain Art: Thoughts on a Life in Medicine (Random House, 2008).
PORTER, ROY, ED. Cambridge Illustrated History of Medicine (Cambridge University Press, 1996).
STRAUS, EUGENE W., AND STRAUS, ALEX. Medical Marvels: The 100 Greatest Advances in Medicine (Prometheus Books, 2006).

See also bibliographies for biochemistry; bioengineering; bioethics; brain and spinal cord; human disease; health; psychology.

Citation (MLA style):

"Medicine." Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 28 Feb. 2020. packs-preview.eb. com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

microbiology

Britannica Note:

Content on this topic is available in the section "History."

Scientific exploration to understand the nature of the tiniest living organisms constitutes the field of microbiology. Such organisms are known as microbes, and the scientists who study them are called microbiologists.

Over the years, microbiology has extended to include more than microbes alone. For instance, the field of immunology, which studies the body’s reaction to microbes, is closely aligned with microbiology (see immune system). In addition, a whole new field of molecular biology has emerged. Today molecular biologists study the properties of cellular structures such as proteins and nucleic acids.

Microbes are widely spread over the surface of Earth and play a crucial role in ecosystems. Soil and water contain high concentrations of bacteria and molds (two types of microbes), and the surface of every human body is covered with a unique microbial flora. Certain bacteria draw nitrogen from the air and pass it on to plants in the soil. Others help break down and recycle organic materials and waste products.

The action of microbes has also been harnessed for industrial uses. Yeast is used in the production of bread and alcohol. Other microscopic organisms are used for the production of many foodstuffs and for the degradation of industrial by-products. The research and development of microbes for such practical uses is the subject of applied microbiology.

Areas of Study

Microbiologists classify microorganisms into bacteria, archaea, protozoans, algae, fungi, and viruses, and the study of each constitutes a separate specialty within microbiology. Individual fields may overlap, and the discipline of microbiology may overlap with other disciplines, as it does with immunology. In any case, most areas of scientific inquiry can be subdivided in a variety of ways, depending upon the questions being asked. Microbiology may be subdivided as follows:

Bacteriology

The study of bacteria is called bacteriology. Bacteria are single-celled microbes that grow in nearly every environment on Earth. They are used to study disease and produce antibiotics, to ferment foods, to make chemical solvents, and in many other applications.

Protozoology

Protozoology is the study of protozoans, small single-cell microbes. They are frequently observed as actively moving organisms when impure water is viewed under a microscope. Protozoans cause a number of widespread human illnesses, such as malaria, and thus can present a threat to public health.

Phycology

Phycologists study algae. These are organisms that carry out photosynthesis in order to produce the energy they need to grow.

Mycology

Mycology is the study of fungi—well-known organisms that lack chlorophyll. Fungi usually derive food and energy from parasitic growth on dead organisms.

Virology

Viruses are a very different kind of biological entity. They are the smallest form of replicating microbe. Viruses are never free-living; they must enter living cells in order to grow. Thus they are considered by most microbiologists to be nonliving. There is an infectious virus for almost every known kind of cell. Viruses are visible only with the most powerful microscopes, namely electron microscopes.

Exobiology

Exobiology is the study of life outside Earth, including that on other planets. Space probes have been sent to Mars, and samples of rocks have been brought back from the Moon as part of experiments to search for traces of microbial life in extraterrestrial environments. Most exobiologists examine such samples for the basic building blocks of life known to have evolved on Earth. Since microbes were probably the earliest organisms on Earth, and since they have continued to thrive, exobiologists consider that microbes are the most likely form of life to exist beyond Earth.

Biological Warfare

Another area of study in microbiology involves the development, deployment, and defense against agents of biological warfare. Both chemical and biological agents have been used in past wars because they are often more insidious and less easily detected than conventional weapons. This application of microbiology has been made still more ominous by the ability to alter microbes using genetic engineering.

Methods of Microbiology

The chief tool of the microbiologist has always been the microscope. Since its invention there have been great refinements in the optical microscope’s power and precision. In addition, the electron microscope and other high-energy devices allow viewing of the very smallest structures of life, including the DNA molecule. However, because the powerful forms of energy that are necessary for electron microscopes, such as X-rays and particle beams, will destroy biological specimens, scientists have developed new technologies for viewing cells. The transmission electron microscope, for example, produces a shadow of the specimen by evaporating platinum metal over the viewing platform at a sharp angle. When electrons are passed through the platform at high speed, they can distinguish the shadows on the metal as a representation of the original specimen. Scanning electron microscopes, on the other hand, view the surface of a specimen by reflected radiation. In this case, the sample is also thinly coated with a heavy metal such as gold, and the biological material is observed as a cast of the more stable material.

Advances in microscopes and microscopic techniques continue to be introduced to study cells, molecules, and even atoms. Among these are confocal microscopy, the atomic force microscope, the scanning tunneling microscope, and immunoelectron microscopy. These are particularly significant for studies of microorganisms at the molecular level.

Advances have also occurred in the use of pure cultures, and improvements in methods of growing and identifying microbes have found wide application in all areas of microbiology. In the laboratory, scientists must have pure cultures of microbes for their studies. If contaminating organisms are present, the results of their experiments may be useless or misleading. For these reasons, microbiologists maintain pure cultures of the known microbes and provide them to associates for experimental use.

Careers in Microbiology

Professional microbiologists are employed in a wide variety of positions. The majority work in universities, government agencies, or industry. In colleges and universities, microbiologists teach and conduct research. State and federal governments employ microbiologists to conduct research and to help regulate private-sector activities. For example, in the United States some microbiologists are employed by the federal government to inspect sewage-treatment facilities, hospitals, and food-production plants in order to protect the public health. Large federal agencies, such as the National Institutes of Health, the Department of Agriculture, and the Centers for Disease Control and Prevention also employ microbiologists to investigate and monitor the action of microbes in the environment. There are also many commercial positions available to microbiologists. For example, wineries and breweries employ microbiologists to help standardize and maintain yeast cultures. In many other areas of industrial food production, the expertise of microbiologists is employed in quality control to guard against spoilage and contamination. Microbiologists are also employed by pharmaceutical companies to help produce vaccines and other drugs.

History

Microbiology began with the development of the microscope in the 17th and 18th centuries. By 1680 the Dutch scientist Anthony van Leeuwenhoek had produced a simple handheld device that allowed scientists to view a variety of microbes—which Leeuwenhoek called “animalcules”—in stagnant water and in scrapings from teeth.

In the late 1700s Edward Jenner conducted the first vaccinations, using cowpox virus to protect people against smallpox. Later an altered form of the rabies virus was used to protect against the dreaded disease rabies. Vaccines remain the major means of protection against most viral infections.

Modern microbiology had its origins in the work of the French scientist Louis Pasteur—considered the father of microbiology—who developed methods of culturing and identifying microbes. During the second half of the 19th century, he and his contemporary Robert Koch provided final proof of the germ theory of disease. They also demonstrated that microbes must be introduced or seeded into a sterile environment and could not arise spontaneously, as had been previously believed. Pasteur was the first to propose that microbes cause chemical changes as they grow. Koch derived a central principle of modern microbiology, known as Koch’s Postulate, that determines whether a particular germ causes a given disease.

Pasteur and his contemporaries developed pure culture methods for the growth of microbes. By diluting mixtures of microbes in sterile solutions, they were able to obtain droplets that contained a single microbe, which could then be grown on fresh, sterile media. In a separate procedure, they used rapid, sequential passage of cultures so that certain specimens were able to outgrow others. Thus, for the first time, stable cultures containing a single kind of microbe could be used to identify and study specific disease-causing organisms.
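
To make the dilution idea concrete, here is a minimal sketch in Python, added for illustration only; the function name cells_per_droplet and all of the numbers are invented examples rather than values from the article. It shows how repeated 1:10 dilutions bring a dense microbial suspension down to roughly one microbe per droplet.

# Illustrative sketch only: serial-dilution arithmetic, not a laboratory protocol.
def cells_per_droplet(cells_per_ml, dilution_factor, steps, droplet_volume_ml=0.05):
    """Expected number of microbes in one droplet after `steps` dilutions."""
    diluted_concentration = cells_per_ml / (dilution_factor ** steps)
    return diluted_concentration * droplet_volume_ml

# Example: a broth with 10 million cells per milliliter, diluted 1:10 six times,
# then sampled as a single drop of about 0.05 ml:
print(cells_per_droplet(10_000_000, 10, 6))  # prints 0.5, i.e. about one microbe in every other drop

At that point, growing individual droplets on fresh, sterile media gives a good chance that any colony that appears came from a single microbe.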

Another great advance in pure culture methods came in the late 19th century, when microbiologists discovered that each kind of microbe preferred a certain medium for optimal growth. Over the past century, microbiologists have made great progress in the preparation of selective media for the purification and identification of most species of microbes.

In 1929 Alexander Fleming observed that molds can produce a substance that prevents the growth of bacteria. His discovery, an antibiotic called penicillin, was later isolated and produced commercially to protect people against the harmful effects of certain microorganisms. Today several kinds of penicillin are synthesized from various species of the mold Penicillium and used for different therapeutic purposes. Many other antibiotics have been identified as well, and they remain the major line of defense against infectious bacterial diseases.

In the 1940s microbiology expanded into the fields of molecular biology and genetics. Viruses were found to be simple microbes that could be studied quantitatively, and they themselves were used to study the nature of deoxyribonucleic acid, or DNA. Microbiologists began to work inside cells to study the molecular events governing the growth and development of organisms.

In the early 1970s genetic researchers discovered recombinant DNA. Scientists found that DNA could be removed from living cells and spliced together in any combination. They were able to alter the genetic code dictating the entire structure and function of cells, tissues, and organs (see genetic engineering).

Additional Reading

ALEXANDER, LORI. All in a Drop: How Antony van Leeuwenhoek Discovered an Invisible World (Houghton Mifflin Harcourt, 2019).
HIRSCH, REBECCA E. The Human Microbiome: The Germs that Keep You Healthy (Twenty-First Century, 2017).
SAGAN, DORION, AND MARGULIS, LYNN. Garden of Microbial Delights: A Practical Guide to the Subvisible World (Kendall Hunt, 1993).

Citation (MLA style):

"Microbiology." Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 17 Mar. 2020. packs-preview.eb. com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

© 2020 Encyclopædia Britannica, Inc. 41 of 47 Britannica LaunchPacks | Louis Pasteur

Louis Pasteur

French chemist and microbiologist Louis Pasteur experimenting on a chloroformed rabbit, coloured wood engraving, 1885.

National Library of Medicine, Bethesda, Maryland

Citation (MLA style):

Louis Pasteur. Image. Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

© 2020 Encyclopædia Britannica, Inc. 42 of 47 Britannica LaunchPacks | Louis Pasteur

Louis Pasteur and fermentation

French chemist and microbiologist Louis Pasteur made many important contributions to science, including the discovery that microorganisms cause fermentation and disease.

World Photos/Alamy

Citation (MLA style):

Louis Pasteur and fermentation. Image. Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

© 2020 Encyclopædia Britannica, Inc. 43 of 47 Britannica LaunchPacks | Louis Pasteur

Louis Pasteur

Louis Pasteur in his laboratory, painting by Albert Edelfelt, 1885.

Photos.com/Thinkstock

Citation (MLA style):

Louis Pasteur. Image. Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

© 2020 Encyclopædia Britannica, Inc. 44 of 47 Britannica LaunchPacks | Louis Pasteur

Louis Pasteur

A tribute from Le Petit Journal, Paris, at the time of Louis Pasteur's death (1895).

© Photos.com/Thinkstock

Citation (MLA style):

Louis Pasteur. Image. Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

© 2020 Encyclopædia Britannica, Inc. 45 of 47 Britannica LaunchPacks | Louis Pasteur

Examine how bacteria can be beneficial ecological agents of decomposition but harmful spoilers of food and agents of disease

Britannica Note:

Louis Pasteur originated pasteurization, a heat-treatment process that destroys harmful bacteria and other microbes in certain foods and beverages.

Video Transcript

Bacteria live in almost every habitat on Earth. They owe part of this success to their tremendous reproductive output. Through the process of binary fission, a bacterium duplicates its genetic material and divides into two new bodies. A single colonizing cell can soon give rise to thousands of descendants. Just how fast do colonies of bacteria grow? Some have doubling times of just 10 minutes, which means that every 10 minutes the number of cells in the culture doubles. Rapid reproduction makes bacteria well suited for their main ecological role, as decomposers, but it also makes them a danger to unprotected food supplies. Preventing food from spoiling means preventing bacteria from growing. The most fail-safe method is to destroy all the bacteria in and on the food through heat pasteurization and canning. Cans prevent new bacteria from reaching the food from the outside, while the high temperatures of pasteurization kill any bacteria already trapped inside. If heat pasteurization is not complete, dormant spores of microbes, such as Clostridium botulinum, can begin to grow. Clostridium botulinum—the agent of botulism—produces a potent toxin that can cause fatal paralysis. The illnesses associated with bacteria do not stop with food spoilage. Disease-causing bacteria, or pathogens, are responsible for a long list of ailments, including typhoid, cholera, and pneumonia. As bacterial cells enter the human body, they are attacked by immune-system cells called phagocytes. When a phagocyte overtakes a bacterium, it engulfs and dissolves the microbe. The phagocytes in our bodies tend to be very busy, because bacteria are persistent and widespread in our world—part of every meal, every breath, and every touch.
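For readers who want to check the doubling arithmetic in the transcript, the brief Python sketch below uses the 10-minute doubling time quoted there; the starting count of a single cell and the chosen time points are assumptions for illustration.

```python
# Repeated doubling with a 10-minute doubling time (starting count is assumed).
doubling_time_min = 10   # doubling time quoted in the transcript
start_cells = 1          # assumed: a single colonizing cell

for minutes in (0, 10, 30, 60, 120):
    cells = start_cells * 2 ** (minutes / doubling_time_min)
    print(f"after {minutes:3d} minutes: about {cells:,.0f} cells")
```

Under these assumptions, a single cell becomes roughly 4,000 cells in two hours, which helps explain how quickly unprotected food can spoil.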

Learn about bacteria as agents of decomposition, food spoilage, and disease (pathogens).

Encyclopædia Britannica, Inc.

© 2020 Encyclopædia Britannica, Inc. 46 of 47 Britannica LaunchPacks | Louis Pasteur

Citation (MLA style):

Examine how bacteria can be beneficial ecological agents of decomposition but harmful spoilers of food and agents of disease. Video. Britannica LaunchPacks: Louis Pasteur, Encyclopædia Britannica, 19 Feb. 2021. packs-preview.eb.com. Accessed 10 Aug. 2021.

While every effort has been made to follow citation style rules, there may be some discrepancies. Please refer to the appropriate style manual or other sources if you have any questions.

© 2020 Encyclopædia Britannica, Inc. 47 of 47