Thermometry, Temperature, and Thermal Equilibrium

Thermometry, Temperature, and Thermal Equilibrium

Questions to keep in mind while reading:

1. How has the meaning of temperature changed over time and when did those changes happen relative to caloric theory and to kinetic gas theory?

2. Which aspects of nature does temperature describe and how has the fundamental nature of this description changed?

3. How did the scientific process work in the progression from early forms of thermometry to modern forms?

The physician Galen (130 to 200 AD) is the first known to have used the notion of degrees of heat and cold, four degrees each way from a neutral point. The neutral point was a mixture of equal quantities of ice and boiling water. Galen seems to have thought these to be the very hottest and coldest of materials.

There was as yet no instrument to measure the degrees. The ancient Greek Philo's work on pneumatics, which was eventually to lead to thermometry, was lost until Arabic manuscripts were translated. The second edition appeared in 1592, the year in which Galileo took up his position in Padua, and Galileo is known to have read it by 1594.

The candidates for the honor of having invented the thermometer are Galileo, Santorio, Drebbel, and Fludd. The Italian partisans of Galileo have done their best to magnify his achievements (much as British partisans would later do for Newton and his laws).

Their thermometers were based on the air thermoscope, which in ancient times had been used solely for entertainment, lifting water in a tube by the application of heat. Santorio was the first to attach a measuring scale to the thermoscope (~1612), making it in effect a thermometer. The scale was divided into eight degrees, each subdivided into minutes.

The Welshman Fludd took a medical degree at Oxford in 1605. He also started from the thermoscope, and his letters show that he knew he was making a thermometer. His scale ran from bottom to top in degrees of cold. The exact dates are unclear. Drebbel, a man from the Netherlands, made astronomical clocks for King James of England (1604). It is clear from his letters that Drebbel understood the principle of the thermoscope, and he used it in his clocks to simulate the tides. But the instrument was not intended to be an air thermometer.

The only thing that is clear is that Santorio was the first to use the air thermometer as a scientific instrument.

A major step forward was the use of thermometers with the thermometric fluid sealed in glass. However, the rather peculiar range of properties of glass delayed the development of fixed temperature points and two-point scales. For a long time thermometers carried many different scales, and reproducibility remained poor because different scale developers used glasses with different properties. By about 1660 the spirit-in-glass thermometer had been developed to satisfactory technical specifications, and the mercury thermometer had been temporarily abandoned. One of the oldest preserved thermometers, in the Museo Copernicano in Rome, shows 18 temperature scales.

The instrument maker Fahrenheit learned his trade in Denmark from the astronomer Romer, the discoverer of the finite speed of light. Romer left a lab book, which his widow gave to the university library in Copenhagen in 1739, some 30 years after Romer's death. In the meantime, his student had continued to add comments to Romer's original observations. It is apparent from the notes that Romer was the first to use the melting point of ice and the boiling point of water as fixed points, and that he divided the space in between into sections of equal volume, a method very similar to what we use today.

This would be of less interest had young Fahrenheit not visited Romer in 1708. Today's temperature scale, with 32 degrees at the melting point of ice and 212 degrees in the steam just above boiling pure water (to minimize fluctuations) at normal atmospheric pressure, and with the intervals determined by the volumetric expansion of liquid mercury (spirits proved to expand non-linearly), closely resembles the last of three temperature scales Fahrenheit developed. In practice, Fahrenheit's second fixed point was not the boiling point of water but human body temperature (96 degrees). In France, Fahrenheit's scale remained unused and even unknown. There, Reaumur showed the way, documenting his work in surprisingly verbose monographs. It took him a long time to track down an instability of the boiling-point temperature, which was, unbeknownst to him, due to the variability of atmospheric pressure. Reaumur deserves credit for recognizing that the volume expansion of mercury was superior to that of alcohol spirits.

Little had been written about the melting point of ice up to this point. The French had an edge because some very deep cellars of the university in Paris had rather stable low temperatures, whereas elsewhere it was difficult to obtain a freezing-point reference when it was not winter.

The Swedish astronomer Anders Celsius preferred melting snow over Reaumur's cellar method for establishing the lower fixed point (the cellar was also not available to everyone working in this field). He discovered that water stays at the same temperature until all the snow in it has melted. He also observed that the boiling temperature depends to some extent on how violently the water boils, and that measuring in the steam just above the water is more reliable. Celsius also began to use 100 as the degree difference between boiling and freezing. Stromer eventually inverted Celsius' scale and put zero at freezing and 100 at boiling.

Modern Developments

The measurement of temperature is always an indirect measurement. In such measurements one has to assume that the phenomenon used for the measurement varies linearly with temperature. Clearly, when doing this for the very first time we apply circular logic. Fixed points are used to check the deviation of the linear scale engraved on the thermometer from the actual progression of the effect in use. Commonly used effects are volume expansion, electric resistivity, magnetic susceptibility, radiant emittance, and thermoelectric effects. All of these come with their own conceptual and model constraints (see lecture discussion).
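
As a rough sketch of how fixed points are used to check an assumed linear scale (all readings below are illustrative placeholders, not data from the handout; the tin point at 231.928 degrees C is a standard modern fixed point):

    # Sketch: two-point linear calibration of a thermometric property X(T),
    # then a deviation check at a third fixed point. All numbers are
    # illustrative placeholders, not data from the handout.

    def linear_scale(x, x_ice, x_steam, t_ice=0.0, t_steam=100.0):
        """Map a measured property x onto a temperature assuming linearity
        between the ice and steam fixed points."""
        return t_ice + (x - x_ice) / (x_steam - x_ice) * (t_steam - t_ice)

    # Hypothetical readings of some property (e.g. a column length in mm)
    x_ice, x_steam = 120.0, 180.0   # readings at 0 C and 100 C
    x_tin = 264.8                   # reading at the tin point (231.928 C)

    t_linear = linear_scale(x_tin, x_ice, x_steam)
    print(f"linear scale says {t_linear:.1f} C at the tin fixed point (231.9 C)")
    print(f"deviation from the fixed point: {t_linear - 231.928:+.1f} C")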

Six important modern thermometers are the constant-volume gas thermometer, the (platinum) electric resistance thermometer, the thermocouple, the helium saturated-vapor-pressure thermometer, paramagnetic-salt magnetometry, and blackbody radiation. Each has its own strengths, range of linearity, conceptual weaknesses, and range of application. The constant-volume gas thermometer is considered the most empirical and is treated as a primary thermometer. Its conceptual weakness is the validity of the ideal gas law for dilute gases. Secondary thermometers, such as thermocouples and electric resistance thermometers, are calibrated against primary thermometers. Primary thermometers are usually unwieldy and impractical. Note that different types of thermometers, when compared, usually agree only at a calibration point! It is good practice to use thermometers with overlapping ranges and to change the type of thermometer when moving into a new temperature range.
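
A minimal sketch of the constant-volume gas thermometer as a primary thermometer, assuming the ideal gas law as stated above (the pressure values are illustrative placeholders):

    # Sketch: ideal-gas temperature from a constant-volume gas thermometer.
    # The working assumption (and conceptual weakness) is the ideal gas law:
    # at constant volume and amount of gas, T is proportional to P.
    # Pressure values below are illustrative placeholders.

    T_TRIPLE = 273.16      # K, triple point of water (defined fixed point)

    def gas_thermometer_temperature(p, p_triple):
        """Temperature in kelvin assuming PV = nRT at constant V and n."""
        return T_TRIPLE * p / p_triple

    p_triple = 50.00       # kPa, bulb pressure at the water triple point
    p_sample = 68.33       # kPa, bulb pressure in contact with the sample

    print(f"T = {gas_thermometer_temperature(p_sample, p_triple):.2f} K")
    # In practice one repeats this with ever more dilute gas and extrapolates
    # to zero gas pressure to remove non-ideal-gas corrections.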

Therefore, the use of reliable fixed points remains crucial. Again we meet a certain degree of circular logic: how did one know in the first place that fixed points are indeed fixed, i.e. that they always occur at the same temperature under comparable conditions? Common fixed points include melting points, vaporization points, superconducting transitions, etc. Thus, one always strives for strong theoretical support that predicts the temperature dependences of the measuring effects and the values of the fixed points. This is never straightforward because most of these predictions depend on sample purity, pressure, magnetic field, etc.

It is worthwhile to note that thermometers do not measure the temperature of the object they are in contact with; they measure their own temperature. Thus, one has to fall back on the validity of the zeroth law of thermodynamics in order to have a basis for relying on thermometer readings. One also has to wait long enough for thermal equilibrium to be established between the thermometer and the object whose temperature is in question. In practice, one often reaches a near-steady state instead of true thermal equilibrium.

Temperature itself is a somewhat elusive and slippery concept without an easily comprehensible meaning. At first it was defined on the basis of the ideal gas law and the zeroth law (see lecture). Later the dependence of entropy on internal energy was used, and temperature was set equal to the inverse of the slope of that curve. Today, we base our definition of temperature on our knowledge of statistical distributions and take the peak of the Maxwell-Boltzmann speed distribution as our indicator for temperature. Consequently, temperature is undefined when a system is not in thermal equilibrium!
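
The two later definitions mentioned above can be written compactly; the sketch below evaluates the Maxwell-Boltzmann peak relation for nitrogen at room temperature as an illustration (the choice of gas and temperature is mine, not the handout's):

    # Sketch of the statistical definitions mentioned above, in formula form:
    #   1/T = dS/dU               (slope of entropy vs. internal energy)
    #   v_peak = sqrt(2 k T / m)  (peak of the Maxwell-Boltzmann speed distribution)
    # Numbers below are for nitrogen near room temperature, as an illustration.
    import math

    k_B = 1.380649e-23                 # J/K, Boltzmann constant
    m_N2 = 28.0e-3 / 6.02214076e23     # kg, mass of one N2 molecule

    def peak_speed(T):
        """Most probable speed of the Maxwell-Boltzmann distribution."""
        return math.sqrt(2 * k_B * T / m_N2)

    def temperature_from_peak(v_peak):
        """Invert the relation: read off T from the measured peak speed."""
        return m_N2 * v_peak**2 / (2 * k_B)

    print(f"v_peak at 300 K: {peak_speed(300):.0f} m/s")       # ~422 m/s
    print(f"T for a 422 m/s peak: {temperature_from_peak(422):.0f} K")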

Origins: Thermodynamic science in the making – The difficult birth of the law of energy conservation

Questions to keep in mind while reading:

4. Which of the earlier concepts have survived into the modern theory?

5. Which aspects of nature did the caloric theory explain successfully and how was it eventually falsified?

6. How did the scientific process work in the progression from early forms to the caloric theory, and on to the kinetic gas theory?

The history of the Kinetic Theory of Gases begins in the 17th century when Torricelli, Pascal, and Boyle established the physical nature of the air. They persuaded other scientists that the earth is surrounded by a sea of gas that exerts a pressure and explained many phenomena previously accounted for by ‘nature abhors a vacuum’.

It was well known in the time of Galileo that water will not rise more than 34 ft in a suction pump. His student Torricelli devised an experiment to illustrate the effect in the laboratory. Since mercury is about 14 times as dense as water, one would expect it to be lifted only about one fourteenth as high, and this is what is observed. Torricelli took a yard-long glass tube partially filled with mercury, closed one end with a finger, and inverted the tube; when the finger was removed, the mercury fell to about 30 inches. Between the top of the mercury column and the end of the tube was an empty space, which became known as the Torricellian vacuum. According to Torricelli, it is just the mechanical pressure of the air that holds the mercury up in the tube. Blaise Pascal then pointed out that, by analogy, the pressure of the air on a mountain top should be less than at sea level. An experiment was devised which confirmed this prediction. Robert Hooke devised a pneumatic engine (air pump) for Robert Boyle, who devised experiments based on the laws of impact (collisions), which underpinned the kinetic theory of billiard balls (atoms represented by elastic spheres). Boyle is credited with the discovery that the pressure exerted by a gas is inversely proportional to the volume in which the gas is confined. His main achievement was to introduce a new variable, pressure. Boyle also proposed a theoretical explanation of the elasticity of air (upon compression): the atoms are said to behave like springs, which resist compression. To a modern reader this seems hardly satisfactory. Boyle also performed the crucial experiment which, some 200 years later, was responsible for overthrowing his own theory in favor of the kinetic gas theory: a pendulum in an evacuated chamber shows hardly any difference in its period, or in the time needed to come to rest, compared with one swinging in air.
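
A quick check of the density argument (a sketch using the modern value of roughly 13.6 for mercury's density relative to water):

    # Rough check of Torricelli's argument: the same atmospheric pressure
    # supports a column whose height scales inversely with fluid density.
    water_column_ft = 34.0      # maximum height water rises in a pump
    density_ratio = 13.6        # mercury is ~13.6 times as dense as water

    mercury_column_in = water_column_ft / density_ratio * 12.0
    print(f"expected mercury column: {mercury_column_in:.0f} inches")  # ~30 inches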

In 1859, Maxwell deduced from the kinetic theory that the viscosity of a gas should be independent of its density, a property difficult to explain on the basis of Boyle's theory. In Boyle's own day, his work was criticized by Thomas Hobbes, who postulated a subtle matter filling all space (the ether), a view that hampered the development of kinetic theory right up to the 20th century.

In the 18th century, Daniel Bernoulli formulated an early version of a kinetic theory as a billiard-ball model, much like the ones in introductory texts today, involving the application of the principle of energy conservation, then often known as the conservation of vis viva, the living force. Bernoulli's kinetic theory, which is in accord with modern views, was a century ahead of its time, when heat was still envisaged as a substance, not as atomic motion. Bernoulli's assumption that heat was nothing but atomic motion was unacceptable, especially in the study of radiant heat. Bernoulli's model also neglected the drag of the ether and the interaction between atoms. The best that could be said about this theory was that it explained some properties of gases which were already well understood anyway.

It was not possible for the kinetic theory to be fully accepted until the doctrine that heat is a substance, or ‘caloric’, was overthrown. From the standpoint of the early 19th century scientists, there were many valid reasons to retain the caloric theory and to reject the mechanical theory (what we now call thermodynamics).

Caloric was a fluid composed of particles that repel each other but are attracted to the particles of ordinary matter. Caloric is able to diffuse into and penetrate the interstices of all matter, and each particle of matter is surrounded by an atmosphere of caloric whose density increases with temperature. These atmospheres cause two particles to repel each other at small distances (within the reach of the atmospheres). At larger distances gravitational attraction dominates, and there is thus an intermediate point of equilibrium. As the temperature rises, more caloric is added to the substance, the equilibrium point shifts outward, and the average distance between particles becomes greater, causing a (thermal) expansion of the body. If the body is instead compressed, caloric is squeezed out and appears as heat. In this context, Black postulated the doctrines of latent and specific heats: he showed that various substances absorb different amounts of caloric (heat) when their temperature is raised by the same amount, and that they require large amounts of heat during melting or vaporization. Thus, a clear distinction was made between caloric (heat) and temperature.

Opinions differed on the question: does caloric have weight? The American Benjamin Thompson, later created Count Rumford, performed a series of experiments from which he concluded that the weight of caloric would be undetectable. This became one of his arguments against the caloric theory. Rumford also pointed out that an indefinite amount of heat can be produced from matter by doing mechanical work on it (hw 1), which is consistent with an energetic nature of heat but not with heat as a subtle particle fluid (caloric), because the supply of caloric particles would have to be exhausted at some point.

The opponents of this view stated that heat could be transferred across a vacuum, as radiant heat, and this could only be achieved by particles (caloric). At this time, the history of this science adopted some national flavor: a French school stayed with the strict caloric view, the British school accepted the main idea of caloric but added Black’s discoveries, and an emerging German school would eventually diverge completely from the caloric view.

To illustrate how the caloric theory explained the properties of gases, we summarize the derivation of the gas laws by Pierre Simon, Marquis de Laplace (1825). Laplace is also known for his mathematical development of Newton’s theory of planetary motion and for his work on the theory of probability. The kinetic theory of gases and statistical mechanics owe much to Laplace’s development of probability theory.

Denoting by c the amount of caloric in a molecule, Laplace assumed that two similar molecules at a distance r repel each other with a force H c² φ(r), where H is a constant and φ(r) expresses the rapid decrease of the repelling force with distance. The total pressure due to such forces becomes P = K ρ² c², where ρ is the density and K results from summing the forces exerted on a given molecule by all the others. It appears that the pressure is proportional to the square of the density, but observation shows that it scales linearly with ρ. Laplace argued that the amount of caloric c must depend on density because each molecule is continually sending out and receiving rays of caloric. But the radiation of heat (caloric) by a molecule is regarded as being caused by the action of the caloric of the surrounding molecules on its own caloric; thus the amount of radiation is proportional to the density and caloric of the surrounding gas, ρc, and to the caloric of the molecule considered, c. This quantity, proportional to ρc², will have to equal the extinction at that temperature, since the system as a whole is supposed to be in thermal equilibrium; hence ρc² depends only on temperature. Combining these thoughts (P = K ρ² c² = K ρ · ρc²), we find that P ~ ρ at constant temperature (Boyle's law), and the ideal gas law follows immediately; if the temperature changes, the density (at constant pressure) changes in the same ratio regardless of the nature of the gas (Gay-Lussac's law).

The outstanding accomplishment of Laplace's caloric theory is its explanation of adiabatic compression and of the velocity of sound: Laplace was able to show that P V^γ stays constant in adiabatic compressions or expansions, where γ is the ratio of the specific heat at constant pressure to that at constant volume. By assuming that the propagation of sound involves adiabatic rather than isothermal compressions and rarefactions, Laplace derived a correction factor (√γ) for the classical speed of sound that held up to experiment. On the other hand, his theory predicts that the specific heat should increase with decreasing pressure or density, which is not true; but experimental techniques did not allow testing this at the time.

In 1842 J. R. Mayer deduced that the heat produced by mechanical work is directly related to the amount of work performed; i.e. he treated heat like other energy terms. James Joule determined the relation between joules of energy and calories of heat via friction in water, in mercury, and in cast iron, to within 0.33% of the value accepted today. From that time on, the concept of caloric was mainly of historic value. The mechanical concept of heat is, of course, equivalent to heat being motion of particles (atoms).
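
As a numerical illustration of Laplace's √γ correction, a small sketch using modern values for dry air (the numbers are mine, not the handout's):

    # Newton's isothermal speed of sound vs. Laplace's adiabatic correction,
    # evaluated for dry air near room temperature (illustrative values).
    import math

    R = 8.314        # J/(mol K), gas constant
    M = 0.02897      # kg/mol, molar mass of dry air
    gamma = 1.40     # ratio of specific heats c_P/c_V for air
    T = 293.0        # K

    v_newton = math.sqrt(R * T / M)           # isothermal (Newton's) value
    v_laplace = math.sqrt(gamma) * v_newton   # adiabatic (Laplace's) value

    print(f"Newton:  {v_newton:.0f} m/s")     # ~290 m/s, too low
    print(f"Laplace: {v_laplace:.0f} m/s")    # ~343 m/s, matches experiment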

Joule's experiment used a double-mass Atwood machine to turn a spindle. The bottom of the spindle led into a water bath, which was heated by paddles that turned with the spindle. A ratchet was used to reset the masses without stirring the water. The amount of work performed is the loss of gravitational potential energy of the two falling masses. For the exact determination of joules in terms of calories, Joule needed to apply corrections for the cooling of the calorimeter (water) by convection and radiation. With 20 repeated falls through 160.5 cm, a temperature change of 0.3129 degrees was measured in some 26 kg of water, using a thermometer with an accuracy of 1/360 of a degree centigrade. He found 4.19 joules per calorie.
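
A small sketch of the bookkeeping in this measurement, using only the figures quoted above together with the modern specific heat of water:

    # Heat deposited in Joule's calorimeter, from the figures quoted above,
    # and the mechanical work that has to account for it at 4.19 J/cal.
    water_mass_g = 26_000    # ~26 kg of water
    delta_T = 0.3129         # measured temperature rise, degrees centigrade
    c_water = 1.0            # cal per gram per degree (modern value)

    heat_cal = water_mass_g * c_water * delta_T
    work_J = heat_cal * 4.19     # Joule's own mechanical equivalent

    print(f"heat absorbed: {heat_cal:.0f} cal")    # ~8,100 cal
    print(f"work required: {work_J/1000:.1f} kJ")  # ~34 kJ from the falling masses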

Connection between Thermodynamics, Kinetic Theory, and Statistical Mechanics

Questions to keep in mind while reading:

7. …?

For a long time, the existence of temperature was not seen as having any effect on fields like Mechanics or Electromagnetism. Today, we see Thermodynamics as the field of Classical Physics that affects all other classical fields and puts constraints on the results obtained there.

In the 1600s people began to wonder how to quantify temperature, and they started to develop measurement instruments. There were many obstacles affecting the reproducibility of results, obstacles that would only be understood later, sometimes much later.

The first comprehensive attempt to find a theoretical description became the caloric theory. It treated heat as a substance. Much of the data attainable at the time agreed with the predictions of the theory.

A competing theory was the atomic theory, which took heat to be the energy of motion of invisibly small particles: atoms. A kinetic theory was devised by Bernoulli, but it lacked the mathematical methods to be successful. Maxwell picked the idea up again around the 1850s and showed that a distribution of speeds of the atoms in a gas can be derived. Boltzmann, in the 1870s, took this further: he developed the Kinetic Theory of Gases and proved that Maxwell's distribution was the only possible one. In the meantime, a theory of Thermodynamics took shape in piecemeal fashion; the work of Clausius, Joule, and others secured its eventual success. Thermodynamics is a macroscopic theory, and for it, it does not matter whether matter consists of atoms or not. That made it more acceptable than Kinetic Theory, which must insist on the reality of unobservable particles.

Thermodynamics experienced a breakthrough when Mayer showed that the quantity known as energy in Mechanics and E&M (back then called vis viva, 'living force', for kinetic energy) is conserved in all processes, if one postulates that heat is a form of energy as well. Joule furthered its acceptance by measuring the mechanical equivalent of heat with an ingenious apparatus.

Thermodynamics rests on four empirical laws, known as the Laws of Thermodynamics. As with the empirical laws of Mechanics, Newton's Laws, these foundational laws are neither proven nor derived; they are postulated, their consequences are explored, and their predictions are checked against experiment.

The First Law is the law of energy conservation, written from a thermal point of view: the change in energy of any object is the sum of the heat added to it and the work done on it (work done by the object counts as negative), ΔU = Q + W.

The Second Law is the law of entropy, a recognition that certain processes do occur (in chemistry they are known as two classes, reversible and irreversible processes) and that other processes never occur. This is news to Mechanics and E&M: many solutions to Newton's Laws or to Maxwell's Equations fall into the forbidden category. Nothing in Mechanics or E&M would suggest this, so the inclusion of Thermodynamics, and in particular of the Second Law, is pivotal for bringing theory into agreement with which processes do and do not occur in nature.

What exactly entropy is is difficult to describe, and this is the reason why many famous physicists have stated that one will not understand Thermodynamics until one knows it all and then learns it over again. In other words, the discovery of facts and laws throughout Thermodynamics will affect one's understanding of even the simplest thermal processes, such as measuring a temperature.

A working definition of entropy is that disorder is favored over order and that it takes work to create order (or that heat flows from hotter to colder objects). Any energy invested in any process yields an increase in entropy, which measures the fraction of the energy that is unavailable for future processes.

The Third Law is of little consequence for our course. It states essentially that absolute zero is unobtainable.

As thermal phenomena became better understood, it became clear that the initial definition of temperature was not up to the rigor and flexibility required by the progress being made. Consequently, a re-definition of temperature and of thermal equilibrium was undertaken. This resulted in the peculiar name of the last law: the Zeroth Law.

In it, a priori assumptions are made about thermal equilibrium, and it states that temperature is the quantity that is equal between three bodies A, B, and C which are in a certain kind of contact with each other: A is in perfect thermal contact with C, B is in perfect thermal contact with C, and A and B are perfectly thermally insulated from each other. The a priori assumptions concern the perfect contact and the perfect insulation.

Thermodynamics explains all thermal phenomena from a point of view of equilibrium and it does so very well, without making any assumptions about the microscopic nature of matter. Should our understanding of that ever change, Thermodynamics will remain exactly the same.

Kinetic Theory makes concrete assumptions about the microscopic nature of matter: everything consists of atoms, and the motion of the atoms is an energy we call heat. Consequently, we can calculate heat if we can build an object (typically a gas) from scratch with kinetic-theory methods. The atoms interact with each other according to Newton's Laws and to momentum conservation during collisions with other atoms. This can be done in principle, and the results are in agreement with nature, but it is cumbersome to do for more than a few dozen atoms. The main point is that Thermodynamics has thus been reduced to the Mechanics of very many particles (recall how many particles are in a mole of gas), and so has temperature: it is an average kinetic energy. At room temperature and atmospheric pressure, air molecules, for example, undergo on the order of 10^9 collisions per second.
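
A minimal sketch of "temperature as an average kinetic energy" for an ideal monatomic gas, using the standard (3/2) k_B T relation (the 300 K example value is mine):

    # Temperature as average kinetic energy: for an ideal monatomic gas,
    # the mean kinetic energy per atom is (3/2) k_B T, so T = 2 <KE> / (3 k_B).
    k_B = 1.380649e-23      # J/K

    def mean_ke_from_temperature(T):
        """Mean kinetic energy per atom at temperature T."""
        return 1.5 * k_B * T

    def temperature_from_mean_ke(mean_ke):
        """Kinetic-theory temperature from the mean kinetic energy per atom."""
        return 2.0 * mean_ke / (3.0 * k_B)

    print(mean_ke_from_temperature(300.0))       # ~6.2e-21 J per atom
    print(temperature_from_mean_ke(6.21e-21))    # ~300 K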

This was the motivation for developing yet another theory: Statistical Mechanics. As the name suggests, Stat Mech introduces statistical methods where Boltzmann's theory mostly relied on counting.

As a consequence, and because all other physics is constrained by thermal physics, the concepts of chance and probability enter all of classical physics: not "Newton's Law holds true" but "Newton's Law very probably holds true". Fortunately, when one actually calculates the probabilities, they turn out to be extremely lopsided for macroscopic objects.

For example, when one calculates such probabilities for two solids of just 300 and 200 atoms (technically 300 and 200 harmonic oscillators, which need not correspond one-to-one to atoms), the state called macroscopically the 'equilibrium state' has an overwhelming probability. To illustrate how extreme this is: if one draws the equilibrium state one inch wide on a horizontal axis and then calculates how long the axis would have to be to cover all other possible states, one finds that the origin of the x-axis would have to be plotted about 100,000 km to the left. The total probability is the area under the entire curve, from the origin out to infinity on the other side, but almost all of it resides within the one-inch-wide equilibrium region. However, the equilibrium state has nearby neighboring states (within that one inch) with probabilities similar to its own. Consequently, small changes do happen; in instrumentation they often show up as 'instrument noise'.
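
A sketch of the counting behind this example, using the standard Einstein-solid multiplicity Ω(N, q) = C(q + N − 1, q); note that the total of 100 energy units is an assumed value, since the handout does not state it:

    # Two Einstein solids, N_A = 300 and N_B = 200 oscillators, sharing
    # q_total = 100 units of energy (q_total is an assumed illustrative value;
    # the handout does not give it). Multiplicity of one solid:
    #   Omega(N, q) = C(q + N - 1, q)
    from math import comb

    N_A, N_B, q_total = 300, 200, 100

    def omega(N, q):
        """Ways to distribute q indistinguishable energy units over N oscillators."""
        return comb(q + N - 1, q)

    # The probability of finding q_A units in solid A is proportional to the
    # product of the two multiplicities (all microstates are equally likely).
    weights = [omega(N_A, q_A) * omega(N_B, q_total - q_A) for q_A in range(q_total + 1)]
    total = sum(weights)

    peak = max(range(q_total + 1), key=lambda q_A: weights[q_A])
    print("most probable q_A:", peak)     # energy splits roughly as 300:200
    print("P(exactly the peak):", weights[peak] / total)
    print("P(within 10 of the peak):",
          sum(weights[max(0, peak - 10):peak + 11]) / total)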

Another example is to calculate the probability that in a room all air molecules spontaneously migrate into the upper half, so that people in the lower half asphyxiate. Disturbingly, the probability is not zero! Reassuringly, the probability is so small that it will happen only once in about 100,000 times the known age of the universe. Regrettably, that COULD be during the next second.
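
A toy version of this estimate: the probability that all N molecules happen to be in the upper half is (1/2)^N, which underflows for realistic N (roughly 10^27 for a room), so the sketch below works with logarithms and deliberately small, illustrative N:

    # Probability that every one of N molecules is, at a given instant, in the
    # upper half of the room: (1/2)**N. Realistic N makes this underflow,
    # so we report the base-10 logarithm instead.
    import math

    for N in (10, 100, 1_000, 10**6):
        log10_p = N * math.log10(0.5)
        print(f"N = {N:>9,}: P = 10^({log10_p:,.0f})")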

I list these examples to illustrate two points at once: There is a fundamental change in our conception of physics. Everything is probabilistic. Everything. (i.e. now: Heat most probably flows from hot to cold, etc.)

On the other hand, in most practical cases the probabilities are so extreme that one does not have to worry about the off-chance. But that is not so in all cases. I suppose you will have to take the Stat Mech course if you want to know more.
