Collapse Theories
Peter J. Lewis

The collapse postulate in quantum mechanics is problematic. In standard presentations of the theory, the state of a system prior to a measurement is a sum of terms, with one term representing each possible outcome of the measurement. According to the collapse postulate, a measurement precipitates a discontinuous jump to one of these terms; the others disappear. The reason this is problematic is that there are good reasons to think that measurement per se cannot initiate a new kind of physical process. This is the measurement problem, discussed in section 1 below.

The problem here lies not with collapse, but with the appeal to measurement. That is, a theory that could underwrite the collapse process just described without ineliminable reference to measurement would constitute a solution to the measurement problem. This is the strategy pursued by dynamical (or spontaneous) collapse theories, which differ from standard presentations in that they replace the measurement-based collapse postulate with a dynamical mechanism formulated in terms of universal physical laws. Various dynamical collapse theories of quantum mechanics have been proposed; they are discussed in section 2.

But dynamical collapse theories face a number of challenges. First, they make different empirical predictions from standard quantum mechanics, and hence are potentially empirically refutable. Of course, testability is a virtue, but since we have no empirical reason to think that systems ever undergo collapse, dynamical collapse theories are inherently empirically risky. Second, there are difficulties reconciling the dynamical collapse mechanism with special relativity.
Third, the post-collapse state is not the same as the post-measurement state of standard quantum mechanics, raising the possibility that dynamical collapse theories do not solve the measurement problem after all. These challenges are described in sections 3, 4 and 5, respectively.

Assuming that these challenges can be met, dynamical collapse theories can lay claim to being serious contenders for the correct description of the quantum world. And the description they provide has a number of interesting consequences. First, it makes indeterminism an irreducible fact about the physical world. It has been argued that this has important consequences for the foundations of statistical mechanics, for free will, and for consciousness. These claims are discussed in section 6.

Second, many dynamical collapse theories imply that the quantum wave function is fundamental, and that particles are just a temporary “bunching up” of the fundamental wave-like entity. This understanding of the ontology of the quantum world gives rise to a new kind of vagueness, since a wave can be fuzzy around the edges in a way that a particle cannot. Furthermore, the quantum wave function is defined over a high-dimensional space, not the three-dimensional space of experience, suggesting to some that three-dimensionality is an illusion. Dynamical collapse theories share these implications with other “wave function only” theories, such as Everettian approaches, and share some of them with wave function realist versions of Bohmian approaches. These implications of dynamical collapse theories are considered in section 7.

1. The Measurement Problem

Quantum mechanics represents the state of a system in various ways, but the representation that is most perspicuous for understanding collapse theories uses a wave function. For a single particle, the wave function is a complex-valued function of three spatial dimensions and time: ψ(x, y, z, t).
The wave function changes over time according to a linear differential equation, the Schrödinger equation. To begin with, let us stipulate that a particle is located in a particular spatial region if and only if all the corresponding wave function amplitude is contained in that region. (We will have reason to relax this stipulation later.)

Figure 1: Three quantum states

So, for example, the wave function shown schematically (in one dimension) in figure 1(a) represents a particle in region A, and the one in figure 1(b) represents a particle in a distinct region B.

The trouble with quantum mechanics starts from the superposition principle, which says that the sum of any two quantum states is also a quantum state. That is, if we take the function in figure 1(a) and add it to the function in figure 1(b), we obtain another possible state of the particle, shown in figure 1(c). (We also need to rescale the function, due to the connection between the area under the curve and probability, to be explained shortly.) States like 1(c) are crucial to quantum mechanical explanations.

But according to our earlier stipulation, the wave function in figure 1(c) is not a state in which the particle is in region A, and it is not a state in which the particle is in region B. Nevertheless, when the position of a particle in such a state is measured, it is always found in one location or the other. Why should a state in which the particle is not in A and not in B generate a measurement result in which the particle is found in one of these locations?

The founders of quantum mechanics were aware of this problem. Born (1926) noted that the wave function can be used to generate the probabilities of the two outcomes: the square of the area under the wave function in region A is the probability that the particle will be found in region A, and similarly for region B. This is the Born rule.
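The superposition and Born-rule reasoning above can be sketched numerically. The following is an illustrative toy model, not anything from the text: the grid, packet widths, and the boundary between regions A and B are arbitrary choices, and probabilities are computed with the standard |ψ|² form of the Born rule.

```python
import numpy as np

# Toy 1-D wave functions on a spatial grid (illustrative parameters).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def packet(center, width=0.5):
    """A normalized real Gaussian packet, in the style of figure 1(a)/(b)."""
    psi = np.exp(-((x - center) ** 2) / (2 * width ** 2))
    return psi / np.sqrt(np.sum(psi ** 2) * dx)

psi_a = packet(-4.0)   # particle in region A (here, x < 0)
psi_b = packet(+4.0)   # particle in region B (here, x >= 0)

# Superposition principle: the sum is also a state, but it must be
# rescaled (normalized) so the total probability comes out to 1.
psi_c = psi_a + psi_b
psi_c /= np.sqrt(np.sum(psi_c ** 2) * dx)

# Born rule: the probability of finding the particle in a region is
# the integral of |psi|^2 over that region.
p_a = np.sum(psi_c[x < 0] ** 2) * dx
p_b = np.sum(psi_c[x >= 0] ** 2) * dx
print(round(p_a, 3), round(p_b, 3))   # 0.5 0.5 (the packets barely overlap)
```

Because the two packets are well separated, the superposition assigns a 50% chance to each region, exactly the situation of figure 1(c).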
But the Born rule just quantifies the problem: if the particle is not in region A, why is there a 50% chance of finding it there? The standard response, codified by von Neumann (1932), was to postulate that there are two dynamical processes by which the wave function changes over time. Between measurements, the wave function evolves continuously according to the Schrödinger equation, but during a measurement, the wave function jumps discontinuously into a state corresponding to a determinate location, with probabilities given by the Born rule. The latter is the collapse postulate.

Applied to a particle in the state shown in figure 1(c), the collapse postulate says that if the position of the particle is measured, there is a 50% chance that it will collapse to the state shown in figure 1(a), and a 50% chance that it will collapse to the state shown in figure 1(b). Hence even though the pre-measurement state is not one in which the particle is in region A, and not one in which it is in region B, the collapse postulate together with the post-collapse state can explain our measurement results, as well as the sense in which these results reveal the actual (post-measurement) location of the particle.

As it stands, though, the collapse postulate is untenable. The continuous, linear Schrödinger evolution and the discontinuous, non-linear collapse process are incompatible: neither can be reduced to the other. So to be consistent, quantum mechanics must postulate a sharp division between those physical processes that count as measurement processes and those that do not. This seems like a tall order. What’s more, since measuring devices are constructed out of particles that are not themselves being measured, the measuring device should operate according to the continuous Schrödinger evolution, and hence cannot instantiate the discontinuous collapse process. This is the much-discussed measurement problem.
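The von Neumann collapse postulate can be sketched in the same toy setting. Everything here is illustrative (the function name, the 50/50 state, the choice of x = 0 as the A/B boundary); the point is only the discontinuous jump to one region, with Born-rule probabilities.

```python
import numpy as np

rng = np.random.default_rng(42)

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def packet(center, width=0.5):
    psi = np.exp(-((x - center) ** 2) / (2 * width ** 2))
    return psi / np.sqrt(np.sum(psi ** 2) * dx)

# 50/50 superposition across regions A (x < 0) and B (x >= 0).
psi = packet(-4.0) + packet(4.0)
psi /= np.sqrt(np.sum(psi ** 2) * dx)

def measure_position(psi):
    """von Neumann collapse: jump discontinuously to the region-A or
    region-B state, with probabilities given by the Born rule."""
    p_a = np.sum(psi[x < 0] ** 2) * dx
    mask = (x < 0) if rng.random() < p_a else (x >= 0)
    post = np.where(mask, psi, 0.0)   # the other term simply disappears
    return post / np.sqrt(np.sum(post ** 2) * dx)

post = measure_position(psi)
# After collapse, all the amplitude sits in a single region:
print(round(np.sum(post[x < 0] ** 2) * dx))   # 0 or 1, never 0.5
```

Repeated runs land in region A about half the time, recovering the Born-rule statistics while leaving the particle with a determinate post-measurement location.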
It is worth noting, though, that the problem lies not with collapse per se, but with tying the collapse process to measurement. If a collapse process could be described that yields the same outcome without the appeal to measurement, it would not be subject to the same critique. This is the approach to the interpretation of quantum mechanics pursued by dynamical collapse theories.

Figure 2: Collapse for a single particle

2. Dynamical theories of collapse

The basic approach was first described by Pearle (1976): instead of one dynamical law describing measurements and another applying between measurements, a single dynamical law applies at all times, deviating just slightly from the Schrödinger equation. The first fully-developed theory along these lines was proposed by Ghirardi, Rimini and Weber (1986), and has become known as the GRW theory.

According to the GRW theory, the wave function for a single particle obeys a dynamical law that mostly coincides with the Schrödinger equation, except that there is a small chance per unit time that the wave function undergoes a spontaneous collapse process (or hit) in which it becomes highly localized around a point. More precisely, a hit multiplies the wave function by a narrow three-dimensional Gaussian (bell curve) in the coordinates of the particle concerned, where the width of the Gaussian is 10⁻⁵ cm.

The collapse process is indeterministic in two senses. First, whether a given particle undergoes a collapse is a random matter: there is a chance of 10⁻¹⁶ that any given particle will undergo a collapse in any given second. Second, if a particle undergoes a collapse, the location of the hit is random, with probabilities chosen so as to recover the Born rule: the chance that the Gaussian is centred in a particular region is given by the integral over that region of the pre-collapse wave function multiplied by the Gaussian.
For a single particle in the superposition state of figure 1(c), the effect of a hit is shown schematically (in one dimension) in figure 2. In this state, there is a 50% chance of the hit being centred in region A, and a 50% chance of it being centred in region B. If it is centred in region A, then the effect is as shown: almost all the wave amplitude is now in region A, although a tiny amount (not shown to scale!) remains in region B, due to the fact that the “tails” of the Gaussian extend to infinity.
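The GRW hit can be sketched on the same kind of toy state. This is a sketch under stated assumptions: the hit width is rescaled to arbitrary grid units (GRW's physical value is 10⁻⁵ cm, and the 10⁻¹⁶ per-second rate appears only in a comment), and the hit centre is sampled from the squared norm of the pre-hit wave function multiplied by the Gaussian.

```python
import numpy as np

rng = np.random.default_rng(7)

x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]

# Figure 1(c)-style superposition over regions A (x < 0) and B (x >= 0).
psi = np.exp(-((x + 4.0) ** 2) / 2) + np.exp(-((x - 4.0) ** 2) / 2)
psi /= np.sqrt(np.sum(psi ** 2) * dx)

SIGMA = 1.0  # hit width in toy grid units; GRW's physical value is 1e-5 cm
# GRW rate: each particle has a ~1e-16 chance per second of a hit.

def gaussian(c):
    return np.exp(-((x - c) ** 2) / (2 * SIGMA ** 2))

# Chance of the hit being centred at each grid point: the squared norm
# of the pre-hit wave function multiplied by the Gaussian.
weights = np.array([np.sum((psi * gaussian(c)) ** 2) * dx for c in x])
weights /= weights.sum()

c = rng.choice(x, p=weights)   # random hit centre (~50/50 region A vs B)
post = psi * gaussian(c)       # the hit multiplies psi by the Gaussian
post /= np.sqrt(np.sum(post ** 2) * dx)

# Almost all the amplitude is now near c, but the Gaussian's infinite
# tails leave a tiny residue in the other region.
far = (x < 0) if c >= 0 else (x >= 0)
print(np.sum(post[far] ** 2) * dx < 1e-3)   # True: the residue is tiny
```

The residual amplitude in the far region is the "tails" problem flagged above: after a hit the particle is overwhelmingly, but not entirely, localized in one region.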