
RELATIVE REALITY

______

A thesis presented to the faculty of the College of Arts & Sciences of Ohio University

______

In partial fulfillment of the requirements for the degree Master of Science

______

Gary W. Steinberg June 2002

© 2002 Gary W. Steinberg All Rights Reserved

This thesis entitled RELATIVE REALITY

BY

GARY W. STEINBERG

has been approved for the Department of Physics and Astronomy and the College of Arts & Sciences by

David Onley Emeritus Professor of Physics and Astronomy

Leslie Flemming Dean, College of Arts & Sciences

STEINBERG, GARY W. M.S. June 2002. Physics

Relative Reality (41pp.)

Director of Thesis: David Onley

The consequences of Einstein’s Special Theory of Relativity are explored in an imaginary world where the speed of light is only 10 m/s. Emphasis is placed on phenomena experienced by a solitary observer: the aberration of light, the Doppler effect, the alteration of the perceived power of incoming light, and the perception of time. Modified ray-tracing software and other visualization tools are employed to create a video that brings this imaginary world to life. The process of creating the video is detailed, including an annotated copy of the final script. Some of the less explored aspects of relativistic travel—discovered in the process of generating the video—are discussed, such as the perception of moving backwards when actually accelerating from rest along the forward direction.

Approved: David Onley Emeritus Professor of Physics & Astronomy

Table of Contents

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
I. INTRODUCTION
II. EINSTEINIAN OPTICS
   A. Aberration of Light
   B. Doppler Effect
   C. Power Distortion
   D. Temporal Distortion
III. VISUALIZATION
   A. Ray-Tracing
   B. Color Shifting
   C. Power Shifting
IV. THE VIDEO
   A. Introduction
   B. Temporal Distortion
   C. Spatial Distortion
   D. Spectral Distortion
   E. Power Distortion
   F. Combined Effect
   G. Conclusion


List of Figures

Fig. 1. The capturing of a sky on a unit sphere.
Fig. 2. The perspective camera.
Fig. 3. The CIE-XYZ color-matching functions.
Fig. 4. Chromaticity Diagram for sRGB.


List of Tables

Table I. Chromaticity Coordinates of sRGB primaries and white point.

I. INTRODUCTION

George Gamow’s popular treatment of modern physics, Mr. Tompkins in Wonderland1, was first published in 1940 and has remained continuously in print for the past sixty years. The adventures of its protagonist, C.G.H. Tompkins, have captivated generations of readers. Mr. Tompkins’ first adventure, and possibly his most famous, occurs after he falls asleep during a lecture on Einstein’s Special Theory of Relativity and dreams that he has awakened in a world where the speed of light does not have its conventional value; instead, it has a drastically smaller value, one closer to a human scale. Due to this change Mr. Tompkins witnesses many marvelous sights, from bicycles that contract in size to seemingly ageless travelers. Although, as we shall see, Gamow’s description of a world with a much slower speed of light has some flaws, it nevertheless conveys to a general audience the bizarre nature of relativity and is the inspiration for this thesis. This is an attempt to create a virtual reality that obeys Gamow’s principal conceit, which is to realize a world with a much slower speed of light and to examine the consequences. As this exercise is also motivated by a desire to popularize and elucidate physics as Gamow had done, the accompanying video presentation, intended for a broad, popular audience, has been created. Properly speaking, the video is the thesis: it is intended to stand alone and be understood without this paper.

1 G. Gamow, Mr. Tompkins in Paperback: Containing Mr. Tompkins in Wonderland and Mr. Tompkins Explores the Atom (Cambridge University Press, New York, 1965).

This paper, then, is meant as a supplement to the video. It seeks to explain what is being shown there and the science behind it in more specific terms than the medium of video allows. It is also meant to justify some of the choices made in the presentation. Information on the process of generating the video and other technical issues is also discussed.

II. EINSTEINIAN OPTICS

The description of the alternate reality with the slower speed of light in Mr. Tompkins contains a major flaw, although quite a common one. Gamow’s hero sees a mysteriously contracted bicyclist pass him on the street. Later, as he rides on his own bicycle in pursuit, he discovers that the bicyclist is now completely normal while the city blocks and other stationary objects are contracted. The problem is that Mr. Tompkins could not possibly see things as simply as described. Although the bicyclist and the city are each, in turn, contracted relative to Mr. Tompkins, he cannot perceive it in a clear fashion. This is because simultaneously released light signals from different parts of the bike or street do not reach Mr. Tompkins’ eyes simultaneously: the light reaching Mr. Tompkins at any given instant conveys images of objects as they were at the time the light left them. His view either of moving objects while stationary or of the stationary landscape while he is moving is distorted. In other words, the Lorentz transformations, which describe length contraction and other aspects of relativity, cannot alone describe visual phenomena; a new Einsteinian optics2 must be developed. Dealing with Einsteinian optics is simplified considerably by the realization that the totality of light that reaches a certain point at a certain time is what the observer interacts with. To simplify discussion, we will refer to the entirety of light that reaches an observer at a given point and time as his ‘sky’. This sky can be captured on a unit sphere centered on the observer (Fig. 1) that would thus contain a snapshot of all the light the

observer would receive at a given instant.3 That the observer may be in motion through the point does not alter the information contained in his sky; it merely distorts how he perceives it or, alternatively, how it is to be presented in the spherical snapshot. Consider an observer traveling through a static landscape in an arbitrary fashion. Calculating how the terrain would look from his perspective—creating a snapshot of his sky—would be a difficult task if tackled directly. The observer’s sky is different depending on his state of motion. By developing methods to map one observer’s snapshot into another’s, we can avoid the computationally daunting task of directly constructing the sky for an observer moving in arbitrary fashion through a static landscape. Instead, we first generate a snapshot of the landscape as it would appear to an imaginary observer who is at the same point at the same time but at rest relative to the terrain; that is, we record the terrain’s contribution to this observer’s sky. For objects that are stationary relative to an observer the process of rendering their contribution to his sky is comparatively straightforward since we can call upon traditional computer graphic techniques in the absence of the distorting phenomena caused by relativistic motion. Once this is complete the relevant section of the sky can be mapped into the actual observer’s sky, which takes into account his motion. This mapping has several distinct aspects. One could separate the information contained in a snapshot of the sky into four categories: spatial information, which records where in the sky the light originated; spectral information, detailing the wave properties of the incoming light; power or intensity information; and, finally, temporal information, which strictly speaking is not contained in a single snapshot but in a series of snapshots.

2 W. Rindler, Essential Relativity: Special, General, and Cosmological: Revised Second Edition (Springer-Verlag, New York, 1977), pp. 54-60.
3 Discussion of the sky based on Rindler.

Fig. 1. The capturing of a sky on a unit sphere.

A. Aberration of Light

In all situations we will need to call upon the mapping of spatial information. The distortion, or aberration, of spatial relationships resulting from this is known as the aberration of light. Einstein completely described this phenomenon in his original 1905 paper4, although its implications were not investigated for some time. It can be stated in several forms, but the most useful for our purposes is:

$$\cos\theta' = \frac{\cos\theta + \beta}{\beta\cos\theta + 1} \qquad (1)$$

Where θ is the angle between the incoming light ray in the original reference frame and the direction of motion of the new reference frame, θ' is the angle in the new reference frame, and β is the ratio of the new reference frame’s speed to the speed of light5. In general this causes objects to appear to crowd towards the direction of motion of the observer: if β > 0 then θ' ≤ θ. It wasn’t until the late 1950s that exploration into some of this aberration’s consequences began, principally by Roger Penrose and, independently, by James Terrell6,7. They discovered several facets of Einsteinian optics that had been overlooked. Of primary importance was the realization, previously stated, that the Fitzgerald-Lorentz contraction is invisible. They discovered that objects that subtend a small solid angle will

4 Albert Einstein, “On the Electrodynamics of Moving Bodies,” Annalen der Physik 17, 891 (1905).
5 Since this thesis is primarily concerned with modeling travel of an observer through a static landscape, primed variables (e.g., θ') shall refer to measurements made in the observer’s reference frame, while unprimed variables shall refer to ones made in the rest frame of the landscape.
6 R. Penrose, “The Apparent Shape of a Relativistically Moving Sphere,” Proc. Camb. Phil. Soc. 55, 137-139 (1959).
7 J. Terrell, “Invisibility of the Lorentz Contraction,” Phys. Rev. 116 (4), 1041-1045 (1959).

appear rotated, but otherwise undistorted, and spheres of any size will always have a circular outline. This effect is known as Penrose rotation, or sometimes Penrose-Terrell rotation. In essence, the transit time of light counters length contraction: strangely enough, objects would appear more distorted if the principle of relativity did not hold. Another interesting consequence of the aberration of light, and one that does not appear to be described elsewhere, is the sensation of moving backwards when accelerating from rest. When we drive with constant velocity at night, the moon and stars appear stationary due to their extreme distance from us. According to the aberration formula, if we accelerate, the angle between the image of any star on our sky and our direction of motion will decrease, giving the peculiar impression that we are moving backwards while accelerating: all our reference points will move forward in the sky, making us appear to move in reverse. The apparent movement of closer objects masks this effect, so that their actual motion in the sky generally overcomes this apparent regression. However, when accelerating from rest there is a short time for which the regression always dominates. In ordinary experience this time is negligible and the amount of regression imperceptible so the effect is not noticed, but it is there.
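As a concrete illustration of Eq. (1), the short Python sketch below (not part of the thesis software; the value of β is arbitrary) maps a few rest-frame angles into the frame of a moving observer and shows the crowding toward the direction of motion.

```python
# Minimal sketch of the aberration formula, Eq. (1): map the rest-frame angle
# theta of an incoming ray to the angle theta' seen by an observer moving at
# beta * c. The value beta = 0.75 is only an example.
import math

def aberrate(theta, beta):
    """Return theta' (radians) given the rest-frame angle theta and beta = v/c."""
    cos_tp = (math.cos(theta) + beta) / (beta * math.cos(theta) + 1.0)
    return math.acos(cos_tp)

# Images crowd toward the direction of motion: theta' <= theta for beta > 0.
for deg in (30, 60, 90, 120, 150):
    print(deg, round(math.degrees(aberrate(math.radians(deg), 0.75)), 1))
```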

B. Doppler Effect

Spectral information is translated from one observer’s snapshot to another by the relativistic Doppler equation2:

$$\lambda' = \lambda \cdot \frac{1 + \beta\cos\theta'}{\sqrt{1 - \beta^{2}}} \qquad (2)$$

Where λ is the original wavelength, λ' is the new wavelength, and all other variables are as described previously. One interesting aspect of this transformation is that there is

always an angle, $\Theta' = \cos^{-1}\!\left[\frac{1}{\beta}\left(\sqrt{1-\beta^{2}} - 1\right)\right]$, at which the wavelength is unaltered: an annular region of the sky will always be visible even if the rest of the light in the sky is shifted out of the visible frequency range. Also, Θ' does not equal π/2 for β ≠ 0.
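A small Python sketch of Eq. (2) and of the null-shift angle Θ' makes the annular-region remark concrete; the value of β and the 555 nm rest wavelength below are illustrative choices, not requirements of the thesis.

```python
# Minimal sketch of Eq. (2) and the null-shift angle Theta'. Angles follow the
# thesis's primed (observer-frame) convention; beta = 0.5 is only an example.
import math

def doppler_wavelength(lam, theta_p, beta):
    """Observed wavelength for rest wavelength lam and observer-frame angle theta_p (Eq. 2)."""
    return lam * (1.0 + beta * math.cos(theta_p)) / math.sqrt(1.0 - beta**2)

def null_shift_angle(beta):
    """Angle Theta' at which the wavelength is unaltered."""
    return math.acos((math.sqrt(1.0 - beta**2) - 1.0) / beta)

beta = 0.5
print(null_shift_angle(beta) > math.pi / 2)                       # True: the unshifted ring is not at 90 degrees
print(doppler_wavelength(555e-9, null_shift_angle(beta), beta))   # ~555 nm: unshifted on that ring
```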

C. Power Distortion

The translation of power information from one reference frame to another is not as straightforward as the other distortions, since it is a combination of electromagnetic field transformations and the change of the area on the sky sphere of an object’s image. Since, in constructing the video, we are dealing with discrete picture elements or ‘pixels’, which we approximate as an array of points with a fixed number per unit area, it is apparent brightness, measured in W·m⁻²·sr⁻¹, that must be transformed. This is accomplished using the following8:

$$B' = \frac{B}{\gamma^{4}\,(1 - \beta\cos\theta')^{4}}, \qquad \gamma = \frac{1}{\sqrt{1-\beta^{2}}} \qquad (3)$$

In which B' is the new apparent brightness and B is the original apparent brightness.
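A brief Python sketch of Eq. (3) follows; β is an arbitrary example value, and θ' is taken here, as in Eq. (1), to be measured from the direction of motion.

```python
# Minimal sketch of Eq. (3): the apparent-brightness transformation.
import math

def transform_brightness(B, theta_p, beta):
    """Apparent brightness B' for rest-frame apparent brightness B (W m^-2 sr^-1)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return B / (gamma**4 * (1.0 - beta * math.cos(theta_p))**4)

print(transform_brightness(1.0, 0.0, 0.75))      # toward the direction of motion: strongly brightened
print(transform_brightness(1.0, math.pi, 0.75))  # away from it: dimmed
```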

D. Temporal Distortion

The famous twin paradox is an excellent example of relativistic temporal distortion: the traveling twin experiences time in a different manner due to his motion. According to his perceptions, the clocks accompanying him on his journey run normally while all other clocks act in a bizarre fashion. Just as length contraction is invisible to the eye because of the time it takes light to travel, so is time dilation also obscured. This becomes clear if one realizes that a clock is similar to a light source of fixed frequency:

8 T. Greber and H. Blatter, “Aberration and Doppler shift: The cosmic background radiation and its rest frame,” Am. J. Phys. 58 (10), 944 (1990).

both record time in a periodic function, one with a dial, one with an oscillating electromagnetic field. It is obvious, then, that just as light sources will appear blue shifted when approaching and red shifted when departing due to the Doppler effect, so will a clock dial appear to run fast when approaching and slow when receding. The significant difference, however, is that for light one does not care what part of the cycle the light is in or how many cycles have transpired, while a clock shows the accumulated time. To deal with this we take an approach similar to ones employed with the other types of distortion. If we have an array of synchronized clocks at rest relative to one another, the images these clocks present to a stationary observer would not be synchronized. They would in fact display a time that differs from their actual time by an amount related to their distance away from the observer. Clearly, for an observer at the origin and a clock at $\vec{r}_{clock}$, this difference would be $-\,|\vec{r}_{clock}|/c$. This difference remains the same for all observers at the origin, no matter their state of motion. Therefore if an observer first synchronizes his own clock, measuring t', with the array, measuring t, and then proceeds to follow a 4-vector trajectory, $s(t') = [\,c\,t(t'),\ \vec{r}(t')\,]$, the time, $t_{clock}$, that any particular clock within the array will appear to read is simply:

$$t_{clock} = t(t') - \frac{\left|\vec{r}(t') - \vec{r}_{clock}\right|}{c} \qquad (4)$$

It should be noted that calculating the trajectory, s(t'), for accelerations is simplified considerably by using a speed measure known as rapidity, φ, defined to be $\tanh^{-1}(\beta)$. Unlike the more conventional beta, rapidity is additive and can range from −∞ to +∞. A linear acceleration in rapidity (φ = αt') starting from rest at the origin and proceeding along the x-axis would give the following 4-space trajectory:

$$s(t') = \left[\,\frac{c}{\alpha}\sinh(\alpha t'),\ \ \frac{c}{\alpha}\left[\cosh(\alpha t') - 1\right],\ \ 0,\ \ 0\,\right] \qquad (5)$$

All accelerations in the video are linear with regard to rapidity and have similar trajectories.
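The two formulas combine naturally in code. The sketch below (not the thesis software) evaluates the trajectory of Eq. (5) and the apparent clock reading of Eq. (4) for a clock 30 m down the track; c = 10 m/s echoes the video, while the rapidity rate α is an assumed value.

```python
# Minimal sketch of Eqs. (4) and (5): what a trackside clock appears to read as
# the car accelerates from rest with rapidity phi = ALPHA * t'.
import math

C = 10.0      # artificial speed of light, m/s
ALPHA = 0.2   # rapidity gained per second of dashboard (proper) time; assumed value

def trajectory(t_prime):
    """Coordinate time t and position x of the car (Eq. 5), starting from rest at the origin."""
    t = math.sinh(ALPHA * t_prime) / ALPHA
    x = (C / ALPHA) * (math.cosh(ALPHA * t_prime) - 1.0)
    return t, x

def apparent_clock(t_prime, x_clock):
    """Time the clock at x_clock appears to read when the dashboard reads t_prime (Eq. 4)."""
    t, x = trajectory(t_prime)
    return t - abs(x_clock - x) / C

for tp in (0.0, 2.0, 4.0):
    print(tp, round(apparent_clock(tp, 30.0), 2))   # at rest, the 30 m clock appears 3 s behind
```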

III. VISUALIZATION

To realize Mr. Tompkins’ world on screen required several visualization tools. Of these the most often used was a modified ray-tracing program based on Persistence of Vision9 (POV-Ray) that created all the three-dimensional imagery used in the video, with only a few exceptions. Originally intended to obey non-relativistic optics, it was altered to account for the spatial distortions created by the aberration of light. It, however, could not account for the Doppler-shift or for power distortion. Separate software was created in both these cases to take images from the ray-tracing program and alter them in accordance with these effects. Together these three programs generated the raw footage of the video. Adobe After Effects, a commercial animation program, was used to string the raw footage together, add effects such as overlaid text, and composite separate picture elements together. Another commercial product, Adobe Premiere, was then used to edit the final video, adding sound effects, music, and audio commentary.

A. Ray-Tracing

The backbone of this endeavor was the ray-tracing software; the video would not have been possible without it. It generates realistic-looking imagery by simulating photography in reverse. Instead of capturing light on film, a ray tracer projects virtual light from the image plane into the modeled environment. The virtual light rays are ‘cast’ from a focal point, which represents the observer, through a pixel in the image plane, and out into the virtual world. The physical properties of real light are modeled in this virtual world in order to trace the light ray back to its original sources. Modeled properties of these sources, along with objects the light interacts with on its journey, determine the color and intensity of the relevant pixel.

9 Source code and compiled binaries can be found at http://www.povray.org/.

Ray-tracers vary from the crude to the elaborate, but the initial algorithm used to cast rays from the image plane is basically the same. For a perspective camera, the camera’s coordinate system is chosen such that the intended observer is at the origin and the image plane a unit distance away along the z-axis (Fig. 2). The image plane, of width w and height h, is divided into a 2-dimensional array of A by B pixels, through which imaginary rays are cast. For a given pixel (a, b) the directional vector, r̂', of the resulting ray is simply:

$$\hat{r}' = \frac{\vec{r}\,'}{|\vec{r}\,'|}, \qquad \vec{r}\,' = \left[\frac{w}{2A}\,(1 + 2a - A),\ \ \frac{h}{2B}\,(1 + 2b - B),\ \ 1\right] \qquad (6)$$

This ray, once created, can then be altered by rotation and translation, usually with matrices, to transform it into another coordinate system. These rays then propagate through the environment, the mechanics of which are specific to the particular ray-tracer and how it models space. All ray tracers employ the same initial casting algorithm to render a two-dimensional image and they do not make any assumptions about the nature of the landscape they are displaying, as polygonal based systems do. Therefore they are ideally suited for alteration to accommodate the spatial distortion caused by the aberration of light. To account for the aberration of light all that needs to be done is map the outgoing ray, r̂', which is created in the camera or observer’s reference frame, into the world’s reference frame, according to the inverse aberration formula, Eq. 1 with β → −β. In Cartesian space, the inverse aberration formula is really just the Lorentz transformation of the initial emission of light and can be stated as follows for motion in the z-direction (with $r' = |\vec{r}\,'|$):

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x' \\ y' \\ \gamma\,(z' - \beta r') \end{bmatrix} \qquad (7)$$

Fig. 2. The perspective camera.


Thus an initial ray [x', y', z'], which represents pixel (a, b), now propagates along [x, y, z]. Subsequent interaction with the virtual world is unaltered. The result is a relativistic ray-tracer.
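To make the modification concrete, here is a minimal Python sketch (not the actual POV-Ray patch) of the two steps just described: building the pixel ray of Eq. (6) and mapping it into the landscape frame as in Eq. (7). The image dimensions and β are arbitrary example values.

```python
# Minimal sketch of the relativistic ray-casting step: pixel ray (Eq. 6) followed
# by the inverse aberration mapping for motion along the camera's z-axis (Eq. 7).
import math

def pixel_ray(a, b, A, B, w, h):
    """Unnormalized ray through pixel (a, b) of an A x B image plane of size w x h."""
    return (w * (1 + 2 * a - A) / (2 * A),
            h * (1 + 2 * b - B) / (2 * B),
            1.0)

def to_world_frame(ray, beta):
    """Map a camera-frame ray into the landscape frame (Eq. 1 with beta -> -beta)."""
    x, y, z = ray
    r = math.sqrt(x * x + y * y + z * z)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (x, y, gamma * (z - beta * r))

# Example: a pixel near the center of a 640 x 480 image, camera moving at beta = 0.9.
ray = pixel_ray(320, 240, 640, 480, 4.0 / 3.0, 1.0)
print(to_world_frame(ray, 0.9))
```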

B. Color Shifting

It was decided that the most instructive and intelligible way to show the Doppler effect was to choose one wavelength, one pure spectral color, as the only color present in the artificial world at rest and to show that color shift as the viewer traverses the landscape. This is a different approach from that which is often found in the literature10, where objects at rest are supposed to emit blackbody radiation at a certain temperature; the Doppler shift has the effect of shifting this temperature. The advantage of our approach is entirely pedagogical, as one does not have to understand blackbody radiation to deal with this topic, and interpretation is more straightforward. The disadvantage is twofold: first, it is impossible to display pure spectral color with any currently available video displays, so only an approximation can be made. Second, this approach will eventually cause wavelengths to move out of the visual range and become completely invisible. By contrast, a portion of the blackbody radiation will always be visible (although it might be so dim as to be practically invisible).

10 U. Kraus, “Brightness and color of rapidly moving objects: The visual appearance of a large sphere revisited,” Am. J. Phys. 68 (1), 56-60 (2000).

Representing any color accurately on screen is an inexact science at best, as anyone who has dealt with digital imagery can attest. All modern color displays employ the tristimulus principle in order to render a large color palette using only a few primary colors: The eye interprets color based upon the relative stimulation of three types of color receptors, or cones, in the retina. A pair of light sources that stimulate these cone cells in the same manner, no matter their true spectral characteristics, will be perceived as the same color with the same intensity. Therefore by varying the individual intensities of just a few primary colors (usually three) and combining the output, one can match the color and intensity of the original. Most colors in the real world can be accurately reproduced in this manner. Ideally, one would like to choose primary colors such that each primary would only stimulate one of the three types of cone cells. This, however, turns out to be physically unrealizable because the sensitivities of the cone cells overlap. However, the Commission Internationale de l’Eclairage (CIE) defined a set of imaginary primaries that accomplish just that11. This system is known as the CIE-XYZ. Its three primaries, X, Y, and Z, only stimulate the red, green, and blue sensitive cells in our retinas, respectively. Although they cannot be created physically, these imaginary primaries facilitate the discussion of device independent color-reproduction systems. By computing the illumination of each primary necessary to produce a specific color, one can categorize every perceptible color uniquely and empirically.

11 David Travis, Effective Color Displays: Theory and Practice (Academic Press, New York, 1991), pp. 89-97.

Given a color’s spectrum, I(λ), where I is the intensity and λ is the wavelength, three color-matching functions, X(λ), Y(λ), and Z(λ), are used to determine the illumination of each of the imaginary primaries necessary to reproduce the color. These color-matching functions were arrived at empirically, originally in a series of studies conducted during the 1930s that established their values in 5 nm increments. Subsequent work has refined the table of values to greater precision and with 1 nm increments12 (Fig. 3). They are used to define three coordinates in a Cartesian 3-space, (X, Y, Z):

$$X = \int I(\lambda)\,X(\lambda)\,d\lambda, \qquad Y = \int I(\lambda)\,Y(\lambda)\,d\lambda, \qquad Z = \int I(\lambda)\,Z(\lambda)\,d\lambda \qquad (8)$$

For pure spectral colors, $I(\lambda) = I_{0}\,\delta(\lambda - \lambda_{0})$, where λ0 is the wavelength of the light, and the XYZ coordinates are merely the values of the matching functions at λ0. It is useful to define a related coordinate system that separates intensity from color. The Y coordinate is retained as the measure of intensity since this corresponds to the colors to which the eye is most sensitive, and additional coordinates x and y, also referred to as the chromaticity of the color, are defined as follows:

$$x = \frac{X}{X + Y + Z}, \qquad y = \frac{Y}{X + Y + Z} \qquad (9)$$

12 Günter Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulas (John Wiley & Sons, New York, 1967), pp. 238-252.

Fig. 3. The CIE-XYZ color-matching functions. [Figure: the X, Y, and Z tristimulus values plotted against wavelength, 360-800 nm.]
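As an illustration of Eqs. (8) and (9), the short sketch below integrates a sampled spectrum against sampled color-matching functions; the tiny arrays are placeholders for the real CIE 1931 tables, which would be loaded from data in practice.

```python
# Minimal sketch of Eqs. (8) and (9): tristimulus integration and chromaticity.
# The sample grids below are placeholders, NOT real CIE 1931 data.
import numpy as np

def tristimulus(wavelengths, intensity, xbar, ybar, zbar):
    """X, Y, Z of a spectrum I(lambda) sampled at the given wavelengths (Eq. 8)."""
    X = np.trapz(intensity * xbar, wavelengths)
    Y = np.trapz(intensity * ybar, wavelengths)
    Z = np.trapz(intensity * zbar, wavelengths)
    return X, Y, Z

def chromaticity(X, Y, Z):
    """(x, y) chromaticity coordinates (Eq. 9)."""
    s = X + Y + Z
    return X / s, Y / s

lam = np.linspace(400.0, 700.0, 4)                      # placeholder wavelength grid (nm)
xb = np.array([0.1, 0.3, 1.0, 0.2])                     # placeholder matching-function samples
yb = np.array([0.0, 0.7, 0.9, 0.1])
zb = np.array([0.8, 0.2, 0.0, 0.0])
I = np.array([0.0, 1.0, 0.0, 0.0])                      # a narrow spectral line near 500 nm
print(chromaticity(*tristimulus(lam, I, xb, yb, zb)))
# For a pure spectral line I(lambda) = I0 * delta(lambda - lambda0), the integrals
# collapse to the matching-function values at lambda0, as noted in the text.
```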

It is particularly helpful to plot colors on the xy-plane in what is known as a chromaticity diagram. Pure spectral colors form a horseshoe shape that surrounds all perceivable colors. Points at the center of the horseshoe (Fig. 4) are “colorless”, or gray/white depending on the intensity. Following a straight line from a white-point out to the horseshoe one finds all the colors along the path share the same hue but grow more vivid or saturated the closer they are to the horseshoe. Any physical color-reproduction system can be modeled using this system by determining the chromaticity of each of its primary colors and its white point, or the color of all its primaries combined at maximum intensity. For three primaries a triangle is defined, the interior of which, called the gamut, is the set of colors reproducible with that system. It is important to point out that no matter what three primaries are chosen, the pure spectral colors always lie outside the gamut. The vast majority of modern color video systems, from televisions to computer monitors, employ the RGB (Red, Green, Blue) color system. The specific chromaticities of the primaries vary widely, as does the white point, depending on the specifics of the equipment used, but several standards have been created over time to minimize this variation. The current preferred standard is known as sRGB and is the one employed in making the video. Its primaries and white point are defined as follows13:

13 International Electrotechnical Commission, “Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space – sRGB”, IEC 61966-2-1 (1999). Also see http://www.w3.org/Graphics/Colors/sRGB.

Fig. 4. Chromaticity Diagram for sRGB. [Figure: the xy chromaticity plane showing the spectral locus (360-830 nm), the sRGB Red, Green, and Blue primaries, and the white point.]

Table I. Chromaticity Coordinates of sRGB primaries and white point.

        Red      Green    Blue     White
x       0.6400   0.3000   0.1500   0.3127
y       0.3300   0.6000   0.0600   0.3290

The illumination of each of these primaries is defined to vary from 0 to 1.0 such that full illumination of all primaries creates the white described. This defines a new space, sRGB, with coordinates $[R_{sRGB}, G_{sRGB}, B_{sRGB}]$. Converting a color between CIE-XYZ and sRGB is a straightforward matrix computation, since they are both linear spaces and they obey the following transformation13:

$$\begin{bmatrix} R_{sRGB} \\ G_{sRGB} \\ B_{sRGB} \end{bmatrix} = \begin{bmatrix} 3.2410 & -1.5374 & -0.4986 \\ -0.9692 & 1.8760 & 0.0416 \\ 0.0556 & -0.2040 & 1.0570 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (10)$$

Out-of-gamut colors will result in one or more of the sRGB coordinates being negative or greater than one. Since all the colors shown during the spectral distortion sequence are supposed to be pure spectral colors, which are all outside the gamut, a method of approximating the colors and bringing them into the gamut had to be chosen. The simplest technique is to truncate the data to fit into the physically realizable range: values greater than one are replaced with one; values less than zero become zero. This provided reasonable results but also caused a considerable loss of image definition, especially with bright colors. The process of taking raw images of the environment from POV-Ray—which already account for the aberration of light—and altering them to account for the Doppler effect was a multi-step process. First, in computing the original frames, the velocity, camera angle, and orientation of the camera were recorded. This information was used to compute the angle, θ', between each pixel on the image plane and the direction of

motion. The RGB values of the pixel, assumed to follow the sRGB standard, were then transformed into xyY-space. The original chromaticity coordinates were replaced with the chromaticity of a spectral color of wavelength λ', Doppler shifted from the original 555 nm wavelength:

$$\lambda' = 555\,\mathrm{nm} \cdot \frac{1 + \beta\cos\theta'}{\sqrt{1 - \beta^{2}}} \qquad (11)$$

Where λ = 555 nm is the rest wavelength used for our monochromatic scenes. The resulting xyY coordinates were returned to sRGB space via CIE-XYZ. The new pixel then replaced the old and the final output was created.
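A condensed Python sketch of this per-pixel pass is given below. It is an approximation of the procedure described above, not the actual software: `spectral_chromaticity` stands in for a lookup of (x, y) on the spectral locus and is not real CIE data, the pixel values are treated as linear sRGB (gamma companding is ignored), and the inverse matrix is computed from Eq. (10).

```python
# Minimal sketch of the color-shifting pass: sRGB pixel -> xyY, replace the
# chromaticity with that of the Doppler-shifted spectral color, return to sRGB,
# and truncate out-of-gamut values.
import math
import numpy as np

XYZ_TO_SRGB = np.array([[ 3.2410, -1.5374, -0.4986],
                        [-0.9692,  1.8760,  0.0416],
                        [ 0.0556, -0.2040,  1.0570]])
SRGB_TO_XYZ = np.linalg.inv(XYZ_TO_SRGB)

def spectral_chromaticity(lam_nm):
    """Placeholder for a spectral-locus lookup of (x, y); not real CIE data."""
    return (0.30, 0.60) if lam_nm < 600.0 else (0.60, 0.33)

def shift_pixel(rgb, theta_p, beta, lam0_nm=555.0):
    """Doppler-shift one linear-sRGB pixel seen at observer-frame angle theta_p."""
    X, Y, Z = SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)    # only Y (brightness) is kept
    lam = lam0_nm * (1.0 + beta * math.cos(theta_p)) / math.sqrt(1.0 - beta**2)  # Eq. (11)
    x, y = spectral_chromaticity(lam)
    X_new, Z_new = x * Y / y, (1.0 - x - y) * Y / y         # rebuild XYZ from (x, y, Y)
    out = XYZ_TO_SRGB @ np.array([X_new, Y, Z_new])
    return np.clip(out, 0.0, 1.0)                           # truncate out-of-gamut values

print(shift_pixel([0.2, 0.8, 0.2], theta_p=math.pi / 2, beta=0.25))
```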

C. Power Shifting

Power shifting was accomplished in a similar manner to color shifting. Individual frames were created using POV-Ray and the orientation, camera angle, and velocity were recorded in a data file. After converting each pixel’s RGB values into xyY values in the same manner as above, the Y-values, which represent brightness on a linear scale, were transformed according to Eq. (3). To create the grayscale images, the x and y values were replaced with the x and y values of the sRGB white point before conversion to RGB color-space. In the case of color images when using the full relativistic simulation, the x and y values were determined according to the color-shifting algorithm before the final RGB values were generated.
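A matching sketch of the power-shifting pass is given below, with illustrative values only: each pixel's Y is scaled by Eq. (3), and for the grayscale clips the chromaticity is pinned to the sRGB white point of Table I.

```python
# Minimal sketch of the power-shifting pass described above.
import math

WHITE_POINT_XY = (0.3127, 0.3290)   # sRGB white point, Table I

def shift_pixel_power(Y, theta_p, beta):
    """Transformed brightness Y' of Eq. (3) for a pixel at observer-frame angle theta_p."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return Y / (gamma**4 * (1.0 - beta * math.cos(theta_p))**4)

# Grayscale output: keep only the shifted Y and the white-point chromaticity.
Y_new = shift_pixel_power(0.5, math.radians(30.0), 0.8)
print(Y_new, WHITE_POINT_XY)
```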

IV. THE VIDEO

With all the visualization tools in place we now had the means to render a wide variety of scenes in our virtual world. We found ourselves continually exploring this new

frontier, not always prepared for what we saw. Only a few of these sights were chosen for inclusion in the video, some obviously staged to demonstrate a point, others because of their beauty or novelty. Within the video some account is made of the cause of these novel sights, but the presentation makes no attempt to explain Special Relativity. No explanation of Lorentz contraction is made, for example, because we had little to offer besides a standard textbook discussion. An effort was made, within the desired roughly half-hour format, to introduce the viewer to what we considered to be unfamiliar, to tie in later observations with earlier ones, and to give some coherence to the strange world the viewer encounters. The following dissects the video into its individual elements and provides a transcript of the narration.

A. Introduction

Since this video is really an attempt to alter the natural speed of things, time-lapse photography, which does the same thing, serves as an intermediary between our everyday world and the virtual world of Relative Reality. As the images loosen the viewer’s expectations of the world, the narrator eases him into the virtual world. Most of us have heard that the speed of light acts as an ultimate speed limit in the universe. This, according to Einstein’s Special Theory of Relativity, is inescapable. Even if you and a friend contrive to go in opposite directions each at as close to the speed of light as possible, you still will not see your friend departing at a speed greater than light—that is, greater than three hundred million meters per second, or one thousand and eighty million kilometers per hour, or six hundred and seventy million miles per hour. The implications of this speed limit are far reaching but since this speed is so great the phenomena that result remain largely invisible to us.

But what if the speed of light were not so fast? What if it could be slowed to something like thirty kilometers per hour? Then any traveler could become a relativistic traveler—someone capable of obtaining speeds comparable to the speed of light—and we could witness these effects first hand.


An image of an open book is overlaid with moving clocks and a moving car. They illustrate the “textbook” view of length contraction and time dilation in a general way. Both the clocks and the car will reappear several times later on. As relativistic travelers what would we see? Textbooks tell us that lengths along the direction of motion would be contracted and that moving clocks would run slow.

What follows is the image of a lamp as it is switched on. The footage is repeated, altered by a time displacement effect, to give the illusion of a slower speed of light. This image was chosen as the simplest way to convey the impact of light’s travel time. But we must also take into account the amount of time it takes light to reach our eyes: in ordinary experience light’s travel time is inconsequential, but in a universe with a slower speed of light, this time becomes considerable. This time must be taken into account along with length contraction and time dilation, for an accurate description of relativistic travel.

B. Temporal Distortion

This section opens with an introduction to the imaginary world that will occupy the rest of the video. It is a place where the speed of light is 10 m/s, only one thirty-millionth of its conventional value. To establish this, a speed-limit sign is shown reading 36 kilometers per hour. The narrator shows the viewer around the interior of a car: Now you are in the front seat of a car in a computer-generated world where the speed of light is just thirty-six kilometers per hour or twenty-two miles per hour. Look first at the dashboard: the speedometer redlines at 36 kilometers per hour, indicating the new maximum speed limit for the universe. Under the speedometer is a readout of something called beta, which is the ratio of the car’s speed to the speed of light—a number that must always be less than one.

The speedometer described looks conventional, except for the readout of β below it. To reinforce the idea that the speed of light cannot be overcome, this speed is colored red on the dial. The camera pans to the dashboard clock, which has a decimal format because it is the most convenient for reading seconds—the temporal distortions to be encountered in this world are on the order of several seconds. The clocks in this world are also a little different—the big hand completes one revolution every ten seconds and the little hand revolves every one hundred seconds.

As the camera returns to its initial position we see the outside world come into view. The ground, a faint luminescent green, is perfectly flat with periodic grid lines that divide it into squares, 3 meters on a side. A track with checkerboard tiles proceeds into the distance, the tiles alternating every one and a half meters. These divisions were chosen because they were small enough to give the viewer a framework to assess his speed and large enough to avoid the so-called “wagon-wheel” effect. A wagon wheel may appear to roll backwards when shown on film because the frame rate of the picture is too slow to capture the movement properly. For similar reasons, too short a grid spacing would make the viewer appear to move backwards. An array of clocks, each a meter in radius, comes into view shortly after the track. These are shown running in a similar fashion to the dashboard clock. These clocks appear every five meters, alternating from left to right along the track. They are transparent for two reasons: first, to enable the viewer to read clocks occluded by the face of another. Second, to give an ephemeral quality to the clocks so that they are not taken to represent a physical object: the motion of a real clock’s hands would cause them to appear distorted in this world. Although this distortion would often be imperceptible to the viewer, it was deliberately ignored to avoid the few cases where the distortions would make the clocks unreadable. Thus, the time actually displayed is that of the clock’s center. The ephemeral quality of these clocks is reinforced later on in the video when one clock is seen to rotate in order to remain facing the viewer. Outside we see a track running straight ahead flanked by a number of transparent clocks. Even though all of these clocks are showing us different times, with the more distant clocks showing earlier times, they are in fact synchronized. Since it takes longer for light to reach us from the more distant clocks, we see them as they were at earlier times: we are literally looking into the past. Suppose we reset the clocks and start them again. Notice how, the farther away they are, the longer it takes the light to reach us, hence delaying the moment at which we see them move. The apparent disagreement between our dashboard clock and any other clock is simply related to its distance away from us. For example this clock is thirty meters away, so, remembering that light now travels ten meters per second, it displays a time three seconds behind our own. The disagreement is least for the closest clock, the one on our left.

The viewer has now had an opportunity to get used to the fact that synchronized clocks do not appear synchronized because of the finite light speed. However, there is a pattern—the clocks do not appear to read random times and are clearly related to the dashboard clock. Once the viewer has been familiarized with the clocks and the landscape, the car’s journey begins when the dashboard clock displays 25 seconds. The car accelerates linearly with respect to rapidity until it achieves a rapidity of 1 after 5 seconds. During the entire journey the view remains fixed—directly ahead out of the front windshield. This fixed gaze allows the viewer to concentrate on the apparent speed of the exterior clocks but prevents him from keeping track of what time a particular clock may read. We will take a short trip and, as we do, watch the outside clocks, taking particular note of how fast or slow their hands appear to be moving. At a modest 27 kilometers per hour where beta is .75 or three quarters the speed of light, the exterior clocks appear to be running faster than normal.

Having achieved its cruising speed the car continues on for 10 seconds and then decelerates to a stop. Stopping briefly at the end of the track we see that the closest clocks are about seven seconds ahead of our clock: so our dashboard clock is no longer synchronized with the exterior clocks.

After a ten-second pause at the end of the track, the process is repeated in reverse until the car returns to its starting location. As we shift into reverse notice how different the clocks look when they are receding from us. We don’t appear to be going so fast this time, but we are still going the same speed, 27 kilometers per hour; moreover the clocks appear to be running slow.

Back at the start let us check the clocks again. The image of the clock that is 30 meters away is now 11 seconds ahead of our clock. Taking into account the light’s three-second travel-time we discover that this clock is actually 14 seconds ahead of our dashboard clock. So someone standing outside has actually experienced more time than we have.

Let’s repeat the experiment, but this time we’ll concentrate on that one clock and our own.

The clock thirty meters ahead remains in view while all the others fade out and the view changes to focus on this remaining clock. For the subsequent journey this clock appears in the center of the screen and the viewer can now concentrate on the difference between his dashboard clock and the exterior clock. The so-called “delta clock” is introduced in order to facilitate the comparison of these two clocks even when the exterior clock is too far away to see. To help us compare the two, we’ll use a new instrument, a delta clock, which shows the difference between the two clocks’ apparent times—it currently registers negative three seconds because the exterior clock is three seconds behind our clock. As soon as we start to move this difference changes. The exterior clock gains as we approach and (this time we will turn to keep it in sight) it loses as we recede. The stripes across the window are just the rear-window defroster; we won’t need it but it will serve to remind us when we are looking back. Once we are no longer moving relative to the other clock, the difference stabilizes at negative 7 seconds. As once again we begin the return trip the delta clock reading increases when approaching the other clock and decreases when we leave it behind. In the end it reads positive 11 seconds. Recalling that, before we began the journey, it read negative 3 seconds, we are left with a total change of fourteen seconds.

Explanations are now in order, having run through the experiment twice and still ending with a fourteen-second discrepancy. This is a truly relativistic effect and it seems only fitting to explain it from two separate perspectives: that of the car and that of the ground. To ease explanation it was deemed necessary to “turn off” the time delay of light signals so that length contraction and time dilation could be shown directly. To prevent confusion the narrator alerts the viewer to the change. Also, an unspoken convention is established that is maintained throughout the rest of the video: whenever the interior of a car is visible the time-delay is “on”, otherwise it is off. To illustrate the source of this final fourteen-second discrepancy, let us compare the clock inside the car to the outside clock. On the bottom we will see the car’s point of view, on the top, a view from the exterior frame of reference; but at present they look just the same. For this illustration, the time delay has been removed, allowing us to see everything instantly. Under these circumstances we can witness time dilation and length contraction directly. Running the top version of events first, we see the car contract while it is moving; it travels the full length of the track and returns. As long as it is moving its clock runs slow so that at the end of the journey the car’s clock is behind the exterior clocks. Now the events as viewed by the car are quite different. Centering him on the screen we see that for him it is

the track that contracts, so the journey is very short and the time taken according to the car’s clock is equally short: the exterior clocks appear to have been running fast because they are once again ahead of the clock on the car. But now that we are stationary everything is the same, top and bottom, and the disagreement between the car’s clock and the ones beside the track is unmistakable either way you look at it.

A few aspects of these clips are not addressed by the commentary. In the second clip, the run-through from the car’s perspective, it should be noted that the exterior clocks are not synchronized when the car’s reference frame differs from theirs. The view shown is that in a Lorentz frame of speed βc corresponding to the instantaneous speed of the car. Note that the clocks only run fast when the car is accelerating: they run slow while the car is at a constant speed. Finally, if one compares the two clips one can see that the car clock reads the same time in both when it passes each grid line on the track. The next explanation is relatively straightforward since the effect described can be understood in almost classical terms. Once again we are outside the car so the time-delay is turned off. Light waves can thus be depicted as expanding transparent spheres. Although it is not discussed, length contraction and time dilation are also visible in the sequence: the clock is actually emitting light every one and a half seconds and the gridlines are contracted when the camera is in motion relative to them. Now we consider the delay due to the finite speed of light, which is responsible for our perception of the outside clocks running either fast or slow depending on our motion relative to the clocks. For this illustration let’s imagine that we could stand aside and see even light waves instantaneously. If we treat the exterior clock as a beacon emitting light pulses at one-second intervals we see that the moving car intercepts these light pulses at various rates: rapidly when approaching and slowly when receding. Since at each encounter the driver would see the clock advanced by one second, the exterior clocks appear to gain time when he approaches and lose time when he recedes.
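The remark that the trackside clocks are not synchronized in the car's instantaneous rest frame is just the relativity of simultaneity; the small sketch below (with illustrative numbers, not values taken from the video) shows the offset and why the exterior clocks appear to gain only while the car's speed is changing.

```python
# Minimal sketch of the simultaneity offset behind the "time-delay off" clips.
def clock_reading_in_car_frame(t_beside, x_clock, x_car, beta, c=10.0):
    """Ground time displayed by the clock at x_clock at the car-frame instant when
    the trackside clock right beside the car (at x_car) displays t_beside."""
    return t_beside + beta * (x_clock - x_car) / c

# At beta = 0.75 a clock 30 m ahead is 2.25 s ahead of the clock beside the car.
# The offset grows with beta, so the clocks ahead jump forward (appear to run
# fast) only while the car is accelerating.
print(clock_reading_in_car_frame(0.0, 30.0, 0.0, 0.75))
```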

C. Spatial Distortion

Although it has not been mentioned yet in the commentary, the effects of the aberration of light have already been shown. It is now time for the narrator to acknowledge this in a series of snapshots taken from the previous section and to give

some explanation. The accompanying graphics of the car driving first through rain and then through a “storm” of light are a slight departure from strict adherence to the principles of relativity since they are the only scenes where time dilation and length contraction are not taken into account. You may have noticed how the clocks and the landscape appeared to warp as we started to move and more so the faster we went. This is a result of the finite light speed that makes the picture appear distorted—this is sometimes referred to as the aberration of light. It is somewhat analogous to driving through rain: even if rain is falling straight down it will appear, to someone in a moving car, to fall at an angle, striking the windshield from ahead. Light of course is coming at us from all directions – not just from above – but once we are moving most of it appears to come from ahead. The result is to distort the picture we have of objects outside by crowding them together ahead of us.

When our initial forays into this virtual world were made in preparation for this video we came across a curious phenomenon that, to our knowledge, has not been previously noted: during acceleration from rest, a sensation of lurching backwards was unmistakable. Subsequent investigation discovered the cause of this effect to be the competition between the aberration of light and the actual relative motion of the surroundings. The location of an object on an observer’s sky is determined by only two factors: its physical location and the aberration of light. These two factors tend to move an object across the sky in different ways. A discontinuous jump in velocity from rest would, in the first instant of time, cause the entire sky to be shifted towards the direction of motion since the observer would be at the same point in space but have a different velocity. Once moving, however, the landscape’s relative motion would eventually be recorded in the sky. The closer an object in the landscape, the shorter the time for which the aberration of light’s effect dominates—conversely, the farther, the longer, until, in the case of infinitely distant objects (which have no visible motion) the aberration of light is always the sole effect. Since we derive our sense of movement from the path made by close objects in the landscape, we will have the sensation of going backwards as long as the aberration of light dominates the apparent motion of these objects.
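The competition described above is easy to reproduce numerically. The sketch below (with an illustrative acceleration rate and landmark position, not values from the video) tracks the observed angle of a nearby static landmark for a car accelerating from rest: the angle first dips toward the forward direction and only later sweeps backward.

```python
# Minimal sketch of the "backward lurch": aberration (Eq. 1) versus the car's
# actual displacement, for a static landmark beside the track.
import math

C, ALPHA = 10.0, 0.2    # artificial light speed; rapidity gained per second (assumed)

def observed_angle(t_prime, x_obj, y_obj):
    beta = math.tanh(ALPHA * t_prime)
    x_car = (C / ALPHA) * (math.cosh(ALPHA * t_prime) - 1.0)
    theta = math.atan2(y_obj, x_obj - x_car)                   # rest-frame direction to the landmark
    cos_tp = (math.cos(theta) + beta) / (1.0 + beta * math.cos(theta))
    return math.degrees(math.acos(cos_tp))

for tp in (0.0, 0.1, 0.3, 1.0, 2.0):
    print(tp, round(observed_angle(tp, 5.0, 2.0), 2))          # dips forward, then sweeps back
```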

Another phenomenon is the sensation of lurching backwards before starting to move forward at the beginning of a journey – you surely noticed it in our previous experiments. If we replay the first few moments in slow motion you can see that the car appears to move backwards; in fact the START sign comes into view a little before we proceed forward, whereas it was previously hidden. This occurs because when we accelerate we encounter light that was emitted from sources that were hidden from us before we began to move. As you can see in this illustration, light emitted by the top of the START sign is not reaching the driver’s eyes when he is stationary, but as he moves forward some of the light, which previously went over him, is now brushing the top of his head, briefly coming down to eye level. In this world light’s substantial travel time has strange consequences.

With preliminaries out of the way the full exploration of spatial distortion begins. The city in which the rest of the video takes place is introduced. A more “real” environment was chosen to make it easier for the viewer to call upon his own experiences of the ordinary world as comparison. In rough accordance with a real city, it contains buildings of various heights and constructions grouped into blocks eighty meters long. Now we take the car to an imaginary city, with towering buildings and uniformly straight and level blocks. All cross streets that we will encounter are perpendicular to our path, a fact that will become important to remember later on. We’ll increase speed slowly to show how quickly the effects of this aberration of light manifest themselves.

The acceleration is in the same manner as during the earlier runs, but this time it takes twenty seconds to achieve a rapidity of 1. The vertical nature of the landscape helps to make the aberration of light more noticeable than it was in the earlier landscape. Notice how the cross streets no longer appear straight but bend away from us, and the buildings appear to bend. And yet, the curb parallel to our path is still as straight as can be.

This is due to the parallel orientation of the curb relative to the car’s direction of motion—in fact any line in the same direction would remain “unbent”. The leading sides of the buildings no longer appear at right angles to the street front but are angled away from us forming sharp corners. Look at this open space, a park to our right: it doesn’t appear to be a rectangular plot at all. We take a glance behind to see that everything looks a little bigger and appears to move just a little bit slower than the forward view.

The apparent peacefulness of views through the rear window compared to those offered from the front gives a false impression of slower motion. However, if one were to pick a visual landmark on the car and time the transit of one city block past that point, he would arrive at the same “visual speed” whichever way he looked. We’ve now entered a tunnel – we will encounter several along our journey – and note we are now traveling at 27 kilometers per hour, as fast as we dared to go in the last experiment.

The tunnels that the car periodically traverses were included to allow the splicing together of scenes rendered separately without dispelling the illusion of one continuous shot. This allowed greater flexibility during the final editing and shortened rendering times by reducing the complexity of the environment. Shortly after traversing the first tunnel the car starts to accelerate again. Arriving at a rapidity of 1.5, the camera pans to give a view out the rear window. One interesting feature is that the tunnel up ahead appears to grow more distant during the acceleration. The narration contrasts the view out the back with the view out the front as the camera pans back to the front view. Let’s accelerate and go farther into the red. Looking around, we notice, as we come out of the tunnel, that buildings behind us appear larger, closer, and thinner, very much the appearance when looking through a telephoto lens. On the other hand, in front everything appears to be gathered up just as if we’re looking through a distorting wide-angle lens.

So far the viewer has only experienced acceleration while looking forward. Now, the camera turns to look out the rear window in preparation of another acceleration, this time to a rapidity of 2. The experience is quite different from the view out front. As we’re about to accelerate again, this time looking out the rear window, pay attention to the tunnel. Once we begin accelerating the tunnel mouth appears to remain almost stationary as if we were using a zoom lens to keep it in sight.

At this speed the effect of the aberration of light is quite evident. The narrator points out some of the more obvious features of the distortion. One feature that is ignored is the appearance of very tall buildings: they remain in view for quite some time after they have been passed, appearing to hang over in an arch towards the direction of motion.

The question of where the car actually is during its journey is complicated by the fact that the image of objects that are physically behind the car remains in the front half of the observer’s sky. An imaginary “curtain” is placed perpendicular to the car’s path in order to address this question. One qualification to the accuracy of this “experiment” should be mentioned here: since the interior of the car is actually rendered separately from the curtain, they do not interact as they should in this artificial reality. Looking through the front windshield again notice that you can now easily look up the perpendicular side streets without turning your head. You are actually passing these crossroads, although they still appear to be in front of you. As we move through space at a speed comparable to the speed of light, objects and landscapes we have already passed may still appear to be in front of us because that is where they were when the light that we see from them was emitted.

As an experiment, suppose we place a net curtain down the center of one of the cross streets: we can burst through this without damage. Now we get back in the car; the purple object up ahead is the curtain. Once we are moving towards it, notice that it appears to be confined to a semi-circle, even though it actually stretches to infinity in all directions. Its circular outline never changes as long as our speed remains constant, but the network of lines on it grows larger as we approach. We freeze just as we are about to pass through it. We have reached the center of the cross street as determined by the curtain about to touch the windshield, but it is so distorted that a bulge appears to stretch towards us at eye level while at ground level the yellow center line on the cross street is still clearly visible. Let’s play that again, this time in slow motion.

One interesting aspect of the encounter with the curtain that the audio commentary does not address is that the curtain maintains the same circular outline even after it is behind the car. Now, however, the curtain is outside this demarcation line. Due to its infinite size and the aberration of light, the curtain would remain in view forever in front of the car even though it is behind the car. Having given some account of where the car is in the city, the natural progression is to address its speed. It should be evident to the viewer that the car appears to be going faster than 10 m/s. The apparent contradiction between this and the notion of an ultimate speed limit is addressed by the commentary. The aberration of light relates where you perceive an object or event to be, to where it actually is. Since length contraction distorts where an object actually is,

we relativistic travelers can easily make mistakes about the state of our motion. Right now everything seems to be moving past very quickly, much faster than the 36 kilometers per hour speed limit. In fact, if we measured our speed using a street map and our dashboard clock, we would conclude that we were going almost four times the speed of light. Then why doesn’t the police car we just passed (you did see him, didn’t you?) pull us over? That is because our actual speed is still only 35 ½ kilometers per hour. You would think that either our clock or our map must be in error. In fact for us in the car, it is reading the distance off the map that is wrong. Length contraction has caused the blocks to shorten to nearly one quarter their normal length and that is why we are getting uptown much faster than we might expect.

A related matter not addressed is how one can determine one’s true speed. If the speed of light is known then the apparent speed, $v_{app}$, can be used to determine this since the true speed, v, is related to the apparent speed by the Lorentz factor, γ:

$$v = v_{app}\,\gamma^{-1} \qquad (12)$$

Since γ itself depends on v, knowing the speed of light one can invert this to obtain $v = v_{app}/\sqrt{1 + v_{app}^{2}/c^{2}}$. How the true speed could be determined without knowing the speed of light is a question that deserves further investigation, but is beyond the scope of this paper. The next section tries to consider the question of the car’s speed from a completely different perspective, that of a stationary observer. This observer was given the occupation of police officer for obvious reasons. The explanation for what the policeman sees is really a reworking of the explanation for the aberration of light in disguise. To get back to the policeman – let’s take a look at the scene from his point of view. To him we appear to approach very rapidly, but slow down drastically just as we pass him. Notice how the car appears to be rotated as it passes as though it were doing a four-wheel slide. Whereas we might appear to warrant a dangerous driving charge, in fact our speed has not changed at all. The policeman, who is accustomed to these aberrations, relies on his Doppler-radar gun, a device which still works admirably even in this crazy world.

To understand what is happening, suppose there is a series of markers posted every five meters down the street. These markers release a light signal whenever our car passes them. These signals of our progress then travel with the speed of light in all directions to whoever may be watching. To assess what the policeman saw, notice that as we are approaching the light signals marking our passage are all bunched up and consequently he sees us pass the blocks in rapid succession, giving the appearance of great speed. After we pass the reverse is true – the light spheres

carrying news of our progress are delayed making us appear to be traveling much slower than we actually are.

The graphics demonstrating the cause of Penrose rotation were some of the most complicated in the video. For each frame an image of the car generated by the altered ray-tracer was mapped onto a sphere that was programmed to contract at the artificial speed of light towards the eye of the policeman. The car, this time properly Lorentz-contracted, was then introduced into the three-dimensional model of the scene, moving at βc on the appropriate intercept course with the sphere. The image of the car on the sphere and the modeled car then intersect, and we note with satisfaction that they actually coincided in the correct fashion, verifying that the algorithms implementing the aberration of light were correct. The final step was to mask out, frame by frame, the parts of the spherical image that had not yet had contact with the car, to create the illusion that the image was actually peeling off the car. And the rotation? This is called Penrose rotation and to understand it you have to consider where each part of the car was at the time it sent its contribution to the policeman’s current picture. Different parts of the rotated image “peel off” the moving car and travel towards the policeman, eventually all arriving at the same time. If the policeman were able to remove the travel time of light and reconstruct the picture as it would appear with instantaneous transmission of information, he would see our speed as constant, our car contracted (as the textbooks say it should be) and, if he could see our clock, it would appear to be running very slow, recording one second for every four on his dashboard clock.

D. Spectral Distortion

The discussion of the relativistic Doppler effect begins with a review of frequency and its relation to what we perceive as color. The spectrum displayed was created using the same color model discussed earlier and employed in the rest of the video. This model of relativistic travel has so far ignored a very important property of light, its frequency, which we perceive as color: blue light has a higher frequency, is higher pitched if you like, than green, which in turn is higher than red. When the source of light is moving, its frequency appears to change. This phenomenon is more familiar with sound waves: when we hear a car pass, we notice that as it approaches, the pitch of the engine sounds higher than when it recedes; this is the Doppler effect. Light behaves in a similar way: approaching light sources appear bluer, while receding light sources appear redder.

The following graphics are deliberately modeled after the graphics used to explain the perception of the clocks; this is done to reinforce the connection between the two phenomena. Suppose a source is sending out light at a particular frequency; the expanding spheres now represent the peaks of the light waves as they go out like ripples on a pond. Once we are moving, we have the now familiar phenomenon that, while approaching, we encounter the spheres more frequently: the frequency appears higher, and so the source appears bluer. Then, when receding, the frequency appears lower and the source appears redder.

As mentioned earlier, the color chosen for the city in the following section of the video is meant to represent 555 nm monochromatic light. To better illustrate this effect we're going to turn our city a monochromatic green, a wavelength in the middle of our range of color vision. Back in the car, we expect to see color changes in our surroundings as they move relative to us. We will accelerate very slowly because the effect becomes obvious very quickly. At a rather slow speed of 7 kilometers per hour the full spectrum of our vision surrounds us, ranging from blue in the front to red in the back. Accelerating a bit more, to 9 kilometers per hour, or beta equal to one quarter, we lose the ability to see the outside world behind us because the light has been shifted out of the visible range and into the infrared. At 12 kilometers per hour, a beta value of .33, we can no longer see where we are going since the light in front of us has become ultraviolet.
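These statements can be checked against the standard relativistic Doppler formula, λ_obs = λ_src γ(1 − β cosθ), where θ is the viewing angle measured from the forward direction in the driver's frame. The short Python sketch below is illustrative; the 400-700 nm limits of the visible band are our own rough assumption, and c = 10 m/s is expressed as 36 km/h to match the speeds quoted in the script:

    import math

    C_KMH = 36.0          # artificial speed of light, 10 m/s, in km/h
    LAMBDA_SRC = 555.0    # the city's monochromatic green, in nm

    def observed_wavelength(beta, theta_deg):
        # Relativistic Doppler shift seen by the moving driver; theta is the
        # viewing angle from the forward direction in the driver's frame.
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        return LAMBDA_SRC * gamma * (1.0 - beta * math.cos(math.radians(theta_deg)))

    for speed_kmh in (7.0, 9.0, 12.0):
        beta = speed_kmh / C_KMH
        ahead = observed_wavelength(beta, 0.0)
        behind = observed_wavelength(beta, 180.0)
        print(f"{speed_kmh:4.0f} km/h (beta = {beta:.2f}): "
              f"ahead {ahead:5.0f} nm, behind {behind:5.0f} nm")
    # At 9 km/h (beta 0.25) the view behind has passed roughly 700 nm, into
    # the infrared; at 12 km/h (beta one third) the view ahead has dropped
    # below roughly 400 nm, into the ultraviolet, matching the script.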

Due to the variation in video monitors and viewing environments, it may be possible to see objects that the commentary says are shifted out of the visible range and are not intended to be seen. This is a problem that cannot currently be adequately addressed, since not only do the physical properties of monitors vary, but the viewer often has control over the color balance and tint of the display as well. However, if the set-up conforms to the sRGB standard and is properly calibrated, the representation should be as intended. Looking sideways, where objects are neither coming toward us nor going away, you might expect to see the original shade of green we chose for the city. But that is not quite right: what you see is closer to yellow! Now this is rather like a replay of our earlier experiment with clocks; a source of light behaves like a clock sending out its time signal at very high frequency. The general caveat that moving clocks should appear to run slow applies to light sources also. All moving sources slow down; that is, they head towards the red, or, in this case, green gets about as far as yellow. This is sometimes called the second-order Doppler shift. On top of this, the ordinary, or first-order, Doppler shift does its usual job, shifting light that comes from ahead towards the blue, while light from behind is shifted further into the red. Recall that we ran out of the red end of the spectrum first. Meantime we are headed down the road unable to see where we are going. This is getting pretty dangerous, so let's stop. Whew! That was a close call.
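The sideways shift quoted here follows from the same Doppler formula at θ = 90°, where only the time-dilation factor survives and λ_obs = γ λ_src. A brief check (again only a sketch; the beta values are the ones used in this scene plus one higher value for comparison):

    import math

    LAMBDA_SRC = 555.0   # nm, the city's monochromatic green

    # Looking exactly sideways in the driver's frame only the second-order,
    # time-dilation part of the Doppler shift remains: lambda_obs = gamma * lambda_src.
    for beta in (0.25, 1.0 / 3.0, 0.5):
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        print(f"beta = {beta:.2f}: sideways wavelength {gamma * LAMBDA_SRC:5.0f} nm")
    # At the betas quoted in this scene (one quarter and one third) the green
    # is dragged to roughly 573 and 589 nm, about as far as yellow, and it is
    # always a shift towards the red, never the blue.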

E. Power Distortion

The final type of "mapping" is addressed in rather less detail than the others; in fact, no explanation is offered for its effect on the visual appearance of the landscape. Explanations were avoided in order to keep the video as accessible as possible: an understanding of the relativistic transformation of electromagnetic fields was not assumed of the viewer. Our model has yet to account for one more thing, the intensity of the images. Approaching light sources not only turn bluer, they also grow brighter, while receding light sources not only turn redder, they also become dim: the intensity, or power, of incoming light is distorted as we travel. To isolate this effect from the Doppler shift, let's turn off the exterior color, because our eyes register intensity differently for different colors. As we start to move again, the scene grows brighter, especially right in front of us. When we reach 16 kilometers per hour, look around carefully: it gets darker as we look back behind us. Now look out the front windshield again; we will accelerate to 22 kilometers per hour, a beta of 0.64. The scene before us is starting to get completely washed out. At this speed the television picture or projector cannot reproduce the intensity we should see in front of us, whereas looking around again, the scene behind is dark to the point of not being able to see much of anything at all. Since the screen cannot show the difference between bright and blindingly bright, we get a false impression of what lies before us. To get around this problem the obvious thing to do is wait for nightfall, something that happens very rapidly at these latitudes, as you can see. Very soon the light level has decreased to five percent of its daytime value, so we can still see a little. We set off again, accelerating very rapidly to a beta of .5, .6, .7, .8, .9… Clearly night has become day, to the extent that even Las Vegas might be jealous. We now see a bright spotlight, and as we clear this tunnel and accelerate again we see the spotlight condense in front of us.
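A rough feel for the effect can be had from the Doppler factor D = 1/[γ(1 − β cosθ)]: the rendered power grows with some positive power of D, whose precise form is the transformation referred to below as Eq. 3 and not reproduced here, so only the qualitative trend is shown. The sketch simply tabulates D around the sky at the beta quoted at this point in the script:

    import math

    def doppler_factor(beta, theta_deg):
        # D = 1 / (gamma * (1 - beta*cos(theta))); theta is measured from the
        # forward direction in the driver's frame.
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

    BETA = 0.64   # the beta quoted in the script at this point
    for theta in (0, 45, 90, 135, 180):
        print(f"theta = {theta:3d} deg: D = {doppler_factor(BETA, theta):.2f}")
    # D exceeds 1 in a wide cone ahead (brighter) and falls below 1 behind
    # (dimmer), so the view ahead washes out while the view behind fades
    # toward black.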

This acceleration brings us to a rapidity of three. At this speed there is only a small angular range within which the scene is neither washed out nor too dark to see. If you look at a fixed angle in the forward direction as β increases, the transformation of Eq. 3 has the effect of first increasing the brightness and then decreasing it (as the value β = cosθ is passed); the bright area shrinks to a spot around θ = 0, sometimes called the searchlight effect. We are now traveling at a beta of .995, faster than we've ever traveled before. The center of the view is nearly 10,000 times brighter than our television can show, and if we got any closer to the speed of light it would just grow brighter while the spotlight would diminish in size, eventually to a point.
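Two of the numerical claims here are easy to verify with the same Doppler factor. First, at a fixed viewing angle θ the factor D(β) = 1/[γ(1 − β cosθ)] peaks exactly where β = cosθ, independent of the power to which D is raised. Second, at a rapidity of three the forward factor is e³ ≈ 20, and its cube is close to the "nearly 10,000 times brighter" figure quoted in the script; the cubic scaling is our assumption about how the rendered power behaves, since Eq. 3 is not reproduced in this section. A sketch:

    import math

    def doppler_factor(beta, theta_deg):
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

    # 1) At a fixed forward angle the brightness boost first rises, then falls,
    #    with its maximum where beta = cos(theta).
    theta = 30.0
    betas = [b / 1000.0 for b in range(1, 1000)]
    best = max(betas, key=lambda b: doppler_factor(b, theta))
    print(f"theta = {theta:.0f} deg: D peaks at beta = {best:.3f}; "
          f"cos(theta) = {math.cos(math.radians(theta)):.3f}")

    # 2) At a rapidity of 3, beta = tanh(3) ~ 0.995 and the forward factor is
    #    e**3 ~ 20.1; its cube, roughly 8100, is the "nearly 10,000 times
    #    brighter" of the script (cubic scaling assumed here).
    beta = math.tanh(3.0)
    d0 = doppler_factor(beta, 0.0)
    print(f"beta = {beta:.3f}: D(0) = {d0:.1f}, D(0)**3 = {d0 ** 3:.0f}")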

F. Combined Effect

Is this what we would really see? No, because we have turned off the Doppler effect, and the bright radiation in front of us long ago shifted out of the visible range. If we turn the color back on and return our vision to its normal range, all we can see is a faint rainbow at the edges of the windshield. But remember, it is nighttime. Wait! Dawn approaches and we will soon greet a new day. The rainbow is more visible now, but we are still left with very little to look at…until we slow down again. As our speed drops the rainbow broadens. Slowing down even more, to barely a crawl, we find the spectrum surrounding us again, much as it did without the power distortion, except that in the back it is still too dim to see anything, and in front the blue and violet end of the spectrum is much brighter.
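The rainbow at the edge of the windshield can be located with the same Doppler formula. Assuming the city is still the monochromatic 555 nm green of the spectral-distortion scene and taking roughly 400-700 nm as the visible band (both our assumptions for this sketch), the green remains visible only within a ring of viewing angles, and the ring widens as β drops:

    import math

    LAMBDA_SRC = 555.0        # nm, the monochromatic green of the city
    VISIBLE = (400.0, 700.0)  # rough limits of the visible band, in nm

    def visible_ring(beta):
        # Angular band, in degrees from straight ahead in the driver's frame,
        # within which lambda_obs = LAMBDA_SRC * gamma * (1 - beta*cos(theta))
        # still falls inside the visible range.
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        angles = []
        for lam in VISIBLE:
            cos_theta = (1.0 - lam / (LAMBDA_SRC * gamma)) / beta
            cos_theta = max(-1.0, min(1.0, cos_theta))
            angles.append(math.degrees(math.acos(cos_theta)))
        return tuple(sorted(angles))

    for beta in (0.9, 0.7, 0.5, 0.3):
        lo, hi = visible_ring(beta)
        print(f"beta = {beta:.1f}: visible between {lo:5.1f} and {hi:5.1f} degrees")
    # At beta = 0.9 only a narrow ring around 40-60 degrees from straight
    # ahead is visible; as the speed drops the ring broadens until, near
    # beta = 0.3, everything except the view directly behind is visible again.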

G. Conclusion

To give the video a "big finish" we decided to return to the earlier view, without the Doppler effect or power distortion. This does not have much pedagogical use except to show that the apparent speed is limitless. After a rapidity of six is reached, the car comes to a rather abrupt full stop, ending the video. To end our holiday in this virtual world, it would be a shame not to enjoy the thrill of traveling at speeds even closer to the speed of light. We won't worry about the Doppler effect or power distortion, as these would prevent us from seeing much of anything. So here we go. Beta is .99, .999, .9999, .99999!! Help!! I want...to…stop…thank you! Thank you! We hope you will have no problem adjusting to the world where the speed of light has its conventional value, and that you enjoyed this short exposure to the Reality of Relativity.
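That the apparent speed is limitless follows directly from Eq. 12: in terms of the rapidity φ, the true speed is c tanh φ while the apparent speed γv is c sinh φ, which grows without bound. A final illustrative sketch for the c = 10 m/s world:

    import math

    C = 10.0  # artificial speed of light, m/s

    # Apparent speed v_app = gamma*v = c*sinh(rapidity) grows without bound,
    # while the true speed v = c*tanh(rapidity) stays below c.
    for rapidity in (1, 3, 6):
        v_true = C * math.tanh(rapidity)
        v_app = C * math.sinh(rapidity)
        print(f"rapidity {rapidity}: true speed {v_true:6.3f} m/s, "
              f"apparent speed {v_app:8.1f} m/s")

At the rapidity of six reached at the end of the video, the true speed is within about 0.01% of c while the apparent speed is roughly 200 times c, a little over 2000 m/s.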