Brigham Young University BYU ScholarsArchive

Theses and Dissertations

2020-08-26

Improvements in Optical Trap Displays

R. Wesley Rogers Brigham Young University


BYU ScholarsArchive Citation Rogers, R. Wesley, "Improvements in Optical Trap Displays" (2020). Theses and Dissertations. 8686. https://scholarsarchive.byu.edu/etd/8686

Improvements in Optical Trap Displays

R. Wesley Rogers

A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of

Master of Science

Daniel E. Smalley, Chair
Stephen Shultz
Ryan Camacho

Department of Electrical and Computer Engineering

Brigham Young University

Copyright © 2020 R. Wesley Rogers

All Rights Reserved

ABSTRACT

Improvements in Optical Trap Displays

R. Wesley Rogers
Department of Electrical and Computer Engineering, BYU
Master of Science

This thesis improves on the design of the Optical Trap Display (OTD) presented in 2018 [1]. Contributions include real-time animation; single-beam, multi-particle suspension; anisotropic scattering of point primitives; and virtual image approximation. First, real-time animation was demonstrated on the OTD for the first time, in full color, at up to 30 Hz refresh. Second, multi-particle systems allow for scaling of the display by a multiplicative factor, potentially up to orders of magnitude greater than the first OTD. Third, anisotropic scattering of point primitives was shown for individual suspended particles and for multiple simultaneously suspended particles. Fourth, virtual images have previously been considered impossible in volumetric displays, but by using perspective projections we have shown, in simulation and experiment, for the first time that an effect similar to a virtual image can be created.

Keywords: optical trap, photophoretic trap, 3D display

ACKNOWLEDGMENTS

I gratefully acknowledge the patient tutelage of Dr. Daniel Smalley throughout my graduate program. I appreciate the support of my family, as well as the consistent, loving support of my wife Madisyn, without whose encouragement and support I would have never finished a graduate degree.

I would also like to acknowledge the generous support of the National Science Foundation (NSF), which helped fund a portion of this work.

TABLE OF CONTENTS

LIST OF FIGURES ...... vii

CHAPTER 1: INTRODUCTION...... 1

1.1 Advancing Volumetric Displays ...... 1
1.2 Background ...... 2
1.2.1 Volumetric Displays Thus Far ...... 2

1.2.2 Previous Work on the Optical Trap Display (OTD) ...... 4

1.2.3 Visual Cues ...... 4

1.2.4 Photophoresis ...... 12

1.3 Overview of the Text ...... 12

CHAPTER 2: IMPROVING PHOTOPHORETIC TRAP VOLUMETRIC DISPLAYS ... 13

2.1 Abstract ...... 13
2.2 Introduction ...... 14
2.3 Trapping ...... 15
2.4 Scanning ...... 17
2.5 Scaling ...... 17
2.6 Robustness ...... 20
2.7 Safety ...... 21
2.8 Occlusion ...... 22
2.9 Applications ...... 27

2.9.1 Surveillance ...... 28

2.9.2 Medicine ...... 28

2.9.3 Corporeal AI Agents ...... 28

2.10 Conclusion ...... 30
2.11 References ...... 30

CHAPTER 3: SIMULATING VIRTUAL IMAGES IN OPTICAL TRAP DISPLAYS ...... 34

3.1 Abstract ...... 34
3.2 Introduction ...... 34
3.3 Theory ...... 37
3.3.1 Optical Trap Displays ...... 37

3.3.2 Perspective Projection ...... 37

3.4 Experiment ...... 38
3.5 Results ...... 39
3.6 Analysis ...... 41
3.7 Conclusion ...... 43
3.8 References ...... 44

CHAPTER 4: CONCLUSION AND FUTURE WORK ...... 45

4.1 Conclusion: ...... 45
4.2 Future Work: ...... 46

Bibliography ...... 49

References Included in Chapters 1 and 4: ...... 49
References Included in Chapter 2: ...... 50
References Included in Chapter 3: ...... 53

Appendix A: Generating Perspective Projection in MATLAB...... 55

Appendix B: Blender Simulation of Virtual Image ...... 85

LIST OF FIGURES

Figure 1- 1. Reproduction of the figure in [3]. The three families of 3D displays are shown with a basic breakdown of strengths and weaknesses...... 3

Figure 1- 2. Reproduction of the figure in [16]. Table showing primary visual cues ranked by influence in three different distance zones (personal, action, and vista) relative to the viewer...... 5

Figure 1- 3. The blue block on the left is shown occluding the green block behind it. The blue and green blocks on the right show the effect of transparency on occlusion...... 7

Figure 1- 4 Relative density is shown here with a series of identical objects placed in a grid, all equally spaced but appearing more dense in the view as they recede into the distance...... 8

Figure 1-5 The green and red cubes pictured are the same size but placed at different distances from the camera. This shows relative size: the red cube looks smaller because it is farther away. This also shows height in the visual field: the red block appears at a different height in the view even though it is at the same height from the plane as the green cube...... 8

Figure 1- 6. Aerial perspective can be seen in this image. The mountains in the distance appear bluer than they are in reality and are blurred compared to objects closer to the viewer...... 9

Figure 1- 7. Vergence is part of the ocular motor visual cues and is shown here as a pair of eyes looking at objects at various distances from the viewer. The angle of the eye positions is shown with the white lines running from the eyes to the objects...... 10

Figure 1- 8. Motion perspective shown using sequential pictures from left to right as the viewer moves past objects. The view of the objects changes as the relative position between the viewer and the objects changes...... 10

Figure 1- 9. Accommodation is the change of the focus of the eye. As focus changes from the green cube in the front to the yellow cube in the back, the objects not at the focus blur ...... 11

Figure 1- 10. Binocular disparity is the difference between the images seen by each of your eyes; here we have an image on the left and one on the right taken from spatially separated positions...... 11

Figure 2-11 (a) Photo of single-color, single-particle, vector, video rate image, 1 cm tall, circa 2016. (b) Photo of three-color, single-particle, line-raster image (not video rate), 1 cm tall, circa 2018. (c) Conceptual image of three-color, multiple-particle, volume-raster image at video rate, 10 cm tall...... 15

Figure 2-12.Multi-particle scaling. (a) Single-particle display with complex pathing and simple illumination. (b) Multiple-particle display with a simple path and complex illumination. (c) Lab result showing a single-particle system (image courtesy of Joel Rasmussen). (d) Lab result showing multiple particles in a linear array from a single laser source. (e) Concept showing a planar array of suspended particles rastering a volume image, video rate refresh, large scanning volume...... 19

Figure 2-13. Occlusion. (a) Anisotropy makes it possible to eclipse or occlude objects. (b) Setup for observing scatter from multiple angles simultaneously. (c) Isotropic scatter. (d) Anisotropic scatter. (e) Particle exhibiting isotropic scatter; this particle has relatively uniform scatter over 4π steradians. (f) Particle exhibiting anisotropic scatter. (g) Two particles, one above the other, demonstrating alternating brightness moving from front to back [27]...... 25

Figure 2-14.Interactive applications. (a) Composite of photos from first OTD animation (see Visualization 2). (b) Satellite surveillance concept. (c) Guided catheterization concept. (d) Corporeal AI agent “holonurse” concept...... 27

Figure 3-15 OTD Display and Simulated Virtual Images Concept a. Optical Trap Display (OTD) b. 3D Vector, long exposure, image drawn by OTD c. Flat, rastered, long-exposure image drawn by OTD (content from Big Buck Bunny) d. Simulated virtual image concept with flat moving/rotating plane at the back of a draw volume filled with real images/objects such as 3D OTD images or 3D printed objects...... 36

Figure 3-16 Experiment Setup a. An OTD display projects a flat moon image at the back of a draw volume that contains a 3D printed house. The image is updated at persistence of vision frame rates (12 frames per second) using the perspective projection based on expected camera location. b. A close-up of the house position, moon position, and perceived moon position in 3D space...... 39

Figure 3- 17 Experiment results a-c. Parallax for particle at z=0 (in front of the house) d-f. Simulation result, parallax for particle at z=0, with perspective projection. g-i. Experiment result, parallax for particle at z=0, with perspective projection The parallax is consistent with a particle at z=8 mm (behind the house). For full video see supplemental Document and Visualization 2...... 40

Figure B- 18. Left. Zoomed in image from God’s eye view of Chapter 3 visualization 4. Looking at the moon in the center of this image we can see that the moon is not occluded by the window frame properly. Right. Same frame from the camera used to generate the perspective projection...... 86

Figure B- 19. Showing the material node structure for the plane displaying the “virtual” images...... 87


CHAPTER 1: INTRODUCTION

1.1 Advancing Volumetric Displays

Although significant advancements and work have been accomplished in the area of 3D volumetric displays, the perfect volumetric display has not been demonstrated to date. The goal of future volumetric displays should be fulfilling all visual cues in a large variety of circumstances (indoors, outdoors, multi-viewer, etc.). Many impressive display technologies can fulfill a portion of these requirements, but the future of the 3D volumetric display is to control all visual cues in every circumstance. Blundell [1] points out a number of shortcomings in volumetric displays to date, such as: "major limitations on image space visibility (e.g. a single 'window' onto image space), limited and non-scalable image space dimensions, variations in voxel attributes, low fill factor, low brightness of voxels, and high density of materials used for image space formation (static volume)." [1] These limitations are problematic to achieving the gold standard of volumetric displays, as they limit a display's ability to fulfill one or more of the visual cues that the human brain interprets to understand the world around us.

This thesis improves on the design of the Optical Trap Display (OTD) presented by Dr. Daniel Smalley [2], bringing it closer to the perfect volumetric display. Improvements include: real-time animation, multiple suspended particles using a single trapping beam, anisotropic scattering of point primitives, and approximating virtual images. First, real-time animation allows for the display of video content on the OTD for the first time. Second, multi-particle systems allow for scaling of the display by a multiplicative factor, potentially up to orders of magnitude greater than the first OTD. This allows significantly larger displays to be created, which will improve key performance metrics such as accuracy in recreating visual cues. Third, anisotropic scattering of point primitives shows that, based on particle morphology and dynamics, the scattering can change in meaningful ways. Fourth, virtual images have previously been considered impossible in volumetric displays, but by using perspective projections we have shown for the first time that an effect similar to a virtual image can be created. Introducing virtual images allows for an increase in display size without an equivalent increase in display hardware size, thus increasing potential OTD applications.

1.2 Background

The following background sections serve to provide the reader with the minimum information necessary to understand the following chapters. Discussion includes: previous work in volumetric displays, a brief introduction to visual cues, and a brief introduction to photophoresis. Each of these topics is extensive and warrants a full investigation for the interested reader. An in-depth study can be found for each topic in the associated references.

1.2.1 Volumetric Displays Thus Far

The volumetric display, or point display family, is one type of 3D display, defined by the property that "the display's scatterers or emitters are co-located with the actual image points." This family of displays comes with advantages and disadvantages compared to the ray and wave display families (see Figure 1-1). A more in-depth discussion can be found in Smalley [3].

Figure 1- 1. Reproduction of the figure in [3]. The three families of 3D displays are shown with a basic breakdown of strengths and weaknesses.

Within the family of volumetric displays there are different approaches to creating the volumetric image points. Each of these approaches deserves an in-depth discussion that can be found in the associated references. Approaches include: static volume (semi-transparent medium [4, 5], induced micro disturbance [6, 7]), swept volume (rotating [8] or translating [9]), and free space (induced plasma [10], Holovect [11], and of course the OTD [2], holodust [12], and fog displays [13, 14]). Each approach offers different advantages, but none has achieved the ultimate gold standard of performance in volumetric displays, which is total control over all visual cues in all circumstances.

1.2.2 Previous Work on the Optical Trap Display (OTD)

The previous work on OTD technology not included in this thesis is primarily the work from my group under the direction of Dr. Daniel Smalley published in 2018 [2]. This work laid the foundation for the photophoretic trap volumetric display and introduced the concept of an optical trap display (OTD). Capabilities of the display at the time of that publication included long-exposure rastered images, long-exposure complex vector images, and simple persistence of vision (POV) images, with both long-exposure and POV images shown in 2D and 3D. The OTD at that time was capable of the pictorial cues (occlusion only from a predetermined fixed viewing point, because the images generated by the OTD are not self-occluding, and no live image updating (animation) was demonstrated at that time), accommodation and vergence for real image points, binocular stereopsis for real image points, and motion parallax for real image points. We note here the limitation to real image points only, as this will be the focus of the developments discussed in chapter 3.

As discussed in the previous section, the “finish line” for development in volumetric displays is total control over all visual cues in all circumstances. The next section will outline a basic understanding of the major visual cues to help explain this goal.

1.2.3 Visual Cues

The ability to understand the future of volumetric displays relies on an understanding of the visual cues that volumetric displays seek to fulfill. Below I will briefly describe the major visual cues with some examples that may be familiar to most readers. The visual cues are the true measure of how effective a display is at creating the perception of 3D. Controlling all of the visual cues completely would produce a visual result that the human viewer would not be able to differentiate from reality. Each visual cue can have different challenges associated with a particular display technology, and we will address some of these challenges for OTD displays in chapter 2. Each of these visual cues affects the human visual perception of the world, but depending on distance from the viewer, some visual cues have a greater impact on visual perception than others (see Figure 1-2).

Figure 1- 2. Reproduction of the figure in [16]. Table showing primary visual cues ranked by influence in three different distance zones (personal, action, and vista) relative to the viewer.

We see in Figure 1-2 that the visual cues can be complicated by the consideration of distance from the viewer to the visual information. Cutting and Vishton break down the general trends of influence into three distinct distance zones from the viewer. Personal space is generally considered to be up to 2 meters from the viewer. Action space is considered to start at the edge of the personal space and extend to 30 meters. Vista space is considered to start at the edge of action space and extend to infinity. Understanding the level of influence in different spatial zones in relation to virtual images will be discussed further in chapter 3.

In the following chapters we will often refer to the pictorial visual cues as a set for simplicity. The pictorial cues include occlusion, relative size, relative density, height in the visual field, and aerial perspective. These cues derive their collective name from the ability to control each cue using only two-dimensional display technologies such as a painting on a canvas or a conventional 2D television. The following paragraphs briefly explain each of the visual cues.

Occlusion or interposition can be described as the covering of one point by another. This cue is the most influential in all three of the spatial zones. Gneiting says, "As one object blocks the view of another, the mind automatically recognizes that the now invisible object is behind the visible one. To prove this, look out your window. The window is closer than the scene that can be seen through it. It follows that the frame of the window blocks, or occludes, your view of the outside world. If this cue is removed, the result is a scene where every object is always visible, creating a world that appears to be filled with semi-transparent, ghost objects." [15] Given that this cue is the most influential in all three spatial zones, it is of particular importance to control, because a conflict between occlusion and a different visual cue will create a noticeable conflict of cues leading to possible viewer discomfort or fatigue. Section 2.8 further discusses occlusion in OTD displays.

Figure 1- 3. The blue block on the left is shown occluding the green block behind it. The blue and green blocks on the right show the effect of transparency on occlusion.

Relative size refers to the apparent size of objects depending on their distance from the viewer. If you were to take two identical items, teapots for example, and place them at different positions in the visual field of an observer, the more distant object would appear smaller than the identical item at a closer distance to the viewer.

Height in the visual field or height in the picture plane refers to the angle of elevation off the optical axis of the observer. This is easiest to picture when looking out over a long distance in relation to the horizon. Objects farther away on the ground plane will be closer to the horizon line than objects on the same plane closer to the viewer.
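For a rough quantitative sense of these two cues (a standard small-angle relation, not a result from this thesis), an object of physical size s viewed from distance d subtends a visual angle of approximately

θ ≈ 2 arctan(s / 2d) ≈ s / d for d ≫ s,

so doubling the viewing distance roughly halves the apparent size, and objects resting on the ground plane move closer to the horizon line as d grows.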

Figure 1-5 The green and red cubes pictured are the same size but placed at different distances from the camera. This shows relative size: the red cube looks smaller because it is farther away. This also shows height in the visual field: the red block appears at a different height in the view even though it is at the same height from the plane as the green cube.

Relative density refers to the projected retinal density of a cluster of objects or textures, whose placement is stochastically regular, as they recede into the distance [16].

Figure 1- 4. Relative density is shown here with a series of identical objects placed in a grid, all equally spaced but appearing more dense in the view as they recede into the distance.

Aerial perspective, or atmospheric perspective, refers to the color and clarity shift associated with distance from the viewer. This effect is present at close distances but is so minor that it is difficult to detect. At large distances the effect is sufficient to see a blue shift in color due to Rayleigh scattering and Gaussian blurring from diffraction of light through the air.
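As a point of reference for the color shift described above (textbook Rayleigh scattering behavior, not a result from this work), the scattered intensity scales roughly as 1/λ⁴, so over a long air path proportionally more short-wavelength (blue) light is redirected into the line of sight than red, giving distant terrain its bluish cast.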

Figure 1- 6. Aerial perspective can be seen in this image. The mountains in the distance appear bluer than they are in reality and are blurred compared to objects closer to the viewer.

Motion perspective or motion parallax is the change in appearance and position in the visual field of an object based on movement of the viewer relative to the object. For example, this occurs any time you walk around an object. The approach in chapter 3 will discuss this cue in greater detail.

Figure 1- 8. Motion perspective shown using sequential pictures from left to right as the viewer moves past objects. The view of the objects changes as the relative position between the viewer and the objects changes.

Convergence or vergence refers to the ocular motion necessary to keep an object in the center of view of each eye. The most extreme example of this is when a person points each eye at extreme angles toward the nose, commonly referred to as cross-eyed. Being cross-eyed is the extreme of what our body naturally does as we focus on objects closer to us. As an object comes closer, the eyes naturally point inward to keep it in view.
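To put this cue in simple geometric terms (a textbook relation assumed here for illustration), for an interpupillary distance p and a fixation distance d the total vergence angle is approximately

θ ≈ 2 arctan(p / 2d),

which for p ≈ 65 mm is about 7° at half a meter but well under 1° at 10 m, consistent with Figure 1-2's ranking of vergence as primarily a personal-space cue.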

Figure 1- 7. Vergence is part of the ocular motor visual cues and is shown here as a pair of eyes looking at objects at various distances from the viewer. The angle of the eye positions is shown with the white lines running from the eyes to the objects.

Accommodation refers to the change in the optical power of the human eye when focusing at different depth planes. If you have ever placed something too close to your eye and strained to focus on the item, that straining feeling is the ciliary muscle of your eyes attempting to flex the lens to focus on the item. The eyes do this for many distances, but it is most evident in closer regions due to the strain.
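In optometric terms (a standard relation, not from this thesis), the accommodative demand for an object at distance d is roughly 1/d diopters: about 2 diopters at 0.5 m, 4 diopters at 0.25 m, and essentially zero beyond a few meters, which matches the observation above that the strain is most noticeable at close range.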

Figure 1- 9. Accommodation is the change of the focus of the eye. As focus changes from the green cube in the front to the yellow cube in the back, the objects not at the focus blur.

Binocular disparity, stereopsis, and diplopia refer to the difference in optical information received by each eye based on the natural spatial separation of the human eyes. This can be easily seen by placing your hand touching your nose and closing one eye at a time. One eye will see the front of your hand while the other eye can see the back of your hand. When you have both eyes open the brain does a good job of naturally stitching these different views into a single information stream.
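A common geometric approximation (assumed here purely for illustration) relates the difference between the two eyes' views to depth: for eye separation p, fixation distance d, and a depth offset Δd between two points, the angular disparity is roughly

δ ≈ p · Δd / d²,

so the same physical depth difference produces a rapidly shrinking disparity as viewing distance grows, which is why stereopsis is ranked highest in personal space in Figure 1-2.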

Figure 1- 10. Binocular disparity is the difference between the images seen by each of your eyes; here we have an image on the left and one on the right taken from spatially separated positions.

1.2.4 Photophoresis

The photophoretic trap is based on the process of photophoresis, whereby a particle can be suspended in gas or liquid when illuminated by a sufficiently strong light source. The photophoretic force can be defined as,

Fph = −(9π µa² a I J1) / (2 ρa T (kf + 2ka)) (1.1)

Where I is the illuminating (plane wave) intensity, a is the particle sphere radius, µa is the gas viscosity, ρa is the gas mass density, T is the average temperature, ka and kf are the thermal conductivities of the gas and the particle, respectively, and J1 is the asymmetry factor of the internal heat sources [17].

Equation 1.1 shows a number of factors that go into the photophoretic force. The complexity of photophoretic trapping is compounded by the directionality of the force, the time variance of the force, and other external forces acting on the suspended particle. However, at the most basic level, equation 1.1 offers a good understanding of the basic variables that can be manipulated to control the photophoretic trapping force.
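To give a feel for the magnitudes involved, the short Python sketch below simply evaluates equation 1.1 as written above. The particle size, thermal conductivities, intensity, and asymmetry factor used here are illustrative assumptions only, not measured values from this work.

```python
import math

def photophoretic_force(I, a, mu_a, rho_a, T, k_a, k_f, J1):
    """Evaluate the photophoretic force expression of Eq. (1.1) (sketch).

    I     : illuminating intensity [W/m^2]
    a     : particle radius [m]
    mu_a  : gas dynamic viscosity [Pa*s]
    rho_a : gas mass density [kg/m^3]
    T     : average gas temperature [K]
    k_a   : gas thermal conductivity [W/(m*K)]
    k_f   : particle thermal conductivity [W/(m*K)]
    J1    : asymmetry factor of the internal heat sources (dimensionless)
    """
    return -(9 * math.pi * mu_a**2 * a * I * J1) / (2 * rho_a * T * (k_f + 2 * k_a))

# Illustrative (assumed) values: a ~5 um absorbing particle in room air,
# illuminated at roughly 1e9 W/m^2 (tens of mW focused to a micron-scale spot).
F = photophoretic_force(I=1e9, a=5e-6, mu_a=1.8e-5, rho_a=1.2,
                        T=300, k_a=0.026, k_f=1.0, J1=-0.5)
weight = 1000 * (4 / 3) * math.pi * (5e-6)**3 * 9.8  # weight of a ~1 g/cm^3 sphere of the same radius
print(f"photophoretic force ~ {F:.2e} N vs. particle weight ~ {weight:.2e} N")
```

Even with these rough numbers the photophoretic force comes out orders of magnitude larger than the particle's weight, which is why such particles can be levitated and moved against gravity.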

1.3 Overview of the Text

The rest of the thesis is organized as follows: Chapter 2 discusses several directions of advancement in OTD displays and presents new work on several topics including real time animation, directional anisotropic scattering, and multiparticle trapping using diffractive orders. Chapter 3 discusses the fundamental limitation of volumetric displays to create virtual images and a possible solution to achieve a visually similar effect using perspective projections. Chapter 4 offers the conclusion of these advancements and describes several directions for future work.


CHAPTER 2: IMPROVING PHOTOPHORETIC TRAP VOLUMETRIC DISPLAYS

I hereby confirm that the use of this article is compliant with all publishing agreements.

© Improving photophoretic trap volumetric displays [2019] Optical Society of America. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modifications of the content of this paper are prohibited. https://www.osapublishing.org/ao/abstract.cfm?uri=ao-58-34-G363

This article includes the work of several co-authors. I was the primary author and conceived the idea for real-time animation as a next step in development, as well as the methods and experiments associated with real-time animation. I performed several of the experiments on multiparticle trapping with Josh Laney. I also performed several experiments with directional scattering contributing to this work.

Improving Photophoretic Trap Volumetric Displays

Wesley Rogers, Josh Laney, Justin Peatross, Daniel Smalley

2.1 Abstract

Since the introduction of optical trap displays in 2018, there has been significant interest in further developing this technology. In an effort to channel interest in the most productive directions, this work seeks to illuminate those areas that, in the authors’ opinion, are most critical

to the ultimate success of optical trap displays as a platform for aerial 3D imaging. These areas include trapping, scanning, scaling, robustness, safety, and occlusion.

© 2019 Optical Society of America

2.2 Introduction

Optical trap displays (OTDs) like the photophoretic trap volumetric display, provide screenless, optical real images in free space [1]. OTDs are part of the point family [2] of 3D displays and, as such, they do not “clip” at the display boundary, they have constant resolution throughout the display volume, and they are capable of creating display geometries forbidden to wave (e.g., holographic) and ray (e.g., light field) displays. Because OTD displays have bandwidth requirements that are dependent on the number of voxels in the image, they can have much lower bandwidth requirements for sparse scenes than wave and ray displays of comparable size. Within the family of point displays, optical trap displays also stand out for their ability to project into image volumes that may be larger than the physical display itself.

Notwithstanding their manifold advantages, optical trap displays will require a number of improvements before they reach a scale at which they will be broadly useful. The authors' current goal is to achieve images with a linear dimension in excess of 20 cm [see Fig. 11(c)]. The authors' earliest efforts involved a single particle and a single beam (the beam serving as both the trap and the illumination beam), which were capable of creating 1 cm vector images of low complexity that were rewritten at rates greater than 10 frames per second to provide persistence of vision [see Fig. 11(a)]. At the time of this writing, it is possible to create full-color vector images at video rates, or much more detailed rastered images at less than video rates, using multiple beams: one violet (405 nm) beam for trapping the particle and a set of red, green, and blue lasers to provide primaries for color mixing [Fig. 11(b)]. The respective powers of these beams are 80 mW, 24.4 mW, 30.5 mW, and 15.89 mW. The presence of the visible 405 nm trapping beam will affect color mixing but has a limited perceptible effect because of the low sensitivity of the human eye to near-UV wavelengths [3].

The current display capabilities are limited by the quality of the trap, the variation of the particles, and the speed of the scanning system. To achieve the target dimensions for next-generation OTD displays, and to make these displays suitable for the lay user, researchers will need to improve all of the following: trapping, scanning, scaling, robustness, and safety. This paper also briefly discusses the potential impact of occlusion in OTDs and suggests possible early target applications for a scaled display.

Figure 2-11 (a) Photo of single-color, single-particle, vector, video rate image, 1 cm tall, circa 2016. (b) Photo of three-color, single-particle, line-raster image (not video rate), 1 cm tall, circa 2018. (c) Conceptual image of three-color, multiple-particle, volume-raster image at video rate, 10 cm tall.

2.3 Trapping

The quality of the optical trap within an OTD is critical to particle pickup, particle motion, and particle resilience to environmental conditions. The trap must be capable of picking up a particle in a robust, repeatable fashion. Currently, this is accomplished by scanning the trap through a reservoir of particles (though other methods are possible [4]). The trap must also be capable of holding the particle for long periods of time as it is moved through each image point over and over again as each frame is drawn. In the current OTD prototype, both pickup and hold are extremely variable, with some particles holding in place for up to 15 h and withstanding up to

1 liter/s airflow [1] while many trap attempts fail to hold even a few seconds. This variability stems from the fact that we use aberration traps that have hundreds of trapping sites [5] of differing size and shape. This is helpful for “pickup” as most particles find a home; however, the variation in sites also results in many trapping events being sub-optimal, and it is difficult to know whether the particle is located in a trap that possesses high contrast and has a morphology and dimension suited to the particle. The second source of variability is from the distribution of the particles themselves. The particle reservoir contains particles with sizes varying from less than 1 μm to tens of micrometers in diameter. The particle shapes also vary widely. This variety makes it possible to achieve good trapping results in the limit of large number of trials as many trap and particle parameters are represented, but high variability precludes repeatability. The characteristics of particles have a significant effect on the performance of the display. Moving from experimentation to optimization will require the isolation of a small, uniform set of traps and a uniform population of particles. Aberration will likely be replaced by other mechanisms for generating photophoretic optical traps, including holographic traps [6–8], phase contrast traps

[9,10], and Poisson spots. To provide a uniform population of particles, the authors suggest the use of uniform coated microspheres [11,12] to replace black liquor. By testing one trap and one particle at a time, optimal pairings can be identified, and the effect of changes quantified. These tests could be facilitated by the use of a liquid crystal on silicon device, which has already been used to

create aberration traps [1] and could be updated to quickly provide a wide variety of trap types, serially, during testing.

2.4 Scanning

Once an optimal particle and trap pair has been identified, attention should be given to improved scanning that will take advantage of the more robust individual trapping conditions.

The current prototype uses galvanometric scanners and stage-mounted lenses to scan the space. However, it may be possible to replace these, all or in part, with solid-state scanning solutions that could increase total scan speed. Care must be taken to utilize a scanning solution that preserves the trap as it is scanned. Acousto-optics and electro-optics may be able to do this. It is less clear that liquid lenses or pneumatic lenses could do this without the need for active wavefront correction. Galvanometric scanning mirrors are, in general, achromatic, relatively fast (of the order of a few kilohertz bandwidth) [13], of large aperture (as large as 30 mm and above), and conservative of trap morphology as they scan. Given their advantages, rather than eliminate galvanometric scanners entirely, it might be best to use solid-state technologies, such as static gratings, to trap multiple particles simultaneously [14,15]. Then, with multiple particles in tow, an array of trapped particles can scan through a volume in a single pass, thereby increasing the sophistication of the images while simultaneously reducing the complexity of the scanning hardware. This approach suggests an alternative method for scaling the OTD image that does not require fast scanning, as described below.
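A back-of-the-envelope sketch shows why scan speed limits single-particle image complexity; the numbers below are assumptions chosen for illustration, not measurements from this work.

```python
# Rough single-particle scanning budget (illustrative assumptions only).
points_per_second = 3000        # assumed effective point rate for a kHz-class galvo pair
refresh_rate_hz = 10            # assumed minimum persistence-of-vision refresh rate
points_per_frame = points_per_second // refresh_rate_hz
print(f"~{points_per_frame} addressable image points per frame for one particle")
# With N particles scanned in parallel by a static grating, the same simple
# scan pattern yields roughly N times as many image points per frame.
```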

2.5 Scaling

In the current OTD prototype, a single particle is scanned through a complicated path [Fig. 12(a)]. The next-generation display may instead scan many particles through a simple path [Fig. 12(b)]. Similar approaches for multiple voxel generation have been shown previously

[16,17]. As previously mentioned, changing to parallel optical beams will allow the reduction from dual-axis scanning to single-axis scanning. This reduces the complexity of the scanning while making it possible to create images with greater sophistication at video rates. Based on current maximum velocities [1], it should be possible to make images with a maximum linear dimension of 20 cm or 8 in. (before any further optimization of trap and hold) if every trapped particle is allowed to travel in a straight line during the frame. Given that aberration traps are inefficient, dividing their optical power over hundreds of trapping sites [5] (most of which are unused), we expect that the optical power freed by optimized traps should make it possible to trap a large number of particles in an array of identical trap sites [Fig. 12(e)] without greatly increasing the current optical power of the system. One straightforward method of duplicating traps is by the addition of a Dammann or similar grating to the display output [14]. This simple modification is shown below in Fig. 12(b). Figure 12(c) shows a single-particle system with an aberrated lens as the final optic and Fig. 12(d) shows a similar system with a rectangular amplitude grating added after the aberrated lens to create multiple traps—two of which hold a particle. Ideally, the display would have tens, hundreds, or thousands of particles, trapped and moving together. Multiple-particle systems of up to several thousand particles have been demonstrated previously [14,15,18,19]. In such a scenario, the complexity of the display is shifted from the scanning sub-system to the illumination sub-system. The illumination sub-system will now be responsible for illuminating each particle independently of the others. The authors estimate that a practical upper bound on the number of simultaneously illuminated particles will probably be of the order of millions of points assuming one-to-one pixel to particle mapping from commonly available spatial light modulator (SLM) products at the time of

Figure 2-12. Multi-particle scaling. (a) Single-particle display with complex pathing and simple illumination. (b) Multiple-particle display with a simple path and complex illumination. (c) Lab result showing a single-particle system (image courtesy of Joel Rasmussen). (d) Lab result showing multiple particles in a linear array from a single laser source. (e) Concept showing a planar array of suspended particles rastering a volume image, video rate refresh, large scanning volume.

publication. Such numbers might still be considered sparse by 3D display standards [20].

Because the bandwidth of OTD displays scales with the number of particles, the bandwidth required for a one-million particle system could be below the 400 million pixels per second provided by each channel of a commodity graphics card [21,22]. This display bandwidth is several orders of magnitude below that required for a holographic display with the same view volume, making it possible to contemplate the use of OTDs for real-time applications, such as face-to-face telepresence. However, achieving this scale will require not only the trapping improvements described above, but also a strategy for making the display robust to environmental disturbances.
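The bandwidth claim above can be sanity-checked with simple arithmetic; the refresh rate assumed in the sketch below is illustrative and not a figure from the paper.

```python
# Illustrative OTD bandwidth estimate for a scaled, multi-particle display.
particles = 1_000_000                          # simultaneously illuminated particles (target scale)
refresh_hz = 30                                # assumed volume refresh rate
updates_per_second = particles * refresh_hz    # one colored point update per particle per refresh
gpu_channel_pixels_per_second = 400_000_000    # per-channel figure quoted above for a commodity card
print(f"{updates_per_second:.1e} point updates/s vs "
      f"{gpu_channel_pixels_per_second:.1e} pixels/s available")
```

With these assumptions the one-million-particle display needs on the order of 3 × 10⁷ point updates per second, comfortably below the quoted per-channel figure.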

2.6 Robustness

OTD displays based on photophoretic trapping are susceptible to external environmental factors such as air movement and destructive user interaction (e.g., the user passes their hand through the image, knocking out the particle). In order for an OTD display to perform reliably outside a laboratory setting, improvements will be needed to counteract external factors.

Improving trapping as described earlier in this paper would go a long way to mitigate the effects of air flow but would do nothing to prevent users from dislodging particles. To address this second case, it might be possible to replace the lost particle(s) quickly enough that the user would be unaware that a disruption had occurred. In the laboratory, using the automated pickup method described earlier, the authors were able to achieve a pickup success rate of over 87% [1].

If the particle reservoir were to be placed directly beneath the image volume it might be possible to pick up new particles as quickly as once a frame. Having the capacity to replace the particle at the frame rate would effectively render the display insensitive to user disruption so long as that disruption was temporary. There is also little danger of reservoir exhaustion as the volume of the

total number of particles lost would amount to less than one cubic centimeter even after a year of continuous use. For those concerned about air pollution, it should be noted that this worst-case cubic centimeter of cellulose dust would constitute only one small fraction of the tens of kilograms of dust—some of which is cellulose-based—that is generated in the average

American household every year [23]. It should be further noted that the display, which is equipped with a dust-trapping laser beam, could actually be configured to act as an air sponge, scanning for dust and trapping it to leave the immediate environment cleaner than it found it.
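The "less than one cubic centimeter per year" figure is easy to reproduce with order-of-magnitude arithmetic; the particle size and loss rate below are assumptions for illustration, not measured values.

```python
import math

# Illustrative worst-case particle consumption (assumed values only).
loss_rate_per_second = 10                 # assume one lost/replaced particle per frame at 10 Hz
seconds_per_year = 365 * 24 * 3600
particle_radius_m = 5e-6                  # assume a ~10 um diameter cellulose particle
particle_volume_m3 = (4 / 3) * math.pi * particle_radius_m**3
total_volume_cm3 = loss_rate_per_second * seconds_per_year * particle_volume_m3 * 1e6
print(f"~{total_volume_cm3:.2f} cm^3 of particles lost per year of continuous use")
```

Under these assumptions the total is a fraction of a cubic centimeter, consistent with the claim above.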

2.7 Safety

The use of class 3B lasers in the current OTD display introduces some safety concerns that should be addressed before the display is developed outside the lab. A particle primitive in an OTD display can be viewed from virtually any direction. From most angles only scattered light from the particle can enter the viewer's eyes. This scattered light is strongly diverging and is unlikely to be dangerous, just as light bouncing off any other round, diffuse surface in the room would tend not to be dangerous—and more so for OTD particles given their very small reflective cross section. The scattered optical power is estimated to be of the order of nanowatts, allowing for comfortable viewing in average indoor lighting conditions. Brightness varies with illumination power as well as particle characteristics such as size and scattering pattern.
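The nanowatt estimate can be illustrated with simple solid-angle arithmetic; every value in the sketch below is an assumption chosen for illustration rather than a measurement from the display.

```python
import math

# Illustrative estimate of scattered power entering a viewer's eye.
scattered_power_w = 1e-3        # assume ~1 mW of illumination is actually scattered by the particle
viewing_distance_m = 0.5        # assume the viewer is 50 cm from the image point
pupil_diameter_m = 4e-3         # typical indoor pupil diameter
pupil_area = math.pi * (pupil_diameter_m / 2) ** 2
# Assume roughly isotropic scatter over the full sphere around the particle.
power_at_eye = scattered_power_w * pupil_area / (4 * math.pi * viewing_distance_m ** 2)
print(f"~{power_at_eye * 1e9:.1f} nW entering the pupil")
```

Spreading even a milliwatt of scattered light over a full sphere leaves only a few nanowatts at the pupil, which is why the scattered component is comfortable to view.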

However, if the viewer is staring along the line that connects the OTD image and the OTD projection aperture, they run the risk of having unscattered light enter the eye. This creates a practical limitation on view angle in the current display prototype without the use of protective equipment. The view angle of the display is not affected by this limitation, but safety measures must be taken in order to access this portion of the view. This unscattered light is still diverging but could still be at a power density above maximum permissible exposure (MPE)—especially

for the trap light, which cannot be dialed down as readily as illumination light while still preserving display function. It is possible that improving the display as described above will obviate the need for any additional solution as the traps become more efficient and more diffuse in multi-particle systems. However, it is worth exploring options for increased safety in the near term.

The easiest way to make an OTD display safe at present would be to simply restrict the viewer to the large subset of view angles free of unscattered light. A second solution would be to have a second, angularly offset projector that could take over if the viewer should stray into the unscattered light. Another, preferable, solution would be to replace the violet trap light with infrared light which, depending on the wavelength, can have an MPE approximately 2 orders of magnitude larger than the MPE of visible light [24]. Ultimately, it is likely that a combination of these solutions will be employed to make the scaled display as safe as possible allowing for unfettered viewing and paving the way for advanced improvements like occlusion, that would make particularly good use of the OTD’s potentially unlimited view zone.

2.8 Occlusion

Occlusion was long thought to be impossible for volumetric displays, but this has been shown not to be a fundamental limitation [25,26] so long as volumetric display point primitives can be made to scatter anisotropically. Anisotropic scatter is more difficult for some point primitives than for others. Plasmas, for example, scatter in a roughly isotropic manner. However, optically trapped particles can also scatter light in an isotropic manner [Fig. 13(c)] or an anisotropic manner[Fig. 13(d)] [27]. Figure 13(a) shows a cross section of a setup, in which a trapped particle is surrounded by several mirrors, making it possible to view and photograph a particle from multiple angles simultaneously [27]. It should be noted that the beam shown in Fig.

13(b) consists of 405 nm light and 532 nm light for trapping and illumination, respectively.

These two superimposed beams remain in a fixed orientation through all trials. In Fig. 13(e) we see simultaneous photographs showing an isotropic particle scattering with similar intensity across multiple angles. In Fig. 13(f) we see a particle scattering anisotropically—strongly in the

104° direction but weakly in the 26° direction. In order to achieve occlusion in OTD images, it must be possible for each particle to have control over its direction of scatter, so that one set of particles can wane in intensity while another, occluding, set waxes in intensity. The alternation of brightness is demonstrated in its most primitive form in Fig. 13(g) where two particles, one trapped above the other, alternate in brightness as the viewer position changes. This demonstration helps to establish the possibility but not necessarily the feasibility of occlusion in

OTD systems. To achieve controlled, directional scatter in OTD systems would require a considerable feat of engineering. Visualization 1 shows observed particle behavior which supports the feasibility of predictable particle dynamics, as well as observed behavior of rapid irregular motion. Previous work explores particle jitter and the relationship between simultaneously trapped particles [28]. The results in Visualization 1 were gathered experimentally using charcoal particles caught near the focus of a 4.5 W, 532 nm laser beam (Coherent Verdi) directed from left to right. The upper right video (labeled Rotating Particle) shows a particle caught in 50 torr of air. The other three videos were captured at 760 torr of air. The movies were taken using a

Pulnix TM7 CCD camera at 30 frames per second through a 20× microscope objective. The experiment labeled "Rapid Tumbling" shows what is believed by the authors to be a common particle behavior and suggests some of the technical challenges that will need to be addressed in order to create an OTD capable of asserting control over particle behavior. The change in ambient pressure of the experiment has been shown [29] to affect particle trapping power requirements. All experiments were performed at approximately 760 torr except the "rotating

particle" experiment. It was our assumption that the movement of particles in the trap would be rapid and irregular (other researchers have suggested as much [30,31]). This video, however, shows motion that is ponderous, regular, and nearly periodic. A rotating anisotropic particle could be used to strobe through view angles like a lighthouse to provide angular control and occlusion effects.

These videos show observed behavior critical to the proposed idea of a multi-particle OTD. The observation of both fixed relative orientation and fixed relative spatial position allows for particle arrays to be assembled that are sufficiently stable to maintain the independent positions required to be used as independent voxels in a multi-particle OTD. Furthermore, the fixed particle orientation experiment suggests that complex particle geometries could be employed as a method of creating controlled anisotropic point primitives within a multi-particle OTD.

Specifically, the particle on the left in the experiment “Fixed Relative Particle Orientation” shows a profile shape of a two-sided corner reflector geometry that can be highly directional in scatter.

To illustrate the possibilities and the challenges, the authors would suggest two potential strategies for creating occlusion in OTDs. The first is the "intelligent particle" method. In this method one begins with a particle that is big enough to reflect directionally; a faceted particle might be preferred. Then the particle is made to tumble with a known period within the trap. Slowly rotating trapped particles have been observed (see examples of common particle dynamics in

Visualization 1). Once the rotational period and phase are determined (this could be done by probing the particle in advance of the illumination with an invisible IR beam) the particle

Figure 2-13. Occlusion. (a) Anisotropy makes it possible to eclipse or occlude objects. (b) Setup for observing scatter from multiple angles simultaneously. (c) Isotropic scatter. (d) Anisotropic scatter. (e) Particle exhibiting isotropic scatter; this particle has relatively uniform scatter over 4π steradians. (f) Particle exhibiting anisotropic scatter. (g) Two particles, one above the other, demonstrating alternating brightness moving from front to back [27].

could be illuminated in synchrony with its rotation to provide controlled directional scatter.

Within the “intelligent particle” approach there may also be room for luminescent or active particles [32,33]. The second method, the “intelligent illumination” method, was suggested as an alternative [34]. In this method, the spherical particle is illuminated only partially. The illumination light is carefully focused onto the region on the spherical particle’s surface that will result in light scattering in a desired direction. Both of these approaches require a great degree of control of the illumination and/or the particle dynamics. The experiment shown in Fig. 13(b) does not demonstrate control over anisotropic scattering but instead suggests that irregular particle morphologies can be trapped and illuminated producing anisotropic scattering, as shown in Figs. 13(e)–13(g). The development of such a system would certainly be nontrivial, and the bandwidth required for such a display would scale linearly with the number of viewers; however, if developed successfully, the creation of a free-space display capable of occlusion would overcome one of the most persistent, most ubiquitous, and most vexing limitations of volumetric displays.
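As a purely conceptual sketch of the timing logic implied by the "intelligent particle" method described above, the toy Python model below strobes illumination only when a rotating, directional particle faces a chosen view angle. The rotation period, phase, beam width, and viewer angle are hypothetical values for illustration; this is not code from the display.

```python
# Toy timing model for the "intelligent particle" idea: fire the illumination
# laser only when the particle's reflective facet points toward the viewer.
ROTATION_PERIOD_S = 0.050      # hypothetical measured rotation period (20 rev/s)
ROTATION_PHASE_S = 0.012       # hypothetical measured phase offset at t = 0
BEAM_WIDTH_FRACTION = 0.05     # fraction of a rotation over which scatter reaches the viewer

def illuminate_now(t, viewer_angle_deg):
    """Return True if the facet points at the viewer (within the beam width) at time t."""
    facet_angle_deg = 360.0 * (((t - ROTATION_PHASE_S) / ROTATION_PERIOD_S) % 1.0)
    error = abs((facet_angle_deg - viewer_angle_deg + 180.0) % 360.0 - 180.0)
    return error <= 360.0 * BEAM_WIDTH_FRACTION / 2.0

# Example: over 0.1 s of 0.1 ms time steps, illuminate only toward a viewer at 104 degrees.
on_times = [round(i * 1e-4, 4) for i in range(1000) if illuminate_now(i * 1e-4, 104.0)]
print(f"{len(on_times)} of 1000 time steps illuminated; first few: {on_times[:5]}")
```

The same bookkeeping, duplicated per viewer, is what makes the bandwidth of an occlusion-capable display scale with the number of viewers, as noted above.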

The advantages of an occlusion-capable OTD display would extend far beyond the ability to make images that self-eclipse and look self-solid. In fact, the ability to control directional scatter in such a display is no less than the power to control what every viewer sees—even when they are all looking at the exact same spot in the view volume. This prospect has remarkable ramifications for the utility of the display. Each viewer can be gazing at the same volume of space but seeing something customized to their proclivities, or security clearance, or native language. Imagine a future in which such a display exists in a family living room, as depicted in

Figs. 13(h)–13(j). In the first panel, the family's daughter traces her finger along the surface of a Möbius strip as part of her "Discovering Math 101" class. In the second panel, the mother, who is

gazing into the same volume, is seeing and talking to a volumetric image of grandpa. In the final

volume, dad is living his dream of winning the World Cup as part of an immersive volumetric sports program. When coupled with highly directional parametric speakers, each of these participants could be having entirely independent or carefully interwoven immersive 3D experiences, yet none of them are wearing goggles, or staring at a screen. Their eyes are visible to us and to each other (how different from the family rooms of today!).

2.9 Applications

The number of possible applications of OTD displays grows rapidly with the display's size, and it is worth a pause to consider what applications might be most appropriate for a next-generation display with a linear dimension of 20 cm (or approximately 8 in). The authors suggest that early target applications might include aerospace surveillance, image-guided catheterization, and corporeal AI agents. All of these are applications which leverage the OTD's unique advantages.

Figure 2-14. Interactive applications. (a) Composite of photos from first OTD animation (see Visualization 2). (b) Satellite surveillance concept. (c) Guided catheterization concept. (d) Corporeal AI agent "holonurse" concept.

2.9.1 Surveillance

In light of recent announcements by major corporations of plans to create large satellite constellations [35], the need for surveillance to avoid satellite conjunctions is greater than ever [36]. Currently, practitioners abstract this information from a 2D screen.

A free-space volumetric display could make the spatial relations of these objects viscerally apparent even as they move in relatively complex nonlinear trajectories. In air traffic control this technology could mitigate the cognitive loading and increase decision confidence in one of the more stressful jobs still performed by humans [37].

2.9.2 Medicine

During catheterization, medical practitioners must navigate tortuous 3D paths that get progressively more complicated as they approach the human heart. A volumetric display updated with x-ray data could help practitioners understand the 3D path they are navigating and perhaps avoid arterial abrasion and the attendant possibility of embolism and later deep vein thrombosis [38] [Fig. 14(c)].

Both of these applications are similar in that they value precise spatial relationships above other considerations such as photorealism. The datasets are also sparse by 3D standards [20]. These criteria help to make these attractive first applications for scaled volumetric displays.

2.9.3 Corporeal AI Agents

In Fig. 14(a), and in Visualization 2, the authors demonstrate a simple, animated OTD image—a color stick figure walking, and leaping, both in space and on the researcher’s finger.

This early-stage test for interactive images suggests the possibility of corporeal agents that exist within an individual's interactive space—within arm's reach. The researchers imagine a scenario like the holonurse shown in Fig. 14(d), in which a corporeal AI agent is tasked with helping an aging loved one. This "holonurse" could help with medication compliance, serve as a portal to emergency healthcare services, and point out fall dangers. This agent could be projected from a fixed OTD, a projector on a rail, or from a portable device so that it could remain with the senior as they went about their daily tasks. The agent would interact naturally within the senior's space and never once require them to look at a screen.

Real-time animation was achieved on an OTD through continued development of software focused on drawing speed, with additional features required for proper display of animation. These included mechanisms for tracking the position within frames while drawing, to allow frames to be seamlessly stitched together; lateral movement offsets, to allow the animation to walk across the display while reusing the positional data of earlier frames; and frame repetition, to allow longer animation sequences to be displayed despite limitations in memory. One of the main challenges we faced in animation was the memory storage limitation of the Arduino Mega that our original prototype was built on. The Arduino Mega has 256 kB of memory, in which we stored both the functional code of the display and the animation data. In order to optimize for speed, we removed all real-time interpolation computations, as these slowed down the response time of the display. Removing these interpolation steps allowed for higher frame rates, giving stronger persistence of vision; however, it came at the cost of memory, as we now precomputed interpolation and saved all the values into the memory of the Arduino Mega. This created a trade-off between speed and available data. The data was also limited by the complexity of the individual frames; more complex geometries required more positional data per frame.
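To make the frame-stitching and offset bookkeeping concrete, here is a minimal Python sketch of the playback logic described above. It mirrors the ideas of precomputed point lists, lateral offsets, and frame repetition, but it is an illustration only, not the Arduino firmware used in the prototype; the point coordinates and step size are made up.

```python
# Minimal sketch of animation playback with precomputed frames,
# lateral offsets, and frame repetition (illustrative only).
frames = [
    [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (2.0, 0.0, 0.0)],   # precomputed, pre-interpolated points
    [(0.0, 0.5, 0.0), (1.0, 2.5, 0.0), (2.0, 0.5, 0.0)],
]
sequence = [0, 1, 0, 1]           # frame repetition: reuse stored frames to lengthen the animation
step_per_frame = (0.2, 0.0, 0.0)  # lateral offset so the figure "walks" across the display

def draw_point(x, y, z):
    # Stand-in for commanding the scanners/illumination to a single image point.
    print(f"point at ({x:.2f}, {y:.2f}, {z:.2f})")

def play(frames, sequence, step):
    offset = [0.0, 0.0, 0.0]
    for frame_index in sequence:
        for x, y, z in frames[frame_index]:
            draw_point(x + offset[0], y + offset[1], z + offset[2])
        offset = [o + s for o, s in zip(offset, step)]   # shift the next frame laterally

play(frames, sequence, step_per_frame)
```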

2.10 Conclusion

These applications point to a screenless future in which our data becomes a physical part of the world around us. In so doing they could give us godlike creative powers—to literally bring forth new creations from the dust, breathe AI life into them, and send them forth to live with us.

By exploring the improvements that should be part of the next generation of OTD displays the authors hope to focus the efforts of interested researchers and establish a vision for a new screenless paradigm for interacting with data.

Funding. National Science Foundation (1846477).

2.11 References

1. D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, and K. Costner, “A photophoretic-trap volumetric display,” Nature 553, 486–490 (2018).

2. D. Smalley, T. C. Poon, H. Gao, J. Kvavle, and K. Qaderi, “Volumetric displays: turning 3-D inside-out,” Opt. Photonics News 29(6), 26–33 (2018).

3. L. T. Sharpe, A. Stockman, W. Jagla, and H. Jägle, "A luminous efficiency function, V*(λ), for daylight adaptation," J. Vis. 5(11), 962–965 (2005).

4. A. Turpin, V. Shvedov, C. Hnatovsky, Y. V. Loiko, J. Mompart, and W. Krolikowski, “Optical vault: a reconfigurable bottle beam based on conical refraction of light,” Opt. Express 21, 26335– 26340 (2013).

5. J. Peatross, D. Smalley, W. Rogers, E. Nygaard, E. Laughlin, K. Qaderi, and L. Howe, “Volumetric display by movement of particles trapped in a laser via photophoresis,” Proc. SPIE 10723, 1072302 (2018).

6. H. He, N. R. Heckenberg, and H. Rubinsztein-Dunlop, "Optical particle trapping with higher-order doughnut beams produced using high efficiency computer generated holograms," J. Mod. Opt. 42, 217–223 (1995).

7. P. Zhang, Z. Zhang, J. Prakash, S. Huang, D. Hernandez, M. Salazar, D. N. Christodoulides, and Z. Chen, “Trapping and transporting aerosols with a single optical bottle beam generated by moiré techniques,” Opt. Lett. 36, 1491–1493 (2011).

8. P. Xu, X. He, J. Wang, and M. Zhan, "Trapping a single atom in a blue detuned optical bottle beam trap," Opt. Lett. 35, 2164–2166 (2010).

9. M. Woerdemann, C. Alpmann, M. Esseling, and C. Denz, “Advanced optical trapping by complex beam shaping,” Laser Photonics Rev. 7, 839–854 (2013).

10. Z. Gong, Y. L. Pan, G. Videen, and C. Wang, “Optical trapping and manipulation of single particles in air: principles, technical details, and applications,” J. Quant. Spectrosc. Radiat. Transfer 214, 94–119 (2018).

11. V. G. Shvedov, A. V. Rode, Y. V. Izdebskaya, A. S. Desyatnikov, W. Krolikowski, and Y. S. Kivshar, “Giant optical manipulation,” Phys. Rev. Lett. 105, 118103 (2010).

12. S. K. Bera, A. Kumar, S. Sil, T. K. Saha, T. Saha, and A. Banerjee, “Simultaneous measurement of mass and rotation of trapped absorbing particles in air,” Opt. Lett. 41, 4356–4359 (2016).

13. THORLABS, “Large beam diameter scanning galvo systems,” 2019, https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=6057.

14. V. G. Shvedov, C. Hnatovsky, N. Shostka, A. V. Rode, and W. Krolikowski, “Optical manipulation of particle ensembles in air,” Opt. Lett. 37, 1934–1936 (2012).

15. F. Liu, Z. Zhang, S. Fu, Y. Wei, T. Cheng, Q. Zhang, and X. Wu, “Manipulation of aerosols revolving in taper-ring optical traps,” Opt. Lett. 39, 100–103 (2014).

16. Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35, 17 (2016).

17. K. Kumagai, S. Hasegawa, and Y. Hayasaki, “Volumetric bubble display,” Optica 4, 298–302 (2017).

18. V. G. Shvedov, A. V. Rode, Y. V. Izdebskaya, A. S. Desyatnikov, W. Krolikowski, and Y. S. Kivshar, “Selective trapping of multiple particles by volume speckle field,” Opt. Express 18, 3137–3142 (2010).

19. F. Liu, Z. Zhang, Y. Wei, Q. Zhang, T. Cheng, and X. Wu, “Photophoretic trapping of multiple particles in tapered-ring optical field,” Opt. Express 22, 23716–23723 (2014).

20. G. E. Favalora, J. Napoli, D. M. Hall, R. K. Dorval, M. Giovinco, M. J. Richmond, and W. S. Chun, “100-million-voxel volumetric display,” Proc. SPIE 4712, 300–312 (2002).

21. S. McLaughlin, C. Leach, S. Gneiting, V. M. Bove, S. Jolly, and D. E. Smalley, “Progress on waveguide-based holographic video,” Chin. Opt. Lett. 14, 010003 (2016).

22. A. Henrie, J. R. Codling, S. Gneiting, J. B. Christensen, P. Awerkamp, M. J. Burdette, and D. E. Smalley, “Hardware and software improvements to a low-cost horizontal parallax holographic video monitor,” Appl. Opt. 57, A122–A133 (2018).

23. N.A.D.C.A. Association, “Why clean air ducts?” https://nadca.com/homeowners/why-clean-air-ducts.

24. “Safety of laser products—Part 1: Equipment classification and requirements,” IEC 60825-1 (2014).

25. O. S. Cossairt, J. Napoli, S. L. Hill, R. K. Dorval, and G. E. Favalora, “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46, 1244–1250 (2007).

26. A. Shiraki, M. Ikeda, H. Nakayama, R. Hirayama, T. Kakue, T. Shimobaba, and T. Ito, “Efficient method for fabricating a directional volumetric display using strings displaying multiple images,” Appl. Opt. 57, A33–A38 (2018).

27. D. Smalley, E. Nygaard, W. Rogers, and K. Qaderi, “Progress on photophoretic trap displays,” in Frontiers in Optics (Optical Society of America, 2018), paper FM4C-2.

28. A. Hendrickson, “Radiometric levitation of opaque particles in a laser beam,” https://www.physics.byu.edu/thesis/archive/2005.

29. N. Eckerskorn, R. Bowman, R. A. Kirian, S. Awel, M. Wiedorn, J. Küpper, M. J. Padgett, H. N. Chapman, and A. V. Rode, “Optically induced forces imposed in an optical funnel on a stream of particles in air or vacuum,” Phys. Rev. Appl. 4, 064001 (2015).

30. O. Jovanovic, “Photophoresis—Light induced motion of particles suspended in gas,” J. Quant. Spectrosc. Radiat. Transfer 110, 889–901 (2009).

31. J. Lin and Y. Q. Li, “Optical trapping and rotation of airborne absorbing particles with a single focused laser beam,” Appl. Phys. Lett. 104, 101909 (2014).

32. B. Redding, S. C. Hill, D. Alexson, C. Wang, and Y. L. Pan, “Photophoretic trapping of airborne particles using ultraviolet illumination,” Opt. Express 23, 3630–3639 (2015).

33. Y. Uno, H. Qiu, T. Sai, S. Iguchi, Y. Mizutani, T. Hoshi, Y. Kawahara, Y. Kakehi, and M. Takamiya, “Luciola: a millimeter-scale light-emitting particle moving in mid-air based on acoustic levitation and wireless powering,” in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2018), Vol. 1, p. 166.

34. P. Blanche, Brigham Young University, Clyde Engineering Building Campus Dr, Provo, UT 84604 (personal communication, 2017).

35. SpaceX, “Starlink mission: mission overview,” 2019, https://www.spacex.com/sites/spacex/files/starlink_mission_press_kit.pdf.


36. J. O’Callaghan, “SpaceX’s Starlink could cause cascades of space junk,” 2019, https://www.scientificamerican.com/article/spacexs-starlink-could-cause-cascades-of-space-junk/.

37. S. Tesh, “The politics of stress: the case of air traffic control,” Int. J. Health Serv. 14, 569–587 (1984).

38. J. A. Heit, M. D. Silverstein, D. N. Mohr, T. M. Petterson, W. M. O’Fallon, and L. J. Melton, “Risk factors for deep vein thrombosis and pulmonary embolism: a population-based case-control study,” Arch. Intern. Med. 160, 809–815 (2000).


CHAPTER 3: SIMULATING VIRTUAL IMAGES IN OPTICAL TRAP DISPLAYS

I hereby confirm that the use of this article is compliant with all publishing agreements.

© Simulating Virtual Images in Optical Trap Displays [2020] Optical Society of America. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modifications of the content of this paper are prohibited.

Simulating Virtual Images in Optical Trap Displays

Wesley Rogers, Daniel Smalley

3.1 Abstract

Optical trap displays (OTD) are an emerging display technology with the ability to create full-color images in air. Like all volumetric displays, OTDs lack the ability to show virtual images. However, in this paper we show that it is possible to instead simulate virtual images by employing a time-varying perspective projection backdrop.

© 2019 Optical Society of America

3.2 Introduction

Volumetric images are defined as having image points co-located with physical point scatterers [1]. The physicality of these volumetric points gives them perfect accommodative cues (because the viewer is focusing on a physical object). However, this definition requires that volumetric images be composed of real image points that exist only within a finite drawing volume. So, to display an optically correct volumetric image of the moon seen through a window would require the OTD to be scaled to astronomical proportions. This is not unlike a movie set or theatrical stage, where props and players must occupy a fixed space even when trying to capture a scene meant to occur outdoors or in outer space. In the theater, this limitation is mitigated by including a flat backdrop that contains pictorial 3D cues such as a road winding to a point (perspective cues) or mountains eclipsing one another (occlusion cues) as they fade (atmospheric cues) into the distance, in order to create the sense of enlarged space. In a modern theater production using projections for backdrops, motion can also be used to simulate parallax. This is effective because the background depicts sites at distances where focus cues like accommodation and vergence are not dominant. This approach could also be used to simulate virtual images. Volumetric images share these challenges and could share these solutions. With purely real volumetric image points, freespace volumetric displays will be forever confined to the drawing volume. What is needed is a ‘backdrop’ for volumetric displays.

In this work we apply and extend the backdrop analogy to simulate virtual images in a photophoretic optical trap display (OTD). OTDs can draw flat and 3D structures in air (Figure 3-15b, c). It is possible to draw an image at the edge of the drawing volume and modify its apparent parallax while tracking the viewer to create an image that behaves optically as if it were located behind the display volume. This technique is referred to in the field of computer graphics as ‘perspective projection’ and it is achieved in OTDs by modifying the scale, shape, and parallax of the content on a background image plane as the viewer moves. The plane may also rotate to face the viewer in situations where the plane is finite (not spherical to encompass the viewer); see Visualization 3. Cossairt [2] points out the limitation that all image points must lie along a line extending from the observer through the display volume. The points that the user perceives in the back plane are no longer volumetric because they no longer coincide with physical scatterers, so they lose the attribute of perfect accommodation [2, 3, 4, 5], but they gain the ability to dramatically increase the perceived size of the image volume. Using a perspective projection, an OTD can simultaneously generate both real volumetric image points for the foreground and simulated, non-volumetric image points for the background, greatly expanding the usefulness of the OTD platform.

Figure 3-15 OTD display and simulated virtual images concept. a. Optical trap display (OTD). b. 3D vector, long-exposure image drawn by OTD. c. Flat, rastered, long-exposure image drawn by OTD (content from Big Buck Bunny). d. Simulated virtual image concept with a flat moving/rotating plane at the back of a draw volume filled with real images/objects such as 3D OTD images or 3D printed objects.

3.3 Theory

3.3.1 Optical Trap Displays

Optical trap displays operate by confining one or more particles in a photophoretic trap. Particles of many different materials, sizes, and geometries have been demonstrated in optical traps [6, 7]. This paper uses cellulose particles estimated at 10 µm [8]. When the trap is moved, the particle is dragged along with it. The particle is moved through all of the image points in succession. When the particle reaches an image point, it is illuminated with a combination of red, green, and blue light. The particle moves through every point in the image several times a second, creating an image by persistence of vision (see Figure 3-15a). The persistence of vision refresh rate (>10 frames/sec) could be considered a lower bound for creating a convincing ‘backdrop’. The higher the resolution and refresh rate of the system, the more convincing this effect becomes: the user will not be able to perceive updates to the imagery, and at sufficient resolution will have difficulty distinguishing display image points from real-world image points.
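To put rough numbers on this, the short MATLAB sketch below estimates the dwell time available per image point and the implied mean particle speed. Only the >10 Hz persistence-of-vision floor comes from the text; the point count and point spacing are assumed example values.

% Minimal sketch (assumed numbers): point budget for persistence of vision.
points_per_image = 200;        % assumed number of points in one drawn image
refresh_rate     = 10;         % persistence-of-vision lower bound from the text, Hz
point_spacing    = 100e-6;     % assumed spacing between neighboring points, m
dwell = 1 / (points_per_image * refresh_rate);   % time available per point, s
speed = point_spacing / dwell;                   % implied mean particle speed, m/s
fprintf('dwell per point: %.0f us, implied speed: %.2f m/s\n', dwell*1e6, speed);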

3.3.2 Perspective Projection

One of the most general forms of perspective is ray tracing, in which the observer or camera is treated as a single point E = (x_0, y_0, z_0), the image point to be displayed is X = (x, y, z), and the plane on which to display is P. Finding the intersection of the line EX with the plane P gives the pixel coordinate of the point X. The perspective projection can be defined by the following matrix relationship for a plane P perpendicular to the line EO, where O is the origin:

S = \begin{bmatrix} -r^2 y_0 & r^2 x_0 & 0 & 0 \\ -r x_0 z_0 & -r y_0 z_0 & r\rho^2 & 0 \\ 0 & 0 & 0 & r\rho \\ -\rho x_0 & -\rho y_0 & -\rho z_0 & r^2 \rho \end{bmatrix} \qquad (3.1)

r = \left(x_0^2 + y_0^2 + z_0^2\right)^{1/2} \qquad (3.2)

\rho = \left(x_0^2 + y_0^2\right)^{1/2} \qquad (3.3)

The perspective projection matrix is designed to project a scene from space to the plane [9, 10]. This allows for the representation of 3D points using a 2D surface in a way that preserves all pictorial cues for a specific 3D observation point. For a video example at several depth planes, see the supplemental document and Visualization 1. By using a dynamic observation point, co-located with a real observer or a simulated observer such as a camera, together with an updating image plane, the pictorial and motion parallax cues of 3D image points can be reproduced.
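As a concrete illustration, the MATLAB sketch below applies the projection matrix of Eq. (3.1), as reconstructed above, to a single world point and checks the result against a direct line-plane intersection (the ray-tracing picture described at the start of this section). The observer position and test point are arbitrary assumed values, and the in-plane basis used for the comparison is one of several equivalent choices.

% Minimal sketch (assumed example values): apply the projection matrix of
% Eq. (3.1) to one world point and compare with a direct line-plane intersection.
E = [30; 40; 20];                  % assumed observer position (x0, y0, z0)
X = [ 2; -3;  5];                  % assumed 3D image point to project
x0 = E(1); y0 = E(2); z0 = E(3);
r   = norm(E);                     % Eq. (3.2)
rho = norm(E(1:2));                % Eq. (3.3)

S = [ -r^2*y0,    r^2*x0,    0,        0;
      -r*x0*z0,  -r*y0*z0,   r*rho^2,  0;
       0,         0,         0,        r*rho;
      -rho*x0,   -rho*y0,   -rho*z0,   r^2*rho ];

h  = S * [X; 1];                   % homogeneous image of X
uv = h(1:2) / h(4);                % 2D coordinates in the projection plane

% Check: intersect the line from E through X with the plane through the
% origin whose normal points along E, then express the hit point in an
% orthonormal in-plane basis.
n  = E / r;
t  = -dot(n, E) / dot(n, X - E);
P  = E + t * (X - E);              % intersection point in world coordinates
e1 = [-y0; x0; 0] / rho;           % in-plane basis vector 1
e2 = cross(n, e1);                 % in-plane basis vector 2
fprintf('matrix: (%.4f, %.4f)   ray trace: (%.4f, %.4f)\n', ...
        uv(1), uv(2), dot(e1, P), dot(e2, P));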

3.4 Experiment

To demonstrate simulated virtual images using modified parallax (perspective projection), we drew a flat (2D) OTD image of the moon at the back face of our drawing volume. This plane, in turn, sat at the front face of a 3D printed miniature of a house (see Figure 3-16b). A camera was placed on a rotating arm. The OTD image of the moon was drawn and redrawn at persistence of vision rates (12 frames per second). The OTD drawing function was modified by perspective projection in synchronization with movement of the camera arm. The speed of panning was approximately 0.0194 m/s. The camera and lens used were a Canon EOS 6D and a Canon MP-E 65mm f/2.8 1-5X macro lens, respectively. The camera was focused at the chimney of the house (approximately z = 2 mm). The radius of swing was 100 mm to the front face of the camera lens. The house had dimensions of 7.7 × 10.6 × 7.4 mm. The moon had a diameter of 0.5 mm (which varied during the experiment). The moon was updated at 12 frames per second. The draw volume measured 0.5 mm in y and 9.2 mm in x (the z dimension of the volume was not used).
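As a quick check on this geometry, the MATLAB lines below convert the quoted panning speed and swing radius into an angular rate, assuming the 0.0194 m/s figure is the tangential speed at the 100 mm radius.

% Minimal sketch: implied angular rate of the camera arm from the numbers above.
v = 0.0194;       % panning speed, m/s
R = 0.100;        % radius of swing, m
omega = v / R;    % angular rate of the camera arm, rad/s
fprintf('camera arm sweeps about %.1f deg/s\n', rad2deg(omega));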

Figure 3-16 Experiment setup. a. An OTD display projects a flat moon image at the back of a draw volume that contains a 3D printed house. The image is updated at persistence of vision frame rates (12 frames per second) using the perspective projection based on the expected camera location. b. A close-up of the house position, moon position, and perceived moon position in 3D space.

3.5 Results

In Figure 3-17a-c, the moon is drawn in a plane in front of the house (flush with the front face at z = 0 mm). The moon is not modified as the camera rotates, providing a control image. In Figure 3-17d-f, the moon is still drawn at z = 0, but it is shifted laterally as the camera rotates to give parallax consistent with an object perceived at z = 8 mm. In Figure 3-17g-i, the camera footage is superimposed over a Blender simulation (both with perspective projection activated). There is a bias due to imperfections in the setup, but the relative parallax agrees with simulation to within a 5.88% average error.

error = \frac{spc - epc}{spc} \times 100 \qquad (3.4)

where spc is one component of the simulation pixel coordinate vector and epc is the corresponding component of the experimental pixel coordinate vector. Taking the Euclidean average of the two error components gives an error of 5.88% in the image. In this experiment we increased the display space by 80%, to 1.8 cm in one dimension, compared to the 1 cm³ physical volume of the display.
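For clarity, the MATLAB sketch below evaluates Eq. (3.4) for a pair of hypothetical pixel coordinates (not the measured data) and then averages the two component errors, taking 'Euclidean average' to mean the root-mean-square of the two components.

% Minimal sketch with hypothetical values (not the measured data).
spc = [412, 305];                  % assumed simulation pixel coordinate (x, y)
epc = [395, 320];                  % assumed experimental pixel coordinate (x, y)
err = (spc - epc) ./ spc * 100;    % percent error per component, Eq. (3.4)
avg_err = norm(err) / sqrt(2);     % root-mean-square of the two components
fprintf('component errors: %.2f%% and %.2f%%, average %.2f%%\n', err(1), err(2), avg_err);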

Figure 3-17 Experiment results. a-c. Parallax for a particle at z = 0 (in front of the house). d-f. Simulation result, parallax for a particle at z = 0, with perspective projection. g-i. Experiment result, parallax for a particle at z = 0, with perspective projection. The parallax is consistent with a particle at z = 8 mm (behind the house). For the full video see the supplemental document and Visualization 2.

3.6 Analysis

The modified parallax does appear to create images perceived behind the drawing volume. Our calculated error supports the use of this method. The modified parallax, after accounting for bias, shows good agreement with simulation. This proof of concept shows the potential effectiveness of increasing the display space of the volumetric display beyond the physical boundaries of the display. The increase of display volume by 80% in one dimension demonstrated here can be extrapolated to infinity given an immersive display where the viewer is always looking through the display volume.

Limitations of this approach include (i) a lack of binocular disparity, (ii) the requirement of motion tracking of the viewer's eye position, and (iii) mismatch of accommodation/vergence and other visual cues. To the first limitation, this experiment was a monocular test. To be effective for normal-sighted human viewers, our approach must eventually be modified to also provide accurate binocular parallax. For binocular parallax to function, the OTD must be capable of controllable anisotropic scatter. To date we have demonstrated anisotropic scatter [7], and we have outlined two possible methods for exerting control over this directional scatter in the future [7, 11], which would allow each eye of the user to receive a different perspective based on its spatial location. With the possible future addition of directional output control, the method proposed here would become more effective without any additional changes.

The second limitation is that this method requires the viewer to be tracked (specifically the viewer's head); this is a significant encumbrance, as normal OTD real images require no knowledge of the user's position and still provide almost 4π steradians of view angle. However, once directional scatter has been achieved, tracking of the viewer could be omitted in at least two dimensions (horizontal and vertical): the angular outputs of the display would carry image points corresponding to the perspective from each position, updated regardless of viewer presence. The third dimension of the viewer position, the distance of the viewer from the display, would still be needed for ideal perspective reconstruction, as the perspective projection is based on a 3D observation point. Further pursuit of directional scattering control is thus capable of solving one major shortcoming of OTD technology at this time, reducing the complexity of the method presented here, and extending its usefulness to include independent virtual images for several viewers at once. The final limitation is the mismatch between the accommodative cue, which leads the user to focus at the projection plane, and the parallax cue, which leads the viewer to focus at the perceived point. This stereopsis/accommodation mismatch is common in other systems [12, 13], sometimes causing adverse side effects for users [14, 15]. To mitigate it, we must place the perspective projection plane at a distance where parallax is more dominant than accommodation. This requirement is in harmony with the theatrical backdrop approach proposed in this paper, especially given the relatively rapid drop-off of accommodation dominance with image distance [16].

We would argue that, these limitations notwithstanding, simulating virtual images with an OTD would be preferable to the use of a hybrid OTD/hologram system, which has been proposed [1]. Unlike OTDs, holograms are extremely computationally intensive, and their computational complexity scales rapidly with display size. The complexity also scales rapidly with point spread function. Neither is true for OTD displays. Consider a background of stars: regardless of the number of stars, a holographic display would require terabytes per second of data to provide the diffractive focusing power to render sharp star-like points, and the parallax and focus cues would be wasted given the extreme distance of the virtual points. By comparison, OTDs would only require a bandwidth proportional to the number of visible stars (1.8 Mb/s to represent the approximately 5000 visible stars)

\frac{data}{sec} = \text{image points} \times \text{bytes per point} \times \text{frame rate} \qquad (3.5)

and would provide pinpoint acuity. Combined with the advantages of a single homogeneous display technology, there is a strong motivation to pursue simulated virtual OTD images.
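The arithmetic behind this estimate can be written out explicitly. In the MATLAB lines below, the per-point byte count and the refresh rate are assumptions (the text does not state them), chosen so that Eq. (3.5) lands at roughly 1.8 M per second for about 5000 visible stars.

% Minimal sketch of Eq. (3.5); the 12 bytes per point and 30 Hz are assumptions.
image_points    = 5000;    % approximate number of naked-eye visible stars
bytes_per_point = 12;      % assumed payload per point (x, y, z as 4-byte floats)
frame_rate      = 30;      % assumed refresh rate, Hz
data_rate = image_points * bytes_per_point * frame_rate;   % bytes per second
fprintf('estimated star-field bandwidth: %.1f MB/s\n', data_rate / 1e6);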

3.7 Conclusion

We have demonstrated a display-level application for OTDs for the first time and established a proof of concept for simulating virtual images in optical trap displays. This result leads us to contemplate the possibility of immersive OTD environments that not only include real images capable of wrapping around physical objects (or the user themselves), but that also provide simulated virtual windows into expansive exterior spaces. See Fig. S1 in the supplemental document and Visualization 4. The next steps in this work should include cues beyond parallax, such as occlusion and defocus. This work also strongly motivates the need for controllable directional scatter in OTD systems.

Funding. National Science Foundation (NSF) (1846477)

Disclosures. The authors declare no conflicts of interest

See Supplemental Document, Visualization 1, Visualization 2, Visualization 3, Visualization 4 for supporting content.


3.8 References

1. Smalley, Daniel, et al. Optics and Photonics News 29.6 (2018)

2. Cossairt, Oliver S., et al. Applied optics 46.8 (2007)

3. Kakeya, Hideki, Shuta Ishizuka, and Yuya Sato. Optics express 22.20 (2014)

4. Yasui, Ryota, Isamu Matsuda, and Hideki Kakeya. Stereoscopic Displays and Virtual Reality Systems XIII. Vol. 6055. International Society for Optics and Photonics, 2006.

5. Favalora, Gregg E. Computer 38.8 (2005): 37-44.

6. Gong, Zhiyong, et al. Journal of Quantitative Spectroscopy and Radiative Transfer 214 (2018).

7. Rogers, Wesley, et al. Applied optics 58.34 (2019).

8. Smalley, D. E., et al. Nature 553.7689 (2018): 486-490.

9. CRC Standard Mathematical Tables and Formulas, by Daniel Zwillinger, CRC Press, 2018.

10. Scratchapixel. Scratchapixel.com, 15 Aug. 2014, www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/building-basic-perspective-projection-matrix.

11. Smalley, Daniel, and Kenny Squire. "Full-color freespace volumetric display with occlusion." U.S. Patent No. 10,129,517. 13 Nov. 2018.

12. Kramida, Gregory. IEEE transactions on visualization and computer graphics 22.7 (2015).

13. Kim, Donghyun, Sunghwan Choi, and Kwanghoon Sohn. IEEE transactions on circuits and systems for video technology 22.5 (2012).

14. Barrett, Judy. No. DSTO-TR-1419. Defence Science and Technology Organisation Canberra (Australia), 2004.

15. Kim, Joohwan, David Kane, and Martin S. Banks. Vision research 105 (2014): 159-165.

16. Cutting, J. E., & Vishton, P. M. (1995). In W. Epstein & S. J. Rogers (Eds.), Handbook of perception and cognition (2nd ed.). Perception of space and motion. San Diego, CA, US: Academic Press.


CHAPTER 4: CONCLUSION AND FUTURE WORK

4.1 Conclusion

This thesis demonstrates advancements in real-time animation, anisotropic scattering of point primitives, suspension of multiple particle primitives using a single trapping beam, and the simulation of virtual images. These advancements in OTD technology expand the visual cues that can be displayed and the practicality of OTD displays.

Real time animation in OTD displays advances the technology by enabling the display of video content. This allows OTDs to update in real time based on external variables such as a moving observer position. Updating the display in coordination with viewer position allows for additional control over visual cues; for example, this plays a direct role in the feasibility of the virtual image concept discussed in Chapter 3.

Anisotropic scattering, demonstrated in Chapter 2, shows the potential for controlled scattering in OTDs. The proof-of-concept evidence using OTD point primitives offers exciting potential for multi-viewer applications, such as two viewers looking into the same volume but seeing unique content based on their preferences (see Figure 17h-j). This advancement is also the foundation for future work on self-occluding images in OTDs.

The proof-of-concept evidence presented showing multiple suspended OTD point primitives using a single trapping and illumination source provides strong evidence of the potential scalability of OTD technology. The method adds minimal complexity while expanding the potential scale by a multiplicative factor, offering a potential solution to the scalability challenge common to many volumetric technologies.

Finally, the introduction of a method to simulate the visual representation of virtual images using real time animation of OTD display data advances the field of volumetric displays by decoupling display volume from perceived image size. Using the principles discussed in Chapter 3, the OTD could be used to display content perceived at immense distances or sizes previously impossible in OTD technology. However, there are still a number of improvements to be made for the future of the technology.

4.2 Future Work

The work described in this thesis should be seen only as a starting point for the continued improvement of the OTD technology. Improvements in four distinct areas stand out as critical to the long-term success of the OTD technology. The following paragraphs will address several directions of future advancement.

First, there are the trapping mechanics, which could be considered the heart of the OTD. While optical trapping, and specifically photophoretic optical trapping, has been explored by the community [18], the specific application to a display has pushed the technology to the edge of current understanding. In particular, high acceleration and translational speed in free space have not been well explored and have the potential to greatly improve the performance of the OTD display. High-interest improvements for the future of the OTD include: reductions in the minimum power required, dynamic reconfiguration of traps to compensate for external factors such as wind, and independently controlled multi-trap systems.

Second, there are improvements in the directional scattering or emission capabilities of the OTD. In the analysis section of Chapter 3, we briefly describe the advantages made possible by directional control of OTD output. The addition of directional control opens up four critical advancements for the OTD: a unique multi-viewer experience, self-occluding imagery, reduction in tracking complexity for simulated virtual images, and the possibility of virtual images based on backcast intersecting rays. Each of these is a major advancement in OTD technology. These methods will allow the images of the OTD to be self-occluding but will not allow for occlusion of real-world objects. Other volumetric displays, such as layered liquid crystal displays, have demonstrated control over environmental occlusion, but they work on fundamentally different mechanisms than OTD displays. The path forward for OTD environmental occlusion seems to be obtaining sufficient point density that the particles begin both to physically occlude the surroundings and to drown out the signal of the surrounding environment with greater time-averaged brightness.

Third, the path optimization for the particle when drawing vectored images could be further improved. Many commercial products and systems control three or more axes for use in additive or subtractive manufacturing, and extensive resources have gone into path-planning optimizations that reduce operation times. These techniques could be adapted to improve the pathing of the particle as it draws the image points. This would make higher refresh rates possible (refresh rate is inversely linked to the time needed to draw a single frame) and would enable optimizations in the particle path such as smoothing of sharp corners and other high-acceleration features; a simple ordering heuristic of this kind is sketched below.
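As one illustration (not a method used in this thesis), the MATLAB sketch below orders a set of assumed random image points with a greedy nearest-neighbour heuristic and compares the resulting path length with the unordered path; shorter paths translate directly into shorter frame times.

% Minimal sketch (illustrative only): greedy nearest-neighbour ordering of
% image points to shorten the particle path, and hence the frame time.
pts = rand(50, 3);                 % assumed random 3D image points
N = size(pts, 1);
order = zeros(1, N);               % visiting order of the points
visited = false(1, N);
order(1) = 1;
visited(1) = true;
for k = 2:N
    d = sum((pts - pts(order(k-1), :)).^2, 2);   % squared distance to every point
    d(visited) = inf;                            % never revisit a point
    [~, order(k)] = min(d);
    visited(order(k)) = true;
end
greedy_len = sum(vecnorm(diff(pts(order, :)), 2, 2));
raw_len    = sum(vecnorm(diff(pts), 2, 2));
fprintf('path length: %.2f (greedy order) vs %.2f (raw order)\n', greedy_len, raw_len);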

Fourth, the field of acoustics has continued to develop in recent years, including acoustic levitation. Several groups over the last few years have demonstrated acoustically levitated particles functioning as a form of volumetric display [19, 20, 21, 22]. Photophoretic trapping depends on the movement of molecules surrounding the suspended object to create the forces needed to maintain positional control. The pressure of the fluid (air in our discussion) is a factor in the behavior of these interactions and has been shown to reduce the required power for stable trapping [23]. The use of acoustic fields to create large pressure differentials in free space has been shown by acoustic levitation and other technologies [24]. The combination of this technology with the OTD could help make the display more stable, reduce the required power (which in turn reduces potential size, cost, and safety concerns), and add the advantages of tactile haptic feedback, potentially without disrupting the trapped particle.

I now extend the challenge to those who come after me to continue the improvement of the OTD and its surrounding systems until it becomes capable of moving out of the research environment and into the world. Physical form factors can be reduced and moved to more solid-state solutions. Different trapping mechanics and methods can be explored. The applications of the OTD can be expanded. Pick a challenge and overcome it.


Bibliography

References Included in Chapters 1 and 4:

[1] Blundell, Barry G. "On the uncertain future of the volumetric 3D display paradigm." 3D Research 8.2 (2017): 11. [2] D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, and K. Costner, “A photophoretic-trap volumetric display,” Nature 553, 486–490 (2018). [3] Smalley, Daniel, et al. "Volumetric displays: Turning 3-D inside-out." Optics and Photonics News 29.6 (2018): 26-33. [4] Langhans, Knut, et al. "Solid Felix: a static volume 3D-laser display." Stereoscopic Displays and Virtual Reality Systems X. Vol. 5006. International Society for Optics and Photonics, 2003. [5] Sullivan, Alan. "DepthCube solid-state 3D volumetric display." Stereoscopic displays and virtual reality systems XI. Vol. 5291. International Society for Optics and Photonics, 2004. [6] Kumagai, Kota, Satoshi Hasegawa, and Yoshio Hayasaki. "Volumetric bubble display." Optica 4.3 (2017): 298-302. [7] Chekhovskiy, Aleksandr, and Hiroshi Toshiyoshi. "The use of laser burst for volumetric displaying inside transparent liquid." Japanese journal of applied physics 47.8S1 (2008): 6790. [8] O.S. Cossairt et al. “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46, 1244 (2007). [9] T. Yendo et al. “The Seelinder: Cylindrical 3-D display viewable from 360 degrees,” J. Vis. Commun. Img. Rep. 21, 586 (2010). [10] Ochiai, Yoichi, et al. "Fairy lights in femtoseconds: aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields." ACM Transactions on Graphics (TOG) 35.2 (2016): 1-14. [11] Ruiz-Avila, J. Holovect : Holographic Vector Display. Kickstarter https://www.kickstarter.com/projects/2029950924/holovect-holographic-vector-display (2016) [12] Perlin, Kenneth. "Volumetric display with dust as the participating medium." U.S. Patent No. 6,997,558. 14 Feb. 2006. [13] Lam, Miu-Ling, Bin Chen, and Yaozhung Huang. "A novel volumetric display using fog emitter matrix." 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015.

49 [14] Sand, Antti, and Ismo Rakkolainen. "A hand-held immaterial volumetric display." Stereoscopic Displays and Applications XXV. Vol. 9011. International Society for Optics and Photonics, 2014. [15] Gneiting, Scott Alexander. "Improved leaky-mode waveguide spatial light modulators for three dimensional displays." (2017). [16] Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. J. Rogers (Eds.), Handbook of perception and cognition (2nd ed.). Perception of space and motion (pp. 69-117). San Diego, CA, US: Academic Press. [17] Desyatnikov, Anton S., et al. "Photophoretic manipulation of absorbing aerosol particles with vortex beams: theory versus experiment." Optics Express 17.10 (2009): 8201-8211. [18] Gong, Zhiyong, et al. "Optical trapping and manipulation of single particles in air: Principles, technical details, and applications." Journal of Quantitative Spectroscopy and Radiative Transfer 214 (2018): 94-119. [19] Ochiai, Yoichi, Takayuki Hoshi, and Jun Rekimoto. "Three-dimensional mid-air acoustic manipulation by ultrasonic phased arrays." PloS one 9.5 (2014): e97590. [20] Y. Uno, H. Qiu, T. Sai, S. Iguchi, Y. Mizutani, T. Hoshi, Y. Kawahara, Y. Kakehi, and M. Takamiya, “Luciola: a millimeter-scale light-emitting particle moving in mid-air based on acoustic levitation and wire- less powering,” in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2018), Vol. 1, p. 166. [21] Hirayama, Ryuji, et al. "A volumetric display for visual, tactile and audio presentation using acoustic trapping." Nature 575.7782 (2019): 320-323. [22] Sahoo, Deepak Ranjan, et al. "Joled: A mid-air display based on electrostatic rotation of levitated janus objects." Proceedings of the 29th Annual Symposium on User Interface Software and Technology. 2016. [23] Eckerskorn, Niko, et al. "Optically induced forces imposed in an optical funnel on a stream of particles in air or vacuum." Physical Review Applied 4.6 (2015): 064001. [24] Gan, Woon-Seng, Jun Yang, and Tomoo Kamakura. "A review of parametric acoustic array in air." Applied Acoustics 73.12 (2012): 1211-1219.

References Included in Chapter 2:

[1] D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, and K. Costner, “A photophoretic-trap volumetric display,” Nature 553, 486–490 (2018). [2] D. Smalley, T. C. Poon, H. Gao, J. Kvavle, and K. Qaderi, “Volumetric displays: turning 3-D inside-out,” Opt. Photonics News 29(6), 26–33 (2018).

50 [3] L. T. Sharpe, A. Stockman, W. Jagla, and H. Jägle, “A luminous effi- ciency function, V*(λ), for daylight adaptation,” J. Vis. 5(11), 962–965 (2005). [4] A. Turpin, V. Shvedov, C. Hnatovsky, Y. V. Loiko, J. Mompart, and W. Krolikowski, “Optical vault: a reconfigurable bottle beam based on conical refraction of light,” Opt. Express 21, 26335–26340 (2013). [5] J. Peatross, D. Smalley, W. Rogers, E. Nygaard, E. Laughlin, K. Qaderi, and L. Howe, “Volumetric display by movement of particles trapped in a laser via photophoresis,” Proc. SPIE 10723, 1072302 (2018). [6] H. He, N. R. Heckenberg, and H. Rubinsztein-Dunlop, “Optical par- ticle trapping with higher-order doughnut beams produced using high efficiency computer generated holograms,” J. Mod. Opt. 42, 217–223 (1995). [7] P. Zhang, Z. Zhang, J. Prakash, S. Huang, D. Hernandez, M. Salazar, D. N. Christodoulides, and Z. Chen, “Trapping and transporting aerosols with a single optical bottle beam generated by moiré techniques,” Opt. Lett. 36, 1491–1493 (2011). [8] P. Xu, X. He, J. Wang, and M. Zhan, “Trapping a single atom in a blue detuned optical bottle beam trap,” Opt. Lett. 35, 2164–2166 (2010). [9] M. Woerdemann, C. Alpmann, M. Esseling, and C. Denz, “Advanced optical trapping by complex beam shaping,” Laser Photonics Rev. 7, 839–854 (2013). [10] Z. Gong, Y. L. Pan, G. Videen, and C. Wang, “Optical trapping and manipulation of single particles in air: principles, technical details, and applications,” J. Quant. Spectrosc. Radiat.Transfer 214, 94–119 (2018). [11] V. G. Shvedov, A. V. Rode, Y. V. Izdebskaya, A. S. Desyatnikov, W. Krolikowski, and Y. S. Kivshar, “Giant optical manipulation,” Phys. Rev. Lett. 105, 118103 (2010). [12] S. K. Bera, A. Kumar, S. Sil, T. K. Saha, T. Saha, and A. Banerjee, “Simultaneous measurement of mass and rotation of trapped absorbing particles in air,” Opt. Lett. 41, 4356– 4359 (2016). [13] THORLABS, “Large beam diameter scanning galvo systems,” 2019, https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=6 057. [14] V. G. Shvedov, C. Hnatovsky, N. Shostka, A. V. Rode, and W. Krolikowski, “Optical manipulation of particle ensembles in air,” Opt. Lett. 37, 1934–1936 (2012). [15] F. Liu, Z. Zhang, S. Fu, Y. Wei, T. Cheng, Q. Zhang, and X. Wu, “Manipulation of aerosols revolving in taper-ring optical traps,” Opt. Lett. 39, 100–103 (2014). [16] Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35, 17 (2016).

51 [17] K. Kumagai, S. Hasegawa, and Y. Hayasaki, “Volumetric bubble display,” Optica 4, 298– 302 (2017). [18] V. G. Shvedov, A. V. Rode, Y. V. Izdebskaya, A. S. Desyatnikov, W. Krolikowski, and Y. S. Kivshar, “Selective trapping of multiple particles by volume speckle field,” Opt. Express 18, 3137–3142 (2010). [19] F. Liu, Z. Zhang, Y. Wei, Q. Zhang, T. Cheng, and X. Wu, “Photophoretic trapping of multiple particles in tapered-ring optical field,” Opt. Express 22, 23716–23723 (2014). [20] G. E. Favalora, J. Napoli, D. M. Hall, R. K. Dorval, M. Giovinco, M. J. Richmond, and W. S. Chun, “100-million-voxel volumetric display,” Proc. SPIE 4712, 300–312 (2002). [21] S. McLaughlin, C. Leach, S. Gneiting, V. M. Bove, S. Jolly, and D. E. Smalley, “Progress on waveguide-based holographic video,” Chin. Opt. Lett. 14, 010003 (2016). [22] A. Henrie, J. R. Codling, S. Gneiting, J. B. Christensen, P. Awerkamp, M. J. Burdette, and D. E. Smalley, “Hardware and software improve- ments to a low-cost horizontal parallax holographic video monitor,” Appl. Opt. 57, A122–A133 (2018). [23] N.A.D.C.A. Association “Why clean air ducts?” https://nadca.com/homeowners/why-clean- air-ducts. [24] “Safety of laser products—Part 1: Equipment classification and requirements,” IEC 60825-1 (2014). [25] O. S. Cossairt, J. Napoli, S. L. Hill, R. K. Dorval, and G. E. Favalora, “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46, 1244–1250 (2007). [26] A. Shiraki, M. Ikeda, H. Nakayama, R. Hirayama, T. Kakue, T. Shimobaba, and T. Ito, “Efficient method for fabricating a direc- tional volumetric display using strings displaying multiple images,” Appl. Opt. 57, A33–A38 (2018). [27] D. Smalley, E. Nygaard, W. Rogers, and K. Qaderi, “Progress on photophoretic trap displays,” in Frontiers in Optics (Optical Society of America, 2018), paper FM4C-2. [28] A. Hendrickson, “Radiometric levitation of opaque particles in a laser beam,” https://www.physics.byu.edu/thesis/archive/2005. [29] N. Eckerskorn, R. Bowman, R. A. Kirian, S. Awel, M. Wiedorn, J. Küpper, M. J. Padgett, H. N. Chapman, and A. V. Rode, “Optically induced forces imposed in an optical funnel on a stream of particles in air or vacuum,” Phys. Rev. Appl. 4, 064001 (2015). [30] O. Jovanovic, “Photophoresis—Light induced motion of parti- cles suspended in gas,” J. Quant. Spectrosc. Radiat. Transfer 110, 889–901 (2009). [31] J. Lin and Y. Q. Li, “Optical trapping and rotation of airborne absorb- ing particles with a single focused laser beam,” Appl. Phys. Lett. 104, 101909 (2014). [32] B. Redding, S. C. Hill, D. Alexson, C. Wang, and Y. L. Pan, “Photophoretic trapping of airborne particles using ultraviolet illumination,” Opt. Express 23, 3630–3639 (2015).

52 [33] Y. Uno, H. Qiu, T. Sai, S. Iguchi, Y. Mizutani, T. Hoshi, Y. Kawahara, Y. Kakehi, and M. Takamiya, “Luciola: a millimeter-scale light-emitting particle moving in mid-air based on acoustic levitation and wire- less powering,” in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2018), Vol. 1, p. 166. [34] P. Blanche, Brigham Young University, Clyde Engineering Building Campus Dr, Provo, UT 84604 (personal communication, 2017). [35] SpaceX, “Starlink mission: mission overview,” 2019, https://www. spacex.com/sites/spacex/files/starlink_mission_press_kit.pdf. [36] J. O’Callaghan, “SpaceX’s Starlink could cause cascades of space junk,” 2019, https://www.scientificamerican.com/article/spacexs- starlink-could-cause-cascades-of-space- junk/. [37] S. Tesh, “The politics of stress: the case of air traffic control,” Int. J. Health Serv. 14, 569– 587 (1984). [38] J. A. Heit, M. D. Silverstein, D. N. Mohr, T. M. Petterson, W. M. O’Fallon, and L. J. Melton, “Risk factors for deep vein thrombosis and pulmonary embolism: a population-based case-control study,” Arch. Intern. Med. 160, 809–815 (2000). References included in Chapter 3: [1] Smalley, Daniel, et al. "Volumetric displays: Turning 3-D inside-out." Optics and Photonics News 29.6 (2018): 26-33. [2] Cossairt, Oliver S., et al. "Occlusion-capable multiview volumetric three-dimensional display." Applied optics 46.8 (2007): 1244-1250. [3] Kakeya, Hideki, Shuta Ishizuka, and Yuya Sato. "Realization of an aerial 3D image that occludes the background scenery." Optics express 22.20 (2014): 24491-24496. [4] Yasui, Ryota, Isamu Matsuda, and Hideki Kakeya. "Combining volumetric edge display and multiview display for expression of natural 3D images." Stereoscopic Displays and Virtual Reality Systems XIII. Vol. 6055. International Society for Optics and Photonics, 2006. [5] Favalora, Gregg E. "Volumetric 3D displays and application infrastructure." Computer 38.8 (2005): 37-44. [6] Gong, Zhiyong, et al. "Optical trapping and manipulation of single particles in air: Principles, technical details, and applications." Journal of Quantitative Spectroscopy and Radiative Transfer 214 (2018): 94-119. [7] Rogers, Wesley, et al. "Improving photophoretic trap volumetric displays." Applied optics 58.34 (2019): G363-G369. [8] Smalley, D. E., et al. "A photophoretic-trap volumetric display." Nature 553.7689 (2018): 486-490.

53 [9] “4.15.3.” CRC Standard Mathematical Tables and Formulas, by Daniel Zwillinger, CRC Press, 2018, pp. 256–257. [10] Scratchapixel. “The Perspective and Orthographic Projection Matrix (Building a Basic Perspective Projection Matrix).” Scratchapixel.com, 15 Aug. 2014, www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection- matrix/building-basic-perspective-projection-matrix. [11] Smalley, Daniel, and Kenny Squire. "Full-color freespace volumetric display with occlusion." U.S. Patent No. 10,129,517. 13 Nov. 2018. [12] Kramida, Gregory. "Resolving the vergence-accommodation conflict in head-mounted displays." IEEE transactions on visualization and computer graphics 22.7 (2015): 1912-1931. [13] Kim, Donghyun, Sunghwan Choi, and Kwanghoon Sohn. "Effect of vergence– accommodation conflict and parallax difference on binocular fusion for random dot stereogram." IEEE transactions on circuits and systems for video technology 22.5 (2012): 811- 816. [14] Barrett, Judy. Side effects of virtual environments: A review of the literature. No. DSTO- TR-1419. Defence Science and Technology Organisation Canberra (Australia), 2004. [15] Kim, Joohwan, David Kane, and Martin S. Banks. "The rate of change of vergence– accommodation conflict affects visual discomfort." Vision research 105 (2014): 159-165. [16] Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. J. Rogers (Eds.), Handbook of perception and cognition (2nd ed.). Perception of space and motion (pp. 69-117). San Diego, CA, US: Academic Press.


Appendix A: Generating Perspective Projection in MATLAB.

clear all; close all;

%% How to use this program

%======

% This program is part of a work by Wesley Rogers and Daniel Smalley titled

% Simulating virtual images in Optical Trap Displays.

% This program was used to generate image points using a perspective

% transform to modify motion parallax in conjunction with an observers

% location relative to an optical trap display. For further explanation

% see full article.

% User Input

% Note: All Distances units must be consistant and were in mm for the

% example code shown here.

% X_Center, Y_Center, Z_Center defines the coordinates of the scene

% Vertical_offset_observer defines the vertical difference between the

% observer eye (or camera sensor) and the Y = 0 plane.

% stlRead allows the user to input an stl file to be displayed with the

% generated image points to simulate additonal items in the scene.

% scale_stl is a scale factor applied to the STL file when displayed in

% the scene.

% rotation_stl is a rotation offset applied to the STL file when displayed

% in the scene.

% shift is a translation offset applied to the STL file when displayed in

% the scene.

% enable_plot allows the user to enable or disable the plotting function,

% disabling the plotting will typically cause the program to execute

% much faster.

% use_projection_as_object is an option that allows the user to take a

% perspective transform of the orignal data from the 0 deg position

% and then use the output of this transform as the 'original data'.

% This was added to allow for the motion parallax to be easily

% isolated from scale as the original object size would become

% irrelevant as the new projected object would match the scale of

% the other projected views.

% fps allows the user to set the frames per second of the output videos

% saved. Note: video is only saved if plotting is enabled.

% background allows the user to control the color of the scene background

% grid_color allows the user to control the color of the scene grid

% enable_plot_pivot_point toggles on or off the point plotted at the pivot

% location.

% pivot_point_coord sets the location of the "observer" pivot. Note: the

% perspective transform does not require a fixed distance to operate

% correctly but this was needed to maintain focus in our real world

% experiments since we did not have the ability to dynamically change

% the focus of the camera in the experiment.

% pivot_color sets the color of the pivot point plotted

% observation_color sets the color of the observation point plotted

% projection_plane_depth sets the distance that the plane to be projected

% onto is from the pivot point.

% offset_real_object sets the distance offset for the object from the pivot

% point. Note: This can have any value but the object may not be

% visible in the projection to the plane depending on this value.

% Effectively the object can be moved out of frame making it no

% longer visible.

% coord allows the user to select from multiple object data sets however

% this functionallity was removed to provide a more streamlined

% experience.

%======

%% World Coordinates

X_Center = 0;

Y_Center = 0;

Z_Center = 0;

Vertical_offset_observer = 29;

[vertices,faces,normals,name] = stlRead('house_v1.stl'); scale_stl = 1.5; rotation_stl = 160; shift = [0,0,-13];

%% Simulation for Virtual Images in Volumetric Displays

%======

% This section contains some of the basic options that can be changed to

% generate different behavior from this program.

% These include:

% Display plots

% Video file frame rate

% Color of various objects in the scenes

% Pivot point for the moving camera

% Distances of different objects such as the projection plane

%======

enable_plot = 0;
use_projection_as_object = 1; %1 = use the projection 0 = use normal
fps = 24;
background = [0.0 0.0 0.0];
grid_color = [1 1 1];
enable_plot_pivot_point = 1;
pivot_point_coord = [X_Center,Y_Center, Z_Center];
pivot_color = 'w';
observation_color = 'r';

disp(' ')
projection_plane_depth = 0;
if isempty(projection_plane_depth)
    projection_plane_depth = 0;
    disp('Defualt 0');
else
    projection_plane_depth = -1 * projection_plane_depth;
end
disp(' ')
offset_real_object = input...
    ('Enter distance of real object from pivot point: ');
defualt_value_offset_real_object = 8;
if isempty(offset_real_object)
    offset_real_object = defualt_value_offset_real_object;
    disp('Defualt 8');
end


disp(' ');
disp('Shapes available: ');
disp('5: Crescent Moon')
coord = 5; %input('Enter a number: ');
if isempty(coord)
    coord = 3;
    disp('Defualt Cube');
    shape = "Cube_";
end

switch coord

%% Coordinates of crescent moon

%======

% This section generates the 3D data of a crescent moon

%======

case 5

shape = "Crescent_Moon_";

moon_vertical_offset = 0;

if isempty(moon_vertical_offset)

moon_vertical_offset = 0;

disp('Defualt 0');

end

distance_to_moon_real_world = 8;

radius_of_moon_real_world = 1.5;

if offset_real_object == defualt_value_offset_real_object

offset_real_object = distance_to_moon_real_world;

text = sprintf('Moon defualt distance: %d',...

distance_to_moon_real_world);

disp(text)

moon_scale = radius_of_moon_real_world;

text = sprintf('Moon defualt size: %d',...

radius_of_moon_real_world);

disp(text)

else

moon_scale = 0.5;

end

number_of_points = input('number of points in moon: ');

if isempty(number_of_points)

number_of_points = 26;

disp('Defualt 26');

end

disp(' ')

number_of_points = number_of_points + 1; % add one because so it will close the loop

% Y_bad = offset_real_object : 2*pi/number_of_points : offset_real_object+1; % start : density of points : end

% Y = linspace(offset_real_object,offset_real_object,number_of_points); % start, end, number of points

% t = linspace(0,2*pi,number_of_points);

% Z = moon_scale*cos(t); %size vertical

% X = moon_scale*sin(t); %size horizontal

fade1 = linspace(1,.85,round(number_of_points/4));

fade2 = linspace(.85,1,round(number_of_points/4));

fade = [fade1,fade2];

ONES = ones(1,number_of_points);

t1 = linspace(1*pi/3 + pi/12, -2*pi/3 + pi/12, length(fade));

t2 = linspace(-2*pi/3 + pi/12, 1*pi/3 + pi/12, length(fade));

X = [cos(t1) * moon_scale, (fade) .* cos(t2) * moon_scale]...

+ X_Center;

Z = [sin(t1) * moon_scale, fade .* sin(t2) * moon_scale]...

+ moon_vertical_offset + Z_Center;

% start, end, number of points

Y = linspace(offset_real_object,offset_real_object,...

length(fade)*2);

    otherwise
end

R = linspace(1,1,length(Y));

G = linspace(1,1,length(Y));

B = linspace(0,0,length(Y));

%% Arc the viewer is translating along

%======

% This section generates the positions along an arc path for the observer

% to view from. These points are arbitrary and an arc was chosen to allow

% the easiest camera setup as the scene would remain at a constant distance

% to the camera.

%======

disp(' '); %skip a line
number_of_observation_points = input('Number of observation points: ');
if isempty(number_of_observation_points)

number_of_observation_points = 15;

    disp('Defualt 15');
end
disp(' ')
starting_angle_deg = input('Starting angle: ');
if isempty(starting_angle_deg)

starting_angle_deg = -54; % degrees

starting_angle = deg2rad(starting_angle_deg);

text = sprintf('Defualt %10.2f degrees',starting_angle_deg);

disp(text);

end
disp(' ')
ending_angle_deg = input('Ending angle: ');
if isempty(ending_angle_deg)

ending_angle_deg = -142;

ending_angle = deg2rad(ending_angle_deg);

text = sprintf('Defualt %10.2f degrees',ending_angle_deg);

disp(text);

end
disp(' ')
theta = linspace(starting_angle, ending_angle, ...

number_of_observation_points);

% this will determine the size of the projected image by affecting how

% close we are to the projection plane (closer up the projection will be

% smaller to appear the right size) since it is the angle that we shift

% that the moon is responding to not the arc length
radius_of_observation = input('Enter observer distance: ');
if isempty(radius_of_observation)

    radius_of_observation = 86;

text = sprintf('Defualt to: %d', radius_of_observation);

    disp(text)
end
disp(' ')
x22_1 = (radius_of_observation * cos(theta)) + X_Center;
y22_1 = radius_of_observation * sin(theta);
z22_1 = ones(1,length(x22_1))*1 + Z_Center + Vertical_offset_observer;
start_point_x = x22_1(1);
start_point_y = y22_1(1);

% viewing points in cartesian coordinates
ob_pt_set_arc = [x22_1; y22_1; z22_1];

% viewing points in spherical coordinates

[az_1,el_1,radius_computed_1] = cart2sph(ob_pt_set_arc(1,:),...

ob_pt_set_arc(2,:),ob_pt_set_arc(3,:)); az_1 = rad2deg(az_1); el_1 = rad2deg(el_1);

disp('Shape of the observation path: '); disp('1: arc motion'); disp('2: linear motion'); disp('3: linear motion at 45deg'); n = input('Enter a number: '); if isempty(n)

n = 1;

    disp('Defualt arc motion');
end
disp(' ')

%% Generate a projection at normal incidence

%======

% This section generate a projection at normal incidence to match the scale

% but with different levels of parallax (isolate the motion parallax

% parameter)

%======
projected_original = zeros(3,1);
projected_original_R = projected_original;
projected_original(:,1) = []; % remove zeros added at start
projected_original_R(:,1) = []; % remove zeros added at start
projected_original_G = projected_original_R;
projected_original_B = projected_original_R;
ob_pt = [radius_of_observation * cos(-(2*pi)/4), radius_of_observation...
    * sin(-(2*pi)/4),0];

for inc = 1:length(Z)

x_to_project = X(inc);

y_to_project = Y(inc);

z_to_project = Z(inc);

R_to_project = R(inc);

G_to_project = G(inc);

B_to_project = B(inc);

% observation point

point_of_observation = ob_pt;

% point on 3d model object

point_on_object = [x_to_project,y_to_project,z_to_project];

% plane to project to (location of particle in physical space)

plane_to_project_to = [0, projection_plane_depth-.001 ,0];

% normal vector of plane

normal_vector_of_plane_to_project_to = [0,1,0];

[point_of_intersection,feedback_on_intersection] = ...

plane_line_intersect(normal_vector_of_plane_to_project_to,...

plane_to_project_to,point_of_observation,point_on_object);

projected_original =horzcat(projected_original,point_of_intersection');

projected_original_R = horzcat(projected_original_R,R_to_project);

projected_original_G = horzcat(projected_original_G,G_to_project);

projected_original_B = horzcat(projected_original_B,B_to_project); end

switch n

%% Projection: Viewer translation in arc

    case 1
        projected_points_1 = zeros(3,1);
        projected_R = projected_points_1;
        projected_points_1(:,1) = []; % remove zeros added at start
        projected_R(:,1) = []; % remove zeros added at start
        projected_G = projected_R;
        projected_B = projected_R;

for inc1 = 1:length(ob_pt_set_arc)

ob_pt = [ob_pt_set_arc(1,inc1),ob_pt_set_arc(2,inc1),...

ob_pt_set_arc(3,inc1)]; %change the observation point

for inc = 1:length(Z)

if(use_projection_as_object)

x_to_project = projected_original(1,inc);

y_to_project = projected_original(2,inc);

z_to_project = projected_original(3,inc);

R_to_project = projected_original_R(inc);

G_to_project = projected_original_G(inc);

B_to_project = projected_original_B(inc);

disp('using projection as original object!!')
else

x_to_project = X(inc);

y_to_project = Y(inc);

z_to_project = Z(inc);

R_to_project = R(inc);

G_to_project = G(inc);

B_to_project = B(inc); end

% observation point
point_of_observation = ob_pt;
% point on 3d model object
point_on_object = [x_to_project,y_to_project,z_to_project];
% plane to project to (location of particle in physical space)
plane_to_project_to = [0, projection_plane_depth ,0];
% normal vector of plane
normal_vector_of_plane_to_project_to = [0,1,0];

[point_of_intersection,feedback_on_intersection] = ...

plane_line_intersect(normal_vector_of_plane_to_project_to,...

plane_to_project_to,point_of_observation,point_on_object); projected_points_1 = horzcat(projected_points_1,...

point_of_intersection');

projected_R = horzcat(projected_R,R_to_project);

projected_G = horzcat(projected_G,G_to_project);

projected_B = horzcat(projected_B,B_to_project);

    end
end

otherwise

        disp('')
end

%% Create Video Object Perspective

%======

% This section creates a video file to save the output of

% the perspective camera

%======

% view figure

% Create video object

if enable_plot > 0

if(use_projection_as_object)

video_name = strcat('proj2',shape,num2str((length(Y))),...

'_points_',num2str((number_of_observation_points)),...

'_positions_',num2str(offset_real_object),'_fps',...

num2str(fps),'black',num2str(radius_of_observation),...

'obser','.avi');

else

video_name = strcat(shape,num2str((length(Y))),'_points_',...

num2str((number_of_observation_points)),'_positions_',...

num2str(offset_real_object),'_fps',num2str(fps),...

'black',num2str(radius_of_observation),'obser','.avi');

end

video_name = char(video_name); %needs to be a char not a string

writerObj = VideoWriter(video_name);

writerObj.FrameRate = fps; % set framerate

open(writerObj);% open the video writer

%% Create Video Object Gods Eye Perspective

%======

% This section creates a video file to save the output of the fixed camera

% view figure

%======

if(use_projection_as_object)

video_name_GE = strcat('proj2',shape,num2str((length(Y))),...

'_points_',num2str((number_of_observation_points)),...

'_positions_',num2str(offset_real_object),'_fps',...

num2str(fps),'black',num2str(radius_of_observation),...

'obser_GE','.avi');

else

video_name_GE = strcat(shape,num2str((length(Y))),...

'_points_',num2str((number_of_observation_points)),...

'_positions_',num2str(offset_real_object),'_fps',...

num2str(fps),'black',num2str(radius_of_observation),...

'obser_GE','.avi');

end

%needs to be a char not a string

video_name_GE = char(video_name_GE);

writerObj_GE = VideoWriter(video_name_GE);

writerObj_GE.FrameRate = fps; % set framerate

open(writerObj_GE);% open the video writer

end

%% Plotting perspective

%======

% This section generates the figure that shows a perspective view of the

% scene from the observation point used in the perspective transform. From

% this view the scene will appear with the intended motion parallax.

%======
if enable_plot > 0

fig1 = figure(1);

fig2 = figure(2);

figure(1);

set(gca, 'Clipping', 'off');

pause_time = 0.0001;

cam_VA = 70;

if enable_plot_pivot_point == 1

%plot pivot point

plot3([pivot_point_coord(1), pivot_point_coord(1)],...

[pivot_point_coord(2),pivot_point_coord(2)],...

[pivot_point_coord(3),pivot_point_coord(3)],...

'-o','MarkerSize',5,'Color',pivot_color)

hold on

end

switch n

case 1

disp('ARC MOTION')

%Go through each observation point

for index1 = 1:length(ob_pt_set_arc)

if enable_plot_pivot_point == 1

%plot pivot point

plot3([pivot_point_coord(1),...

pivot_point_coord(1)],[pivot_point_coord(2)...

,pivot_point_coord(2)],...

[pivot_point_coord(3),pivot_point_coord(3)]...

,'-o','MarkerSize',5,'Color',pivot_color)

hold on

end

% set(gca, 'Projection','perspective');

hold on;

offset_thing = 10000;

scale_axis = 10;

%force the plot size

plot3([X_Center+offset_thing,...

X_Center+offset_thing],[0,0],...

[Z_Center,Z_Center],'-o','MarkerSize',1,...

'Color','k')

plot3([X_Center-offset_thing,...

X_Center-offset_thing],[0,0],...

[Z_Center,Z_Center],'-o','MarkerSize',1,...

'Color','k')

plot3([X_Center, X_Center],...

[0+offset_thing,0+offset_thing],...

[Z_Center,Z_Center],'-o','MarkerSize',1,...

'Color','k')

plot3([X_Center, X_Center],...

70 [0-offset_thing,0-offset_thing],...

[Z_Center,Z_Center],'-o','MarkerSize',1,...

'Color','k') plot3([X_Center, X_Center],[0,0],...

[Z_Center+offset_thing,Z_Center+offset_thing],...

'-o','MarkerSize',1,'Color','k') plot3([X_Center, X_Center],[0,0],...

[Z_Center-offset_thing,Z_Center-offset_thing],...

'-o','MarkerSize',1,'Color','k')

set(gca,'GridColor',grid_color)

if(use_projection_as_object)

% don't draw original shape else

% draw original shape

for index = 1:length(Z)-1

plot3(X(index:index+1),Y(index:index+1),...

Z(index:index+1),'color',...

[R(index) G(index) B(index)],'lineWidth',2)

end end index2 = length(Z)*(index1)-1;

%Plot projected object while index2 > 1+length(Z)*(index1-1)

plot3(projected_points_1(1,index2:index2+1),...

projected_points_1(2,index2:index2+1),...

projected_points_1(3,index2:index2+1),...

'color',[projected_R(index2) ...

71 projected_G(index2) projected_B(index2)],...

'lineWidth',0.5)

hold on

index2 = index2 - 1;

end

disp('OBSERVE ARC MOTION')

scatter3(ob_pt_set_arc(1,index1),...

ob_pt_set_arc(2,index1),ob_pt_set_arc(3,index1),...

15,'MarkerEdgeColor','g','MarkerFaceColor',...

[0 0 index1/length(ob_pt_set_arc)])

set(gca,'CameraViewAngleMode','Manual')

camva(cam_VA)

campos([ob_pt_set_arc(1,index1),...

ob_pt_set_arc(2,index1),ob_pt_set_arc(3,index1)])

camtarget('manual');

camtarget(gca,[pivot_point_coord(1), ...

pivot_point_coord(2), pivot_point_coord(3)])

drawnow

% view(az_1(index1)+90,el_1(index1))

set(gca, 'Projection','perspective');

set(gca, 'color', background);

set(gca,'GridColor',grid_color)

grid on;

xlabel('X'); ylabel('Y'); zlabel('Z');

xlim([(-scale_axis*radius_of_observation)+X_Center ...

(scale_axis*radius_of_observation)+X_Center]);

ylim([(-scale_axis*radius_of_observation)+Y_Center ...

(scale_axis*radius_of_observation)+Y_Center]);

72 zlim([(-scale_axis*radius_of_observation)+Z_Center ...

(scale_axis*radius_of_observation)+Z_Center]);

axis equal

% load an ascii STL sample file (STLGETFORMAT and STLREADASCII) house = stlPlot(vertices,faces,name, scale_stl, rotation_stl, shift);

frame_perspective = getframe(fig1);

writeVideo(writerObj, frame_perspective);

pause(pause_time)

clf

end

close(writerObj);

%% Plotting God's eye view
%=========================================================================
% This section generates the figure with a fixed camera view showing the
% broken illusion of the perspective transform when not at the observation
% point used in the transform
%=========================================================================
            figure(2)
            x45_obs = (radius_of_observation * cos(-45)) + X_Center;
            y45_obs = radius_of_observation * sin(-45);
            z45_obs = 1*1 + Z_Center + Vertical_offset_observer;
            disp('ARC MOTION')
            for index1 = 1:length(ob_pt_set_arc) %Go through each observation point
                if enable_plot_pivot_point == 1
                    %plot pivot point
                    plot3([0,0],[0,0],[0,0],'-o','MarkerSize',5,...
                        'Color', pivot_color)
                    set(gca,'GridColor',grid_color)
                end
                hold on;
                % plot original object so that both the projection
                % and original can be seen
                for index = 1:length(Z)-1
                    plot3(X(index:index+1),Y(index:index+1),Z(index:index+1),...
                        'color',[R(index) G(index) B(index)],'lineWidth',2)
                    xlabel('X'); ylabel('Y'); zlabel('Z');
                    xlim([-1.1*radius_of_observation 1.1*radius_of_observation]);
                    ylim([-1.1*radius_of_observation 1.1*radius_of_observation]);
                    zlim([-1.1*radius_of_observation 1.1*radius_of_observation]);
                    set(gca, 'Projection','orthographic');
                    hold on
                end
                index2 = length(Z)*(index1)-1;
                while index2 > 1+length(Z)*(index1-1) %Plot projection
                    plot3(projected_points_1(1,index2:index2+1),...
                        projected_points_1(2,index2:index2+1),...
                        projected_points_1(3,index2:index2+1),'color',...
                        [projected_R(index2) projected_G(index2) ...
                        projected_B(index2)],'lineWidth',0.8)
                    hold on
                    xlabel('X'); ylabel('Y'); zlabel('Z');
                    xlim([-1.1*radius_of_observation+X_Center ...
                        1.1*radius_of_observation+X_Center]);
                    ylim([-1.1*radius_of_observation ...
                        1.1*radius_of_observation]);
                    zlim([-1.1*radius_of_observation+Z_Center ...
                        1.1*radius_of_observation+Z_Center]);
                    set(gca, 'Projection','orthographic');
                    set(gca,'GridColor',grid_color)
                    % set(gca, 'Projection','perspective');
                    index2 = index2 - 1;
                end
                %plot observation point
                disp('OBSERVE')
                scatter3(ob_pt_set_arc(1,index1),ob_pt_set_arc(2,index1),...
                    ob_pt_set_arc(3,index1),55,'MarkerEdgeColor',...
                    observation_color,'MarkerFaceColor',observation_color)
                set(gca,'GridColor',grid_color)
                camva(cam_VA)
                campos([x45_obs,y45_obs,z45_obs])
                camtarget('manual');
                camtarget(gca,[pivot_point_coord(1), pivot_point_coord(2),...
                    pivot_point_coord(3)])
                % view(45,45)
                % campos('auto')
                grid off;
                set(gca, 'Projection','orthographic');
                set(gca, 'color', background);
                % load an ascii STL sample file (STLGETFORMAT and STLREADASCII)
                house = stlPlot(vertices,faces,name, scale_stl, rotation_stl, shift);
                pause(pause_time)
                frame = getframe(gcf);
                writeVideo(writerObj_GE, frame);
                clf
            end
            close(writerObj_GE);
        otherwise
            disp('')
    end
end % end enable plot

%% Scale data to correct range
%=========================================================================
% This is an optional section used to shift the range of the data
%=========================================================================
% subtract average value in y axis (leia y axis, which is the Z axis in
% matlab) to center at 0
% avg = mean(projected_points_1(3,:));
% projected_points_1(3,:) = projected_points_1(3,:) - avg;
%
% subtract average value in x axis to center at 0
% avg = mean(projected_points_1(1,:));
% projected_points_1(1,:) = projected_points_1(1,:) - avg;

%% Shift Data to all positive values
%=========================================================================
% This section changes the range of values to be compatible with
% current OTD software
%=========================================================================
find_min = min(projected_points_1);
mini = min(find_min);
if mini < 0
    % make all values positive; the minimum value should be 0 or greater
    projected_points_1 = projected_points_1 + abs(mini);
else
    % do nothing because the values are all positive or zero
end
projected_R = projected_R .* 255; % values from 0 to 255
projected_G = projected_G .* 255;
projected_B = projected_B .* 255;

%% Write coordinates to output file
%=========================================================================
% This section is used to produce a file that can be read by current OTD
% display software at Brigham Young University
%=========================================================================
formatSpec = "%.0f,"; % add .0 to all values to make them doubles
formatSpec_end = "%.0f";
fileName = shape+(length(Y))+"_points_"+(number_of_observation_points)...
    +"_positions_"+offset_real_object+"_offset"+...
    radius_of_observation+"obser"+".txt";
if use_projection_as_object
    fileName = shape+(length(Y))+"_points_"+...
        (number_of_observation_points)+"_positions_"+offset_real_object...
        +"_offset"+radius_of_observation+"obser"+"_STATIC.txt";
end
fileID = fopen(fileName,"w"); %create the file to write to
fprintf(fileID,"// Shape: ");
fprintf(fileID,shape);
if use_projection_as_object
    fprintf(fileID," STATIC");
end
fprintf(fileID,"\n// ");
fprintf(fileID,"File Name: ");
fprintf(fileID,fileName);
fprintf(fileID,"\n// ");
fprintf(fileID,formatSpec_end,(number_of_points+1));
fprintf(fileID," points per ");
fprintf(fileID,shape);
fprintf(fileID," (rollover point), ");
fprintf(fileID,formatSpec_end,(number_of_observation_points));
fprintf(fileID," ");
fprintf(fileID,shape);
fprintf(fileID," frames ");
fprintf(fileID,"\n");
fprintf(fileID,"// Offset from pivot point: ");
fprintf(fileID,formatSpec_end,offset_real_object);
fprintf(fileID,"\n");
fprintf(fileID,"// Total number of points: ");
fprintf(fileID,formatSpec_end,length(projected_R));
fprintf(fileID,"\n");
fprintf(fileID,"const uint16_t vector_XCoord0_16[] PROGMEM_FAR = {");
fprintf(fileID,formatSpec,projected_points_1(1,1:end-1));
fprintf(fileID,formatSpec_end,projected_points_1(1,end));
fprintf(fileID,"};\n");
%intentionally adding Z data here
fprintf(fileID,"const uint16_t vector_YCoord0_16[] PROGMEM_FAR = {");
fprintf(fileID,formatSpec,projected_points_1(3,1:end-1));
fprintf(fileID,formatSpec_end,projected_points_1(3,end));
fprintf(fileID,"};\n");
fprintf(fileID,"const uint8_t vector_Red0_8[] PROGMEM_FAR = {");
fprintf(fileID,formatSpec,projected_R(1:end-1));
fprintf(fileID,formatSpec_end,projected_R(end));
fprintf(fileID,"};\n");
fprintf(fileID,"const uint8_t vector_Green0_8[] PROGMEM_FAR = {");
fprintf(fileID,formatSpec,projected_G(1:end-1));
fprintf(fileID,formatSpec_end,projected_G(end));
fprintf(fileID,"};\n");
fprintf(fileID,"const uint8_t vector_Blue0_8[] PROGMEM_FAR = {");
fprintf(fileID,formatSpec,projected_B(1:end-1));
fprintf(fileID,formatSpec_end,projected_B(end));
fprintf(fileID,"};\n");
fprintf(fileID,"const uint8_t offset_to_center_array[] PROGMEM_FAR = {");
fprintf(fileID,formatSpec_end,abs(mini));
fprintf(fileID,"};\n");
fclose(fileID); %close file
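% For reference, the section above produces a text file of C-style array
% declarations (one array per coordinate or color channel) that the OTD
% control software can use directly. The numeric values shown below are
% hypothetical examples, not thesis data; the file takes the general form:
%   // Shape: <shape name>
%   const uint16_t vector_XCoord0_16[] PROGMEM_FAR = {10,20,30,40};
%   const uint16_t vector_YCoord0_16[] PROGMEM_FAR = {12,22,32,42};
%   const uint8_t vector_Red0_8[] PROGMEM_FAR = {255,0,128,64};
%   ...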

%% Projection: Viewer translation in arc, only first point
projected_points_for_graph = zeros(3,1);
projected_points_for_graph(:,1) = []; % remove zeros added at start
for inc1 = 1:length(ob_pt_set_arc)
    %change the observation point
    ob_pt = [ob_pt_set_arc(1,inc1),ob_pt_set_arc(2,inc1),...
        ob_pt_set_arc(3,inc1)];
    if(use_projection_as_object)
        x_to_project = projected_original(1,inc);%X(inc);
        y_to_project = projected_original(2,inc);%Y(inc);
        z_to_project = projected_original(3,inc);%Z(inc);
    else
        x_to_project = X(1);
        y_to_project = Y(1);
        z_to_project = Z(1);
    end
    % observation point
    point_of_observation = ob_pt;
    % point on 3d model object
    point_on_object = [x_to_project,y_to_project,z_to_project];
    % plane to project to (location of particle in physical space)
    plane_to_project_to = [0, projection_plane_depth ,0];
    % normal vector of plane
    normal_vector_of_plane_to_project_to = [0,1,0];
    [point_of_intersection,feedback_on_intersection] = ...
        plane_line_intersect(normal_vector_of_plane_to_project_to,...
        plane_to_project_to,point_of_observation,point_on_object);
    projected_points_for_graph = horzcat(projected_points_for_graph,...
        point_of_intersection');
    if inc1 == 1
        start_point_x_moon = projected_points_for_graph(1,inc1);
        start_point_y_moon = projected_points_for_graph(2,inc1);
    end
end

%% plot for determining translation vs rotation
for index = 1:length(ob_pt_set_arc)
    distance_from_start = sqrt((start_point_x - ...
        ob_pt_set_arc(1,index))^2 + (start_point_y - ...
        ob_pt_set_arc(2,index))^2);
    % viewing points in spherical coordinates
    [az_1,el_1,radius_computed_1] = cart2sph(ob_pt_set_arc(1,index),...
        ob_pt_set_arc(2,index),ob_pt_set_arc(3,index));
    az_1 = rad2deg(az_1);
    rotation_from_start = az_1;
    distance_from_start_moon = sqrt( (start_point_x_moon - ...
        projected_points_for_graph(1,index))^2 + (start_point_y_moon - ...
        projected_points_for_graph(2,index))^2 );
    % viewing points in spherical coordinates
    [az_1_moon,el_1_moon,radius_computed_1_moon] = cart2sph(...
        projected_points_for_graph(1,index),...
        projected_points_for_graph(2,index),...
        projected_points_for_graph(3,index));
    az_1_moon = rad2deg(az_1_moon);
    rotation_from_start_moon = az_1_moon;
    figure(3)
    scatter(az_1,distance_from_start,'r')
    hold on
    scatter(0,distance_from_start_moon,'b')
    xlabel('rotation','FontSize',12,'FontWeight','bold','Color','b');
    ylabel({'Translation'},'FontSize',12,'FontWeight','bold','Color','b');
    legend('Observer', 'Image Point')
end
figure(4)
plot3(projected_points_for_graph(1,:),projected_points_for_graph(2,:), ...
    projected_points_for_graph(3,:),'b')
hold on
plot3(ob_pt_set_arc(1,:),ob_pt_set_arc(2,:),ob_pt_set_arc(3,:),'r')
disp(' THE END ');

%% Functions
function [I,check] = plane_line_intersect(n,V0,P0,P1)
%=========================================================================
% plane_line_intersect computes the intersection of a plane
% and a segment (or a straight line)
% Inputs:
%   n:  normal vector of the Plane
%   V0: any point that belongs to the Plane
%   P0: end point 1 of the segment P0P1
%   P1: end point 2 of the segment P0P1
%
% Outputs:
%   I is the point of intersection
%   check is an indicator:
%       0 => disjoint (no intersection)
%       1 => the plane intersects P0P1 in the unique point I
%       2 => the segment lies in the plane
%       3 => the intersection lies outside the segment P0P1
%
% Example:
% Determine the intersection of the plane x+y+z+3=0 with the segment P0P1:
% The plane is represented by the normal vector n=[1 1 1]
% and an arbitrary point that lies on the plane, ex: V0=[1 1 -5]
% The segment is represented by the following two points
% P0=[-5 1 -1]
% P1=[1 2 3]
% [I,check]=plane_line_intersect([1 1 1],[1 1 -5],[-5 1 -1],[1 2 3]);
% This function was originally written by:
%   Nassim Khaled
%   Wayne State University
%   Research Assistant and PhD candidate
% Modified by:
%   Wesley Rogers
%   Brigham Young University
%   Research Assistant and Master's candidate
%=========================================================================
I = [0 0 0];
u = P1-P0;
w = P0 - V0;
D = dot(n,u);
N = -dot(n,w);
check = 0;
if abs(D) < 10^-7 % The segment is parallel to plane
    if N == 0 % The segment lies in plane
        check = 2;
        return
    else
        check = 0; %no intersection
        return
    end
end
%compute the intersection parameter
sI = N / D;
I = P0 + sI.*u;
if (sI < 0 || sI > 1)
    check = 3; %The intersection point lies outside the
               %segment, so there is no intersection
else
    check = 1;
end
end % end of function
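To illustrate how the script above uses this routine, the following minimal sketch projects a single object point onto a y-normal image plane as seen from one observer position, mirroring the script's call pattern. The coordinates are made-up example values, not parameters from the thesis experiments, and the snippet is intended to be run separately with plane_line_intersect available (for example, saved as its own function file).

% Minimal usage sketch for plane_line_intersect (illustrative values only)
n  = [0 1 0];          % normal of the image (OTD) plane
V0 = [0 0.2 0];        % any point on the image plane
P0 = [0 1 0.1];        % observer position
P1 = [0.05 -0.5 0.15]; % point on the 3D model behind the plane
[I,check] = plane_line_intersect(n,V0,P0,P1);
if check == 1 || check == 3
    disp(I) % where the observer-to-object line pierces the image plane
end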


Appendix B: Blender Simulation of Virtual Image

The 3D modeling program Blender was used to create Visualizations 1, 2, 3, and 4 and Supplementary Figure 1 in Chapter 3. The approach discussed below may be applicable to other software packages, but the steps may vary slightly depending on the tools available. Blender was selected for its open-source nature and the author's familiarity with it. This appendix is not an exhaustive tutorial of Blender and assumes some reader familiarity with the program; additional beginner tutorials can readily be found through online resources such as YouTube.

Step 1. Define the scene. A complete scene is needed to produce a complete visual recreation. With the scene defined, add the elements that you wish to see projected onto the OTD plane. These elements should be placed at the world coordinates, and at the world scale, at which the observer is intended to perceive them. That is to say, model the scene as if the objects were real world objects and no OTD were in use.

Step 2. Define the camera. For the process described here to work, the camera must remain the same from the projection step through the final viewing step. This is due to the nature of the perspective projection: as shown in Chapter 3, Equations 1, 2, and 3, the perspective projection is spatially dependent, meaning that the view from one position is not necessarily the same as the view from another. This can be seen (intentionally) in Visualizations 3 and 4 of Chapter 3, where the bottom right-hand corner of the video is a secondary view of the scene; this is not the view from the camera that was used to generate the perspective coordinates. Using a different camera view will result in a projection that does not accurately recreate the original visual of the scene. It will also cause certain key features to fall out of alignment with world markers. An example of this occurs in Visualization 4 of Chapter 3, where the moon is projected to appear outside the window: from the secondary God's-eye view, the wood of the window frame does not align with the transparent section of the moon (see Figure B-18). When the transparent section does not align, we violate the visual cue of occlusion (which, as discussed in Chapter 1, is a highly influential cue), and the intended illusion is broken. The camera can move through the scene in any direction and at any speed, as long as the projection is synced to that motion. A short numerical sketch after Figure B-18 illustrates this spatial dependence using the intersection routine from Appendix A.

Figure B-18. Left: Zoomed-in image from the God's-eye view of Chapter 3, Visualization 4. Looking at the moon in the center of this image, we can see that the moon is not occluded properly by the window frame. Right: The same frame from the camera used to generate the perspective projection.
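To make this spatial dependence concrete, the minimal sketch below reuses the plane_line_intersect function from Appendix A with made-up example coordinates (these are not values from the thesis experiments). The same world object point projects to different points on the OTD image plane for two different observer positions, which is why a render generated for one camera only reproduces the scene correctly when viewed from that same camera.

% Illustrative sketch of the spatial dependence of the perspective
% projection (example values only, using Appendix A's plane_line_intersect)
n    = [0 1 0];     % normal of the OTD image plane
V0   = [0 0.2 0];   % a point on the image plane
P1   = [0 -1 0];    % world object point behind the plane
obsA = [0 1 0];     % observer used to generate the projection
obsB = [0.3 1 0];   % a different (God's-eye) observer
IA = plane_line_intersect(n,V0,obsA,P1);
IB = plane_line_intersect(n,V0,obsB,P1);
disp(IA - IB)       % nonzero: the projected point shifts with the observer

Because the two observers see the same object point land at different places on the image plane, only the observer whose position matches the projection perceives the intended alignment.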

Step 3. Record the projection. With the scene built, the desired objects placed into the scene at the appropriate size and scale, and the camera path determined, the projection process can begin. The process can be done in multiple ways; here we focus primarily on the easiest method, which is to avoid directly handling any of the data and instead take advantage of the perspective projection built into Blender's rendering functions. A harder method would be to take the approach from Appendix A and implement it in Blender's scripting functions. With the easy approach, the basic idea is to render the scene with the world objects visible and then use the output of that render as the OTD image. Visualizations 3 and 4 of Chapter 3 were made with this method.

Step 4. Apply the recorded projection to a plane. Now that the desired perspective has been recorded, it must be brought back into the scene as an OTD. This can be done most simply with a plane defined inside the OTD volume. The plane should maintain the aspect ratio of the original render to prevent distortion, and it can be placed anywhere in the OTD volume such that the plane faces the camera (its normal lies along the camera's viewing direction) and covers the intended world objects to be projected. In the example of the train in Chapter 3, Visualization 3, the plane was placed in front of the train at the pivot point of rotation. The material shown in Figure B-19 was then applied to the plane, completing the process. The material essentially has three parts: the image, transparency, and overall transparency. The image is the video or series of images generated in Step 3. The transparency allows the projected objects to appear to be occluded by scene objects. The overall transparency is used to make the image plane appear at a certain time stamp in the scene.

Figure B-19. The material node structure for the plane displaying the "virtual" images.
