<<

DRAFT: Procedural animations in interactive art experiences - A state of the art review
C. Tollola
[email protected]
University of Ottawa

1. Abstract

This state of the art review broadly surveys novel research used in the creation of virtual environments for interactive art experiences, with a specific focus on the application of procedural animation in spatial augmented reality (SAR) exhibitions. These art exhibitions frequently combine sensory displays that appeal to, replace, or augment the visual, auditory, and haptic senses. We analyze and break down art-technology innovations of the last three years, and identify the most recent and vibrant applications of interactive art experiences through a review of numerous installations, studies, and events. Display mediums such as virtual reality, augmented reality, mixed reality, and projection mapping are overviewed in the context of art experiences such as visual art exhibitions, park or historic site tours, live concerts, and theatre. We explore this research and extrapolate how recent innovations can lead to the applications that will be seen in the future.

2. Introduction

Interactive art is an exciting field for researchers, technologists and artists, leveraging modern research to provide art entertainment to the masses. In 2020, the need for social distancing during a global pandemic increased demand for experiences that do not require travel.

In recent years, Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), Extended Reality (XR), Projection Mapping / Spatial Augmented Reality (SAR) and robotics have come to be considered the most technologically advanced artistic mediums. These technologies extend, display, and computerize traditional artistic disciplines such as visual art, film, games, music, theatre, and puppetry. The depth of man-made reality changes according to which of these display mediums is applied. In the mid-1990s, these terms were conceptualized as the "reality-virtuality continuum" [1], which described a spectrum of mediums from "entirely real" to "entirely virtual".

VR involves computer-generated visuals and environments that completely surround and replace the real world of a user. The user has a central view and is able to view and/or interact with the virtual environment in real-time; highly engaging hardware such as a VR headset immerses the user both visually and auditorily. Mixed reality is a broad term for blends of digitally created content and a real-world environment, which can involve AR and VR simultaneously. MR can encapsulate, or be compared to, immersive entertainment or "hyper-reality", which involves SAR or projection mapping virtual content onto real environments. Generally, SAR involves camera-projector systems capable of projecting laser displays onto flat or 3D surfaces. XR is an umbrella term used to group VR, AR and MR, together with the software, hardware, methods and experiences needed to realize these levels of virtual reality. The fluidity and realism of the interaction between the virtual art experience and a participant vary with the devices chosen, as explained in section 5.

Procedural animation refers to the generation of rules-based motion, stemming from inverse and forward kinematics (a minimal sketch follows the topic list below). These rules generally apply to robotics control, but software such as Unity and Maya allows amateurs, established production studios, programmers and artists to benefit from a suite of tools that apply these formulas to create realistic animations in fields such as virtual reality, computer games, and movies. Animation is the essential ingredient of a visually-focused interactive art experience, and it is also graphics-heavy: TeamLab's "Borderless", for example, synchronically renders animations in 8K resolution with laser projectors in real-time, providing visitors to the museum a dark walk into a mixed and augmented reality without the use of head-mounted displays. Several advances in spatial augmented reality attempt to solve the latency of media, calibration, accuracy, depth-of-field, and 3D reconstruction of real buildings and objects. We therefore present the essential innovations in calibration and resolution, and the unique artistic effects that exist thanks to both the hardware and the algorithms used to construct this technology in arguably its most direct, and artistic, form. Through the exploration of the limitations of such technologies in interactive art, conclusions are drawn on how we can create experiences that connect humans to technology and inspire them to believe the world around them has changed from reality, applying this concept to inspire future generations to innovate through technology.

We present the following topics:
● Case studies and emerging applications
● Projection mapping
● Hardware
● Possible artistic effects and animations
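To make the kinematics basis concrete, the following is a minimal sketch of forward kinematics for a planar two-joint limb, the kind of rule that the tools above evaluate when posing a rig. The segment lengths and angles are illustrative and not taken from any system discussed in this review.

```python
import math

def forward_kinematics(lengths, angles):
    """Compute 2D joint positions of a planar limb chain.

    Each joint angle is relative to the previous segment, as in a
    skeletal rig; the last returned point is the end effector.
    """
    x, y, total_angle = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for length, angle in zip(lengths, angles):
        total_angle += angle
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((x, y))
    return positions

# Upper arm and forearm of an illustrative rig, elbow bent 45 degrees.
print(forward_kinematics([0.3, 0.25], [math.radians(30), math.radians(45)]))
```

Inverse kinematics runs the same model in the other direction, solving for the joint angles that place the end effector at a desired target.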
3. Case Studies and Emerging Applications

The concepts above have been applied in physical environments: museums, amusement parks [c1], arcades, and concerts [7, c2]. This section also focuses on projector-camera systems where various objects, such as an object held by the user, clothes, a human body, or a face, are projection targets [26]. We introduce applications that attempt to transport users into spaces that are completely man-made. Evidently, when it becomes possible to virtually travel to distant environments, "here" and "now" may begin to take on new meanings.

3.1 MR, XR and Projection Mapping (SAR) Applications

Hyper-reality applications have become tremendously present in recent years due to technological advances. Laila [4] is an opera experience with a storyline, produced by the Finnish National Opera in the space of a 360-degree dome. In August 2020 it will attempt to place users as the driving forces for animations displayed by laser projectors and partially generated by artificial intelligence. Procedural animations such as flocks of birds are affected by the parameters of swarm intelligence algorithms and chaos theory. Triggers by the audience affect the rendering of bird motivations, including their group separation sensitivity, ease of alignment, and cohesive movement; they otherwise act according to field functions such as "it is easier to go in one direction than another". Examples of triggers are user movement and group dynamics in the space, which affect the birds that fly in flocks on the screen. The butterfly effect implemented introduces unpredictability into the animations, such that small initial changes in the system created by interaction may accumulate into larger changes later on. The specific hardware implemented in the project is not known.
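The exact implementation behind Laila is not published; the following is a minimal sketch of the kind of rule-based flocking described, with separation, alignment, and cohesion weights standing in for the audience-tunable parameters. All names and values here are illustrative assumptions.

```python
import numpy as np

def boids_step(pos, vel, sep_w=1.5, align_w=1.0, coh_w=1.0,
               radius=2.0, field=np.array([0.05, 0.0]), dt=0.1):
    """One update of a rule-based flock: separation, alignment, cohesion,
    plus a constant 'field function' bias making one direction easier."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < radius)
        if near.any():
            separation = -offsets[near].sum(axis=0)       # steer away
            alignment = vel[near].mean(axis=0) - vel[i]   # match heading
            cohesion = offsets[near].mean(axis=0)         # move toward center
            new_vel[i] += dt * (sep_w * separation +
                                align_w * alignment +
                                coh_w * cohesion)
        new_vel[i] += dt * field  # easier to fly in one direction
    return pos + dt * new_vel, new_vel

# A small flock; an audience trigger could perturb pos, vel, or the weights,
# and the perturbation compounds over steps (the 'butterfly effect').
pos = np.random.rand(20, 2) * 10
vel = np.random.randn(20, 2)
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```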
In 2018, mixed reality creation studio TeamLab opened "Borderless" in Tokyo, a dedicated 10,000 square meter projection mapping museum where corridors lead to different spaces in which artworks of different styles are displayed, generally in the form of interactive animations. In 2019, it drew 2.3 million visitors from 160 countries around the world. Some 520 computers and 470 Epson projectors of 6,000 to 15,000 ANSI lumens are used to render real-time animations in 8K quality [5]. The museum has shown different artwork over the years; in one interactive display, users are able to touch flowers projected on walls and a real-time render dissipates the flower [image 1]. Such animations use generated randomness for unique content, similarly to Laila, but the details are unknown; the specific sensors are also not known, but could involve depth sensors, RGB cameras and LiDAR. Localised and spatial sound in soundscapes, delivered by Yamaha VXC and VXS series loudspeakers, adds to immersion and to the transitions between virtual corridors and into the experience. A tearoom area in the museum leverages pose estimation and object tracking to project different animations onto tea in a cup or tea that has spilled [video 2].

In 2019, the art installation "gravityZERO" combined a programmed robotic arm that suspends a participant, projection mapping of realistic images of outer space, and music to mimic the sensation of zero gravity [v]. A projection mapped gym with virtual targets was developed to allow sports interaction between players with mobility issues and those without [19]. Recently, jelly-type edible retroreflectors have been applied directly onto a pancake [24] to be detected by projection mapping systems; they can be detected in low light conditions, an advantage over edible QR codes [25]. This is another medium onto which art can be projected and possibly interacted with.

Concerts streamed in real time over the internet in 2020 have evolved into a mainstream phenomenon, with a single concert grossing about $20 million in sales [6]. Other such "livestream" adoptions include an artist streaming a concert in a massively multiplayer online game virtual environment [7] and a television singing competition demonstrating projection onto real-life objects and sets [8]. Other applications include dance [14] and a concert using AR animations [9]. Projection mapping in livestream concerts followed shortly after [video 3]. Popular social media platform Facebook will introduce a paid streaming platform [10], due to high demand for live interactive art experiences.

Figure 2: Application of 3D graphics simulation in a singing performance; the background LED panel tracks the camera movement to render a realistic 3D background [8]

3.2 AR Applications

Art and graphics applied in augmented reality have been shown to enhance application areas such as sports, zoos, and historic sites, and could see application in the fashion industry. Panasonic demonstrated how AR feedback in ping pong allows the audience to feel immersed during a match [11]. Museums and zoos have used AR to annotate and provide commentary on exhibits. One such project was developed for the Singapore Zoo [33], where users wearing a Magic Leap One headset can view the names and details of different zoo animals, view AR skeletal structures applied to the animals, play mini-games with generated images, and use an AR-generated tape measuring tool. The AR zoo experience involves the surrounding spatial 3D environment, controller touch and triggers, 3D audio elements, and head tracking. Cinema is also exploring annotation through AR [27], and an agenda has been proposed for AR-augmented television watching [28]. 360-video AR art attached to a physical object is being sold with capabilities to connect to AR headsets [13].

AR is also proposed to revitalize cultural heritage sites. In Brisbane, Australia, an AR adventure game is proposed to deliver a historically accurate sense of "spirit" and recreate a tour experience [u]; the heritage site previously had only signage to communicate the history of the house to visitors.

In 2018, Disney successfully created a system that overlays a person with a "watertight" 3D costume using only a monocular RGB image [q]. They use a 3D costume model, manually create 3D skeletons with different proportions and poses wearing the costume, and finally best-match the resulting 3D costume data set to the participant's skeleton taken from 2D pose tracking.

Figure 3: 3D costume augmentation from a single monocular camera image [q]

Virtual puppeteering is another AR implementation, in which a user can use their smartphone to move and interact with different animated characters through their camera. The system leverages procedural animation: predefined motions for a given character are "rigged" with joints using Unity, and interactions with the user trigger generated motions. The puppeteering application uses a state machine approach to determine the rules of animated character movement; a state is defined by internal logic, the root position of the character, the configuration of the animation, and the inverse kinematic state.
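A minimal sketch of such a state machine is shown below. The states, clips, and trigger names are hypothetical, since the application's actual rule set is only described at a high level.

```python
from dataclasses import dataclass, field

@dataclass
class PuppetState:
    """A state as described above: an animation configuration, an IK flag,
    the internal logic (allowed transitions), and the character's root."""
    clip: str
    uses_ik: bool
    transitions: dict = field(default_factory=dict)  # trigger -> next state
    root: tuple = (0.0, 0.0, 0.0)

# Hypothetical states for a puppet character.
states = {
    "idle": PuppetState("idle_loop", uses_ik=False,
                        transitions={"tap": "wave", "drag": "walk"}),
    "wave": PuppetState("wave_once", uses_ik=True,
                        transitions={"done": "idle"}),
    "walk": PuppetState("walk_cycle", uses_ik=True,
                        transitions={"release": "idle"}),
}

def step(current: str, trigger: str) -> str:
    """Advance the machine when a user interaction fires a trigger;
    unknown triggers leave the state unchanged."""
    return states[current].transitions.get(trigger, current)

state = "idle"
for trigger in ["tap", "done", "drag", "release"]:
    state = step(state, trigger)
    print(state, states[state].clip)
```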

One limitation of the costume work is that greater accuracy in overlaying the costume relied on fine-tuning parameters to certain people (i.e., limb length variance) or poses. Another limitation of the method is that it lacks robust pose and shape estimation, which could be solved by machine learning optimized to run in real-time. This innovation could be extended to art experiences such as a means of trying on clothing, either for historic museums or for fashion runway shows, which have struggled with interactivity during a global pandemic [m]; AR has already been used in fashion magazines [e].

Figure 4: State machine used between puppet character animations

3.3 VR Applications

Virtual reality is the highest level of immersion currently offered by research, and generally involves a head-mounted display that may include headphones and controllers. A participant may use virtual reality to explore a new environment, watch a movie, or play a game. The viewing of 360-degree video and image artworks in the fine art, film, and photography disciplines, which are compatible with headsets but do not require them, has been debated as a virtual reality in itself due to its 3D-mimicking nature.

An example of virtual reality applied in 2020 is a World War II VR experience that combines realistic sound and historical data of real war trenches to deliver an immersive educational experience [29]. Major publishing company National Geographic also published a cinematic VR series from the point of view of real soldiers [30]. Another VR experience takes the form of a virtual impressionist-style painting world: visitors to the world become brush strokes themselves as they move through the space [31]. A VR pottery workshop experience received high praise in 2019, allowing a new virtual medium for art creation [t]. As well, a fully VR museum was created within the popular block-shaped builder Minecraft, with areas for visitors to create their own art and disrupt museum exhibits [32]. In video games, the use of a live host in multiplayer escape games has been proposed to improve interactivity between players [34].

Extending this idea of interactivity as art, social art experiences through games have increased: the VR social experience Rec Room, launched in 2016, now receives 40 million monthly visits and includes 3 million user-created games built with an in-game visual node-based programming language [35]. Furthermore, social media platform Facebook is setting the stage for mainstream adoption of VR by launching a similar platform, Horizons, currently in beta release [36]. A more defined example of a social VR game, a collaborative baseball batting cage, intersects social art and sports experiences [37].

In the film industry, film festivals have accepted VR films since 2016 [38], and over 15 VR and XR film festivals have taken place online in 2020 [12]. A notable example executes an interactive narrative in which the watcher can make decisions that affect the ending [39, s]. The immersive filmmaking studio Felix & Paul released a VR series, "Space Explorers: The ISS Experience", in 2018 for headsets and 360-video-enabled mobile devices, depicting the lives of NASA astronauts on the International Space Station (ISS). The last episode will film the first-ever spacewalk in VR outside of the ISS. The studio modified a Z-Cam V1 Pro camera with nine 4K sensors to create 8K 360-degree video [e].

3.4 Robotics and sensing technologies
Social robots have been applied for the purpose of interactive storytelling for young children. A robot made as a companion for children with cancer invited children to make choices about the story, to reenact parts of the story together with it, and to record and replay different sound effects. Compared to conventional media, the study concluded that the robot improved enjoyment of the story and that children recalled more about it [40]. Robots in standup comedy have been improved to tell jokes with accurate timing [41], receiving 69.5% positive reactions to jokes from a live audience, and therefore could see realistic employment in the future.

Aerial robots, or drones, have grown as an artistic medium due to the introduction of swarm path-planning. In 2017, their GPS positioning abilities were used at a Super Bowl in America to display the US flag with mounted LED panels [42]. In 2018, an artist used a drone to create graffiti artwork [43], and Disney developed an autonomous spray-painting drone [r]. Through various path-planning algorithms, drones have produced shows similar to fireworks; these algorithms are compared in [p]. The government of South Korea used 300 drones to display an image depicting essential workers wearing masks during the Covid-19 pandemic [b]. In the future, an application might be created to allow artists without expert knowledge of path-planning and drone limitations to choreograph their own drone shows. Drones have also been used as haptic devices for VR/AR, as outlined in the haptics section. A dedicated museum for drone art and drones-as-sculptures in Lisbon has been investigated [44].

The idea of utilizing a network of sensors to recreate a real marsh in virtual reality has been applied by the MIT Media Lab and NYU with "Doppelmarsh" [45]. The virtual environment uses textures extracted by camera from the real environment, rendered as "Doppelmarsh" visual assets using the Unity3D game engine. Furthermore, readings from wind, temperature and precipitation sensors, as well as cameras that detect rain, snow, fog and changes in season, are sent to a websocket server to facilitate sensor-data mapping, which is then analyzed with the Google Vision API, stored, and rendered by the game engine.

Figure 5: The virtual marsh and the real-world marsh

Doppelmarsh's "reality resynthesis" can be applied to a variety of mediums, such as VR, and be used as a means of experiencing real-life locations. For example, artists created the outer-space-themed installation "Orbits", which follows the movements of space junk, espionage satellites, and normal satellites in real-time. The installation also produces a complex soundscape of sine waves whose frequencies depend on the altitude of a specific satellite above the surface of the Earth and whose volumes depend on its distance from the observer [c]. Another sensor-driven musical installation produces a composition dependent on live data from sharks tracked with GPS [d].

Other applications of interactive art do not fit into any of the categories above. The first is "video art" which utilized generative neural networks to photorealistically replace the face of an actor in a video [g]; an updated version of this recent media phenomenon is explained in section 6. The second is a co-created art experience in which professional artists received users on a cloud-hosted video platform and adjusted the participants' video feeds stylistically [y].

3.5 Summary

Through this section, we have shown that emerging technologies are often applied in art as an experimental vehicle. The application and mixing of extended reality and robotics technology will only increase in the field of interactive art.

4.1 Projection mapping

The implementation of modern projection mapping involves the projection of light onto a surface, especially with the help of a camera as a sensor. Such surfaces are objects either entirely known and modelled in computer-aided software such as AutoCAD, or reconstructed with a 3D reconstruction method such as 3D point clouds (see figure 7), the optimization of which is a research field of its own. Typical projection surfaces are walls, regular or irregular shapes, and often indoor environments with complex geometry. Projection mapping has also been applied to water [17, 23] and fog [21, 22] as display mediums.

Figure 6: Mesh of watertight triangles to represent the geometry of the scene by surface [o]

Figure 7: 3D point cloud of colored 3D points [o]

4.2 Overview

The recent widespread use of projection mapping as a tool in digital art museums and live concerts has shown that projectors have improved in resolution, dynamic range, frame rate, power consumption, and colour gamut in recent years. The price for small companies to produce projection mapped artwork has also dropped, and projectors and software kits are available to consumers. Projectors can be classified into two categories: those with the goal of projecting undistorted assets onto (complex) geometry, and those with the goal of controlling the color appearance of the projection. It is commonplace to have a mixture of both goals.

4.3 Calibrations

Full projection systems must go through calibrations. One, known as "geometric" calibration, involves modelling the shape of the geometry of the projection surface through various techniques, as well as the physical parameters of the projectors and cameras. The other, "photometric" calibration, estimates the internal color processing of the projectors and cameras and attempts to reduce inaccuracies caused by the projection surface, such as reflectance. A visualization of the two main calibration methods is shown in Figure 8 [46]. In the geometric method, structured light patterns are captured by the camera to generate pixel correspondences between the projector-camera pair (procam). In the photometric method, color-consistent projection is generated by projecting color patterns that are sensed by cameras or colorimeters. The two calibrations combined create an accurate image construction of the projected surface.

Figure 8: Combination of geometric and photometric calibration with procams

The computer vision field in general requires accurate calibration of input and output devices and a method to register them all, and this extends to procam systems for interactive art experiences. The most common method to calibrate cameras that is still applied in practice involves capturing multiple images of a marker board on a planar surface in order to estimate focal length, principal point, and error compensation parameters that model lens distortion [47]. To make calibration quicker and more accessible to the general public, such as artists, numerous calibration methods have been proposed. The next methods treated the projector as the inverse of a camera, using structured light patterns. A more recent paper proposed that the projector identify blob patterns it itself projected onto a planar surface that was out of focus for the projector [48]. The most recent techniques involve entire pipelines for multi-procam applications with semi-automated self-calibration. In one semi-automated method, the projector projects calibration points to estimate internal and external parameters on a global scale, but requires user guidance and three orthogonal planes [49]. Another proposed method requires the surface geometry of the space to be known beforehand [50].

Disney proposed the currently most generic method for unknown surface geometries, with none of the parameter tuning that frequently needs to be carried out by experts [51]. It is notable as the only calibration method that does not require initial guesses. It requires two or more cameras and self-adapts to outlier pixel correspondences. First, a pixel correspondence is generated using a projection of structured light, and pixel pairs are matched with outliers taken into account. Then, it reconstructs the environment as a 3D point cloud and removes outliers through an iterative optimization function. As further procams are added, it employs fundamental matrix estimation for consecutive device calibration and surface reconstruction. Refer to figure 9 for a simplification of the process. Disney's method accurately calibrated 33 devices in a procam system in less than 45 minutes. Future work proposes that the system be developed to automatically suggest the number of cameras and their optimal locations for a given project.

Figure 9: Disney's procam self-calibration algorithm [51]
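As a concrete illustration of the marker-board method [47], the sketch below calibrates a camera from chessboard images with OpenCV. The board size and image path are placeholders; the approach itself is the standard one.

```python
import glob
import cv2
import numpy as np

# Interior corner count of the board (placeholder size) and the corners'
# known 3D positions on the plane (Z = 0).
cols, rows = 9, 6
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2)

obj_points, img_points = [], []
gray = None
for path in glob.glob("calib/*.png"):  # placeholder path to board photos
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimates focal length and principal point (in K) plus distortion
# coefficients modelling the lens, as described above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", rms)
print("intrinsics:\n", K)
```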

4.4 Rigid and non-rigid dynamic projection mapping

The field of projection mapping is developing methods for accurate projection in cases where the projected scene is able to move, or the projector or camera itself moves.

Rigid dynamic objects can be tracked with pose estimation relative to the projector. Obtaining a stable pose estimate can involve markers that are later diminished by the projector through radiometric compensation [52]. One such method achieved low latency since a 1000 Hz procam was used [53]. Pose estimation has also been facilitated with a single IR camera [54] (figure 10) by solving a light transport matrix obtained from the depth-of-field relation between the projectors and the object. Depth sensors have been used to estimate pose with unaligned placement of depth cameras and projectors, but with unsuitable latency for interactive displays [55].

Deformable or "non-rigid" moving objects require higher-accuracy methods for augmentation with projected light. A system was presented in 2017 using markerless human face tracking [53] with a latency of 10 ms. The system first uses a mesh (see figure 6) generated from facial expressions, then deforms it, applies a texture, and uses an extended Kalman filter for motion prediction to correct the latency delay on the projected object. Recently, high-speed projection mapping onto human arms has also been performed within 10 ms of latency [83].

Figure 10: Pipeline of the single IR camera pose estimation and projection mapping system [f]
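The systems above compensate for the few milliseconds between capture and projection by predicting motion forward in time. Below is a minimal constant-velocity Kalman-style predictor, a simplification of the extended Kalman filter used by FaceForge [53]; the noise values and rates are illustrative assumptions.

```python
import numpy as np

class LatencyCompensator:
    """Constant-velocity Kalman filter that predicts a tracked point one
    projector latency ahead, so the projected texture lands where the
    target will be, not where it was last seen."""

    def __init__(self, dt, q=1e-3, r=1e-2):
        self.F = np.array([[1, dt], [0, 1]])   # state: [position, velocity]
        self.H = np.array([[1.0, 0.0]])        # only position is measured
        self.Q, self.R = q * np.eye(2), np.array([[r]])
        self.x, self.P = np.zeros(2), np.eye(2)

    def update(self, measured_pos):
        # Predict to the current frame, then correct with the measurement.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x += (K @ (measured_pos - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

    def predict_ahead(self, latency):
        # Extrapolate position by the measured system latency.
        return self.x[0] + self.x[1] * latency

f = LatencyCompensator(dt=1 / 1000)          # 1000 Hz tracking loop
for t in range(1000):
    f.update(np.array([0.5 * t / 1000]))     # target moving at 0.5 m/s
print(f.predict_ahead(0.010))                # pose 10 ms in the future
```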

The arm-mapping research [83] used a parametric deformable model of human forearms and a regression-based accuracy compensation method for skin deformation based on texture mapping. The method requires predefined body shape parameters, but estimates joint positions in real-time; the regression was trained on a dataset of forearms with markers. Possible applications of this implementation of projection mapping might include special-effects wound makeup, allowing an artist to draw on a participant in real-time, or showing options for tattoos.

4.5 Hardware

Advances in hardware can rectify limitations of projection mapping algorithms. Galvanoscopic laser projectors have been calibrated in combination with procams, since they provide a higher color gamut for displaying content [n]. Advances in hardware will also attempt to accurately control and compensate for light emitted by projectors that unwantedly reflects from the target surface, building on algorithmic radiometric compensation techniques [56]. This might be achieved through better production, and therefore control, of colors spatially and spectrally. Further research is needed to accurately project onto dynamic non-Lambertian surfaces with complex textures.

4.6 Limitations

High dynamic range (HDR) projection is currently only achieved in dark rooms, or otherwise only for static images. The speed and latency of projection systems used in various applications require improvement, since touch interaction delay is common and disrupts immersion if feedback, e.g. from a projection, takes longer than 6.04 ms. Projectors that display 8-bit images running at 1000 Hz can be improved to further reduce motion blur [57, 58]. The noticeable appearance of pixels still occurs when a visitor to a projection mapped exhibit views the projected surface closely; such a limitation is a technical challenge that remains to be solved. To increase focal depth and resolution, mitigation of the contrast reduction caused by non-negative overlapping images, and of projected light itself, needs further investigation.

(Top) Figure 11: High-speed projection-mapped face paint deformed with facial expressions
(Bottom) Figure 12: Real-time rendering of "ocean and volcanos" based on the depth of moved sand, using a Kinect v2 RGB-D sensor and an Epson CB-S04 projector

5.1 Implementation development tools and hardware

Display surfaces and hardware devices for extended reality, including AR, VR and SAR, have a large impact on how immersive an experience is for users. Artistic effects and intentions are greatly limited or enhanced by choices in displays and interactivity methods, since the underlying technologies have only recently seen widespread application. System development kits, toolkits, and other software have simplified the process of creating interactive art with cutting-edge technologies. In this section, displays, interactive devices, and toolkits currently in use are outlined.

Toolkits such as WebXR now allow XR and AR content to be displayed in internet browsers. Projection mapping kits that assist with projector calibration and 3D or 2D environment reconstruction are readily available to artists and projection mapping studios, and have increased the spread of these applications.

Common head-mounted displays are generally divided into fully immersive VR headsets and mixed-reality AR headsets. Hardware advances have sought to improve the tracking of the user's hands and gaze. Controllers compatible with the headsets commonly offer 6 degrees of freedom in tracked movement. A limitation of interactive art, as applied to the "virtual experiences" field in general, is the lack of haptic feedback; currently proposed devices are discussed in section 5.6.

The tracking methods differ between devices, but use modern pose estimation algorithms to identify gestures from the wearer. The most common approach, when not dealing with groups of objects, uses heatmaps over detected key-point joints of the wearer, applying graph theory to link the grouped joints in the image; a state-of-the-art study can be viewed in [59].
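A minimal sketch of that heatmap step is shown below: each joint's 2D location is taken as the argmax of its predicted heatmap, and confidently detected joints are then linked into a skeleton graph. The network itself is omitted and the joint indices and skeleton edges are illustrative.

```python
import numpy as np

# Illustrative skeleton graph linking joint indices
# (e.g. head, neck, shoulders, elbows).
SKELETON_EDGES = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 5)]

def decode_keypoints(heatmaps):
    """heatmaps: (num_joints, H, W) array from a pose network.
    Returns (num_joints, 3) rows of (x, y, confidence)."""
    joints = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        joints.append((x, y, hm[y, x]))
    return np.array(joints)

def link_skeleton(joints, min_conf=0.3):
    """Keep only edges whose two endpoints were detected confidently."""
    return [(a, b) for a, b in SKELETON_EDGES
            if joints[a, 2] > min_conf and joints[b, 2] > min_conf]

fake_heatmaps = np.random.rand(6, 64, 48)   # stand-in for network output
joints = decode_keypoints(fake_heatmaps)
print(link_skeleton(joints))
```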

5.2 Toolkits

Name | Category | Description / Use case

WebXR | Web API | Support for accessing virtual reality (VR) and augmented reality (AR) devices, including sensors and head-mounted displays, on the Web; allows display of VR and AR content from any browser and collaborative virtual environments.

WebGL | Graphics library | JavaScript API for rendering 2D and 3D graphics in compatible web browsers.

Spark AR | Software development kit | Create AR experiences for Instagram.

Unreal Engine 4 | Game engine | Aids game, animation, and graphics rendering through procedural animation "rigging" and a physics engine.

Unity3D | Game engine | Aids game, animation, and graphics rendering through procedural animation "rigging" and a physics engine.

HeavyM [a7] | Projection mapping | Translates images and animations into projection mapped environments; accepts environment reconstruction information.

MadMapper [a8] | Projection mapping | Translates images and animations into projection mapped environments; accepts environment reconstruction information.

Notch [a9] | Projection mapping | Translates images and animations into projection mapped environments; accepts environment reconstruction information.

LucidWeb | Platform | Popular distribution platform for VR content.

Steam | Platform | Mainstream VR and AR game distribution.

Gravity Sketch | 3D modelling software with VR | Each stroke is a piece of 3D geometry; no drop-down menus or hotkeys. VR headset required; iPad beta version available.

Flying Shapes | 3D modelling software with VR | Converts 2D input into 3D graphic assets.

5.3 Head-Mounted Displays (HMDs): display and controllers

Name | Category | Display type | Controller DoF | Field of view | Resolution per eye | Refresh rate

Magic Leap | AR | AMOLED | 6 | 30x40 degrees | 1280x960 | 60 Hz

HoloLens 2 | AR | AMOLED | 6 | 52 degrees | 2048x1080 | 120 Hz

Oculus Rift S | VR | LCD | 6 | 90 degrees | 1280x1440 | 80 Hz

HTC Vive Pro | VR | AMOLED | 6 | 110 degrees | 1440x1600 | 90 Hz

HTC Vive Cosmos | VR | LCD | 6 | 110 degrees | 1440x1700 | 90 Hz

Samsung HMD Odyssey+ | VR | AMOLED | 6 | 110 degrees | 1440x1600 | 90 Hz

Valve Index | VR | LCD | 6 | 130 degrees | 1440x1600 | Selectable: 80, 90, 120 or 144 Hz

5.4 Head-Mounted Displays (HMDs): tracking

Name | Type | Method | Sensors

Oculus Rift S | Positional, AI-assisted | Markerless, inside-out | 5 cameras

HTC Vive Pro | Positional | Marker-based, inside-out | 2 IR-emitting beacons

HTC Vive Cosmos | Positional | Markerless, inside-out | 6 cameras

Samsung HMD Odyssey+ | Positional | Markerless, inside-out | 2 cameras

Valve Index | Positional | Marker-based, inside-out | 2 IR-emitting beacons

HoloLens 2 | Positional | Markerless, inside-out; hand and eye tracking | 2 IR cameras, 4 RGB cameras

Inside-out VR device tracking systems estimate the user's position based on data received from the environment. Marker-based tracking devices have mounted cameras that scan the surroundings to detect markers placed at fixed points of an interaction area around the user, processing and storing their spatial coordinates. In the markerless method, the position of the user is estimated algorithmically from camera and sensor input alone [hmd].

Table 5.4: Modern wireless haptic devices compared to "Wireality" [g8]

5.5 RGB-D Cameras [rgb]

Name | Type | Field of view | Frame rate | 3D resolution | Depth range

Microsoft Kinect 2.0 | Time of flight | 70 degrees horizontal (H), 60 degrees vertical (V) | 30 fps | 512x424 | 0.5 to 4.5 m

Asus Xtion Pro Live | Structured light | 58 degrees H, 45 degrees V | 30 fps | 640x480 | 0.8 to 3.5 m

Intel RealSense D435i | Active IR stereo (global shutter sensors and IMU) | 85.2 x 58 degrees (+/- 3 degrees) | 30 fps at max depth resolution; up to 90 fps at lower depth resolution | 1280x720 max | 0.105 to 10 m
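Depth frames from such cameras are typically back-projected into the 3D point clouds used for surface reconstruction (section 4.1). A minimal pinhole-model sketch is shown below; the intrinsics are illustrative values only roughly matching a 512x424 time-of-flight sensor.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop invalid (zero-depth) pixels

# Random stand-in depth frame; a real frame would come from the sensor SDK.
depth = np.random.uniform(0.5, 4.5, (424, 512))
cloud = depth_to_point_cloud(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
print(cloud.shape)
```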

5.6 Haptics

Haptic devices are used to provide realistic feedback in virtual and mixed reality environments in the context of interactive art displays and experiences. Recently, a soft texture database of textures recorded with a Phantom Omni was added to the OpenHaptics database [g1]. Wireless gloves with integrated finger tracking continue to be developed [g2]; in one implementation, the hand is tracked by an IR camera that detects three infrared LEDs attached to a 3D-printed base on the glove. Object elasticity has been explored through large or global deformation with an FEM-based approach [g3] and by other means [g6]. As well, a new concept of a distributed haptic display has been developed, in which a collaborative robot with a shape-display end-effector enables interaction with a virtual object of arbitrary shape complexity [g4]. A palette-type controller is another recently presented device [g5]. Magnetic fields have also been studied as haptic feedback devices in demonstrations of tactile interactions [g7]. Modern wireless haptic devices are compared to a novel approach that attaches the hand to worn strings ("Wireality") in table 5.4 [g8].

Figure 12: Wireality [g8]

6.1 Artistic Effects and Animation

Recent research innovations will increase the types of interactive art experiences available to the public. In this section, we present realistic imagery, projection, and graphics effects demonstrated in lab settings that have a high chance of adoption by artists. Finally, we discuss current techniques in procedural animation that employ machine learning.

6.2 Novel artistic effects

Artificial digital colorization of 3D real-world objects [80] has been developed for the purpose of using projection mapping to map realistic colours onto ancient artifacts. An archaeologist or restoration expert can utilize the proposed "proxy painting" method by painting onto a smaller 3D-printed model of the targeted artifact, which is tracked, rendered in real-time using Unreal Engine 4, and projection mapped onto the real target object. The method translates the paint into texture space and is available as an Unreal Engine 4 plugin. This implementation avoids the occluders that arise in implementations allowing a tracked device near the target's projection surface, replaces the need for haptic feedback, and captures 3D gestures from the paintbrush itself [81]. Previously, no method was applicable to complex-shaped three-dimensional surfaces, due to varied lighting and self-shadowing, without tracking a brush with six degrees of freedom.

Figure 13: The painting of the colored proxy object is captured per frame and projected onto the target object

A system enabling dynamic control of the projection area at high resolution from a static image has been proposed, with proper geometrical alignment, overcoming the trade-off between the angle of projected media and the resolution [82]. The research introduced a laser pointer tracked by a camera sensor and a high-speed optical axis controller composed of shift lenses and rotational mirrors, with a total system latency of 3 ms and 1000 fps image projection. In art, this concept could be developed to use eye tracking to interactively unveil artwork based on the gaze of a viewer, perhaps on complex geometries [video10].

Figure: Projection mosaicing system with camera sensor, projector and optical axis controller [82]
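As a sketch of how an artist might prototype the gaze-based unveiling idea in software (not the cited optical system), the following reveals an image only within a radius of the viewer's tracked gaze point. The gaze source, radius, and falloff are assumptions.

```python
import numpy as np

def unveil(artwork, gaze_xy, radius=80.0):
    """Return a frame where the artwork is visible only near the gaze.

    artwork: (H, W, 3) image; gaze_xy: (x, y) pixel from an eye tracker.
    A soft linear falloff avoids a hard edge around the reveal."""
    h, w = artwork.shape[:2]
    ys, xs = np.indices((h, w))
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    mask = np.clip(1.0 - dist / radius, 0.0, 1.0)[..., None]
    return (artwork * mask).astype(artwork.dtype)

# Reveal a white canvas around a gaze point at the frame center.
frame = unveil(np.full((480, 640, 3), 255, np.uint8), gaze_xy=(320, 240))
```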

"Paxel", a generic framework for modelling the interaction of projected light with a patterned surface, has applications for color-changing effects. In the experiment, 365 nm UV light was projected onto a surface inked with an RGB fluorescent pattern and onto a white surface. Specific colour changes are created. For example, in figure 14 a), highly saturated red and blue colors projected onto a white surface result in a red surface appearance, while on a patterned surface the red light is absorbed by the black ink and the surface appears blue. In b), a red color is projected onto red-absorbing colorants while a low-saturation blue is projected onto blue-reflecting colourants; on a white surface, the red remains and overtakes the blue, whereas the patterned surface absorbs the red and appears more blue. In c), projecting white light onto a colored pattern surface tricks the eye into averaging RGB intensities to form a color.

Figure 14: a) black-white pattern surface, b) colored pattern surface, c) monochrome-chrome projection
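The spatial-averaging effect in c) can be approximated numerically: the eye integrates the product of the projected light and each colorant cluster's reflectance over the pattern. A toy sketch with made-up reflectances and coverage fractions (none taken from the cited experiment):

```python
import numpy as np

# Made-up RGB reflectances of the pattern's colorant clusters and the
# fraction of surface area each cluster covers.
reflectance = np.array([[0.90, 0.10, 0.10],    # red-reflecting ink
                        [0.10, 0.10, 0.90],    # blue-reflecting ink
                        [0.05, 0.05, 0.05]])   # black ink absorbs most light
coverage = np.array([0.4, 0.4, 0.2])

def perceived_color(projected_rgb):
    """Eye-averaged color: per-cluster (light x reflectance), area-weighted."""
    per_cluster = projected_rgb * reflectance   # element-wise color mixing
    return coverage @ per_cluster               # spatial average over pattern

print(perceived_color(np.array([1.0, 1.0, 1.0])))   # white light on pattern
print(perceived_color(np.array([1.0, 0.0, 0.2])))   # saturated red + dim blue
```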

The novel color-change method was evaluated by the average error ΔE*00 between the mean and the measured color change predicted through the framework. Paxel can also be applied to the concept of steganography: for example, two pages put together can form an image that previously could not be seen. In this case, the encoding lies in which pattern the surface uses, how many colorants there are, and how many clusters; this information can be triggered or given only to certain participants. The color-changing effect can be used extensively in art exhibitions, such as illusion-like performance art, since UV light can be projected onto fog with high quality, unlike other methods. In the future, the work can be extended to different color spaces. Figure 15 shows explored practical applications of the framework.

Figure 15: Paxel applied for color-changing effect

Limitations of the framework exist, such as the resolution and the fact that the display only covers a limited color gamut in CIELAB colorspace. The framework also still needs to be optimized to process images in real-time for live content.

6.3 Neural rendering

The field of graphics rendering has been improved by neural network rendering methods in recent years, and a popular method, "DeepFakes", has been introduced, utilizing Generative Adversarial Networks (GANs). The most successful method for images is the conditional GAN [92]. It is most commonly used to swap the faces of people in videos and images, but has applications spanning as far as indoor floor plan generation, and it does not require extensive training data. This method can be used to reduce production costs for films: a studio can train a network to replace an actor's face in film footage the studio already owns with a target actor, creating a new video by "face-swapping" the old actor with a new one in an old movie, or vice versa, without the need to hire the actor. An amateur film made in this way has already been released [93]. An application drawn from a GAN approach could take the form of digital art museums such as "Borderless" enabling digital twins of museum visitors rendered in a style of their choice, or online platforms that render images of your choice in the style of a famous artist, as shown below.

Figure 16: DeepFake regeneration of a street in the style of Van Gogh’s starry night (top right) and other famous artists

This idea of face-swapping has recently been improved to preserve lighting and contrast after the swap and to improve inpainting [94]. Disney created this technique using variational autoencoders. An encoder distils a high-dimensional input, such as a source image, into a lower-dimensional vector space (the latent space), and a decoder reconstructs the source or target content from that vector. Usually this framework is trained end-to-end until consistency is reached. The latent space can be manipulated to affect the output, for example by interpolating between two inputs or generating new content. The method first detects a face and localizes facial landmarks; the image is normalized to 1024x1024 resolution and fed to the network, saving the output of the s-th decoder. Then the image's normalization is reversed, and the saved parameters are Poisson-blended with a compositing method. It is the most accurate face-swapping method known to date, and the first to achieve convincing face swaps on high-resolution (1024x1024) media, but it is limited in producing proper face-swaps for profile views and for certain expressions, such as squinting, which do not contain many facial landmarks; this can be solved by a smaller gap in the training data. Like all other comparable methods, it also does not change the face shape or handle people wearing glasses, and this remains a topic for further research in the field.

Figure 17: Disney’s face-swapping pipeline

Another application of neural rendering enables a character in a video game to be automatically and properly animated, for example, to run over complex terrain [95]. In interactive art, the method could be leveraged to allow users to create game characters that adventure through a virtual world. It can also be applied to improve the realism of movement in virtual environments.

Figure 18: The terrain animation system presented by [95]
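The core trick of the phase-functioned network [95] is that the network weights are themselves a cyclic function of the character's gait phase, blended from four control weight sets by Catmull-Rom interpolation. A minimal sketch of that weight blending, with random stand-in weights and illustrative sizes, is shown below.

```python
import numpy as np

rng = np.random.default_rng(1)
# Four control weight matrices spaced around the gait cycle
# (stand-ins for trained parameters).
controls = [rng.normal(size=(32, 16)) for _ in range(4)]

def phase_function(phase):
    """Cubic Catmull-Rom blend of the four control weight sets,
    cyclic in the gait phase (0..2*pi)."""
    p = 4 * phase / (2 * np.pi)     # position along the 4 control points
    k = int(p) % 4
    w = p - int(p)                  # local interpolation parameter
    y0, y1, y2, y3 = (controls[(k + i - 1) % 4] for i in range(4))
    return (y1
            + w * (0.5 * (y2 - y0))
            + w**2 * (y0 - 2.5 * y1 + 2 * y2 - 0.5 * y3)
            + w**3 * (1.5 * (y1 - y2) + 0.5 * (y3 - y0)))

def layer(x, phase):
    """One network layer whose weights depend on the character's phase."""
    return np.maximum(phase_function(phase) @ x, 0.0)   # ReLU activation

x = rng.normal(size=16)             # pose/terrain features at this frame
for phase in np.linspace(0, 2 * np.pi, 5):
    print(np.round(layer(x, phase)[:3], 3))
```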

A broad view of current innovations in neural rendering includes novel-view synthesis and free-viewpoint videos in which objects can be viewed from any angle, realistic relighting, reenactment of captured body and face data as representative digital avatars, and semantic image editing. Digital avatars have been created through machine learning techniques without manual modeling. The reader is referred to a state-of-the-art report that details this topic [96].

7. Conclusion

This state of the art review explored emerging applications of interactive art experiences and novel innovations in animation technology by outlining research from the past three years. The interpretation of art through technology will continue to grow as XR and projection mapping continue to expand into mainstream markets.

References

[1] Milgram, P., & Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information Systems, E77-D(12), 1321-1329.
[4] Jaehnig, J. (2020, December 13). Opera Beyond Creates Virtual Theater With Varjo And The FNOB. ARPost. https://arpost.co/2020/12/14/opera-beyond-virtual-theater-varjo-fnob/
[5] Wang, F. (2018). Case study: TeamLab Borderless Mori Building Digital Art Museum, Japan. Inavate APAC. https://www.inavateapac.com/case-studies/article/case-study-teamlab-borderless-mori-building-digital-art-museum-japan
[6] Millman, E. (2020, June 16). BTS just proved that paid livestreaming is here to stay. Rolling Stone. https://www.rollingstone.com/pro/news/bts-paid-livestream-bang-bang-con-1015696/
[7] Un « concert » de Travis Scott réunit 12 millions de joueurs sur Fortnite [A Travis Scott "concert" draws 12 million players on Fortnite]. (n.d.). La Presse. https://www.lapresse.ca/arts/musique/2020-04-24/un-concert-de-travis-scott-reunit-12-millions-de-joueurs-sur-fortnite
[8] XR Studios. (n.d.). Work. https://www.xrstudios.live/#Work
[9] BTS 'Love Yourself' world tour - Augmented reality stage. (2020, May 28). YouTube. https://youtu.be/V9Qb1OcIl80?t=19
[10] Introducing Messenger Rooms and more ways to connect when you're apart. (2020, July 29). About Facebook. https://about.fb.com/news/2020/04/introducing-messenger-rooms/
[11] PING PONG with real time tracking and projection mapping | #PanasonicCES #CES2020. (2020, March 18). Channel Panasonic. https://channel.panasonic.com/contents/27902/
[12] Cannes XR. (n.d.). Marché du Film. https://www.marchedufilm.com/fr/programs/cannes-xr/
[13] ElectriFly. (n.d.). https://electrifly.co/
[14] Asai, N. (n.d.). INORI (Prayer). Vimeo. https://vimeo.com/210599507
[17] Water screen. (n.d.). Aquatic Show International, water special effects. https://www.aquatic-show.com/en/water-effects/water-screen
[19] Graf, R., Benawri, P., Whitesall, A., Carichner, D., Li, Z., Nebeling, M., & Kim, H. (2019). iGYM: An Interactive Floor Projection System for Inclusive Exergame Environments. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (pp. 31-43). Association for Computing Machinery.
[21, 22] Mirai Museum aera | Works | WOW. (n.d.). https://www.w0w.co.jp/en/works/mirai_museum_aera
[23] Water.
[24] Oku, H., Nomura, M., Shibahara, K., & Obara, A. (2018). Edible Projection Mapping. In SIGGRAPH Asia 2018 Emerging Technologies. Association for Computing Machinery.
[25] Hindman, N., & Epstein, J. (2013, July 15). Sushi chef creates edible QR codes to end 'fish fraud' in California restaurants. Business Insider. https://www.businessinsider.com/sushi-with-qr-codes-2013-7
[26] Kiyokawa, M., Okuda, S., & Hashimoto, N. (2019). Stealth Projection: Visually Removing Projectors from Dynamic Projection Mapping. In SIGGRAPH Asia 2019 Posters. Association for Computing Machinery.
[27] Fassold, H., & Takacs, B. (2019). Towards Automatic Cinematography and Annotation for 360° Video. In Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video (pp. 157-166). Association for Computing Machinery.
[28] Popovici, I., & Vatavu, R.-D. (2019). Towards Visual Augmentation of the Television Watching Experience: Manifesto and Agenda. In Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video (pp. 199-204). Association for Computing Machinery.
[29] Ogle, J., Duer, Z., Hicks, D., & Fralin, S. (2020). If This Place Could Talk... First World War Tunnel Warfare through Haptic VR. In ACM SIGGRAPH 2020 Immersive Pavilion. Association for Computing Machinery.
[30] Salomon, M., & Martin, M. (2020). Long Road Home VR. In ACM SIGGRAPH 2020 Festival. Association for Computing Machinery.
[31] Niederhauser, M., Allsbrook, W., Zananiri, E., & Fitzgerald, J. (2020). Metamorphic: A Social VR Experience. In ACM SIGGRAPH 2020 Immersive Pavilion. Association for Computing Machinery.
[32] Yoo, K., Havel, R., & Patel, N. (2019). VR Minecraft for Art. In 25th ACM Symposium on Virtual Reality Software and Technology. Association for Computing Machinery.
[33] Andrade, R., & Shool, S. (2020). Pixeldust Studios Reptopia Magic Leap Experience. In ACM SIGGRAPH 2020 Immersive Pavilion. Association for Computing Machinery.
[34] Planck, M. (2020). Dr. Crumb's School for Disobedient Pets: VR's Next Step: Combining Escape Games, Role Playing, and Immersive Theater to Create a New Type of Social Entertainment - Live Hosted Adventures from the Comfort of Your Home. In ACM SIGGRAPH 2020 Immersive Pavilion. Association for Computing Machinery.
[35] Lang, B. (2017, December 13). Road to VR. https://www.roadtovr.com/rec-room-3m-user-created-levels-creator-monetization/
[36] Facebook Horizon. (n.d.). Oculus. https://www.oculus.com/facebook-horizon/
[37] Black, J., Bradshaw, J., Cokas, C., Ham, K., McNair, E., Rooney, B., & Swartz, J. (2020). ESPN VR Batting Cage. In ACM SIGGRAPH 2020 Immersive Pavilion. Association for Computing Machinery.
[38] Oculus Blog. (2020, September 1). Venice International Film Festival brings VR selection to Oculus. https://www.oculus.com/blog/venice-international-film-festival-brings-vr-selection-to-oculus/
[39] Cutler, L., Tucker, A., Schiewe, R., Fischer, J., Dirksen, N., & Darnell, E. (2020). Authoring Interactive VR Narratives on Baba Yaga and Bonfire. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks. Association for Computing Machinery.
[40] Ligthart, M., Neerincx, M., & Hindriks, K. (2020). Design Patterns for an Interactive Storytelling Robot to Support Children's Engagement and Agency. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 409-418). Association for Computing Machinery.
[41] Vilk, J., & Fitter, N. (2020). Comedians in Cafes Getting Data: Evaluating Timing and Adaptivity in Real-World Robot Comedy Performance. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 223-231). Association for Computing Machinery.
[42] Lady Gaga's halftime show drones. (2017, February). Wired. http://www.wired.com/2017/02/lady-gaga-halfime-show-drones/
[43] Tsuru Robotics. (2018, March 29). First in the World Graffiti Drone (Part 1). https://diydrones.com/profiles/blogs/first-in-the-world-graffitidrone-part-1
[44] Andrade, P. (2019). Mobile Social Hybrids and Drone Art. In Proceedings of the 9th International Conference on Digital and Interactive Arts. Association for Computing Machinery.
[46] Grundhöfer, A., & Iwai, D. (2018). Recent Advances in Projection Mapping Algorithms, Hardware and Applications. https://s3-us-west-1.amazonaws.com/disneyresearch/wp-content/uploads/20180406102342/Recent-Advances-in-Projection-Mapping-Algorithms-Hardware-and-Applications-Paper.pdf
[47] Zhang, Z. (1999). Flexible camera calibration by viewing a plane from unknown orientations. In International Conference on Computer Vision (ICCV) (pp. 666-673).
[48] Yang, L., Normand, J.-M., & Moreau, G. (2016). Practical and precise projector-camera calibration. In IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Merida, Mexico.
[49] Fleischmann, O., & Koch, R. (2016). Fast projector-camera calibration for interactive projection mapping. In 23rd International Conference on Pattern Recognition (ICPR) (pp. 3798-3803). IEEE.
[50] Resch, C., Naik, H., Keitler, P., Benkhardt, S., & Klinker, G. (2015). On-site semi-automatic calibration and registration of a projector-camera system using arbitrary objects with known geometry. IEEE Transactions on Visualization and Computer Graphics, 21(11), 1211-1220.
[51] Willi, S., & Grundhöfer, A. (2017). Robust geometric self-calibration of generic multi-projector camera systems. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). https://doi.org/10.1109/ismar.2017.21
[52] Asayama, H., Iwai, D., & Sato, K. (2018). Fabricating diminishable visual markers for geometric registration in projection mapping. IEEE Transactions on Visualization and Computer Graphics, 24(2), 1091-1102. doi:10.1109/TVCG.2017.2657634
[53] Siegl, C., Lange, V., Stamminger, M., Bauer, F., & Thies, J. (2017). FaceForge: Markerless non-rigid face multi-projection mapping. IEEE Transactions on Visualization and Computer Graphics, 23(11), 2440-2446. doi:10.1109/TVCG.2017.2734428
[54] Hashimoto, N., Koizumi, R., & Kobayashi, D. (2017). Dynamic projection mapping with a single IR camera. International Journal of Computer Games Technology, 2017, 4936285. doi:10.1155/2017/4936285
[55] Watanabe, Y., Kato, T., & Ishikawa, M. (2017). Extended dot cluster marker for high-speed 3D tracking in dynamic projection mapping. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 52-61). doi:10.1109/ISMAR.2017.22
[56] Tsukamoto, J., Iwai, D., & Kashima, K. (2015). Radiometric compensation for cooperative distributed multi-projection system through 2-DOF distributed control. IEEE Transactions on Visualization and Computer Graphics, 21(11), 1221-1229.
[57] Regan, M., & Miller, G. S. P. (2017). The problem of persistence with rotating displays. IEEE Transactions on Visualization and Computer Graphics, 23(4), 1295-1301. doi:10.1109/TVCG.2017.2656979
[58] Grundhöfer, A., & Iwai, D. (2018). Recent advances in projection mapping algorithms, hardware and applications. Computer Graphics Forum, 37(2), 653-675. https://doi.org/10.1111/cgf.13387
[59] Sun, K., Geng, Z., Meng, D., Xiao, B., Liu, D., Zhang, Z., & Wang, J. (2020). Bottom-Up Human Pose Estimation by Ranking Heatmap-Guided Adaptive Keypoint Estimates. https://arxiv.org/abs/2006.15480
[80, 81] Lange, V., Kurth, P., Keinert, B., Boss, M., Stamminger, M., & Bauer, F. (2020). Proxy Painting: Digital Colorization of Real-World Objects. Journal on Computing and Cultural Heritage, 13(3).
[82] Dynamic vision system: Tracking projection mosaicing. (n.d.). Ishikawa Group Laboratory. http://ishikawa-vision.org/mvf/TrackingProjectionMosaicing/index-e.html
[83] Peng, H.-L., & Watanabe, Y. (2020). High-Speed Human Arm Projection Mapping with Skin Deformation. In SIGGRAPH Asia 2020 Emerging Technologies. Association for Computing Machinery.
[90] The first multisensory VR mask to bring smell, mist, vibrations into the picture! | Yanko Design. (n.d.). msensory - Multisensory experimentation & resources. http://msensory.com/the-first-multisensory-vr-mask-to-bring-smell-mist-vibrations-into-the-picture-yanko-design/
[91] Feelreal. (n.d.). Kickstarter. https://www.kickstarter.com/projects/feelreal/feelreal
[93] Haysom, S. (2018, January 31). People are using face-swapping tech to add Nicolas Cage to random movies and what is 2018. Mashable. https://mashable.com/2018/01/31/nicolas-cage-face-swapping-deepfakes/
[94] Naruniec, J., Helminger, L., Schroers, C., & Weber, R. (2020). High-resolution neural face swapping for visual effects. Computer Graphics Forum, 39(4), 173-184. https://doi.org/10.1111/cgf.14062
[95] Holden, D., Komura, T., & Saito, J. (2017). Phase-functioned neural networks for character control. ACM Transactions on Graphics, 36(4), 1-13. https://doi.org/10.1145/3072959.3073663
[96] Tewari, A., Fried, O., Thies, J., Sitzmann, V., Lombardi, S., Sunkavalli, K., Martin-Brualla, R., Simon, T., Saragih, J., Nießner, M., Pandey, R., Fanello, S., Wetzstein, G., Zhu, J.-Y., Theobalt, C., Agrawala, M., Shechtman, E., Goldman, D., & Zollhöfer, M. (2020). State of the Art on Neural Rendering. Computer Graphics Forum, 39(2), 701-727. https://arxiv.org/pdf/2004.03805.pdf
[a7] HeavyM. https://heavym.net/en/
[a8] MadMapper 4.0. https://cdm.link/2020/09/madmapper-4-0-light-and-projection-mapping-tool/
[a9] Notch. https://www.notch.one/
[b] 300 drones display COVID-19 messaging in Korea. (2020, July). AV Magazine. https://www.avinteractive.com/news/covid-19/hundreds-drones-display-covid-19-messaging-13-07-2020/
[c] On absurd landscapes and habitable exoplanets. (2016, September 1). Ars Electronica Blog. https://ars.electronica.art/aeblog/en/2016/09/01/von-absurden-landschaften-und-habitablen-exoplaneten/
[c1] VR theme parks revolutionizing amusement & hospitality. (n.d.). University of Central Florida. https://www.ucf.edu/online/hospitality/news/vr-theme-parks-revolutionizing-amusement-hospitality/
[c2] Travis Scott's Fortnite concert: Astronomical live report. (2020, April 23). The Verge. https://www.theverge.com/2020/4/23/21233637/travis-scott-fortnite-concert-astronomical-live-report
[d] Özge Samanci, Gabriel Caniglia, Adam Snyder: Fiber Optic Ocean. (n.d.). ACM SIGGRAPH Art Show Archives. https://digitalartarchive.siggraph.org/artwork/ozge-samanci-gabriel-caniglia-adam-snyder-fiber-optic-ocean/
[e] Jaehnig, J. (2020, December 17). KHAITE and ROSE reunite for AR technology-enabled fashion promotion. ARPost. https://arpost.co/2020/12/18/khaite-rose-ar-technology-fashion-promotion/
[e] Volpe, J. (2020, October 22). Curious about what life is really like on the ISS? This is your chance. Mashable. https://mashable.com/article/felix-paul-space-explorers-iss-experience-vr/
[f] Hashimoto, N., Koizumi, R., & Kobayashi, D. (2017). Dynamic projection mapping with a single IR camera. International Journal of Computer Games Technology, 2017, 4936285. https://doi.org/10.1155/2017/4936285
[g] MIT Lab. (2019). Part 6: Media Co-creation with non-human systems. Collective Wisdom, Works in Progress. https://wip.mitpress.mit.edu/pub/collective-wisdom-part-6/release/1
[g1] Punpongsanon, P., Iwai, D., & Sato, K. (2015). SoftAR: Visually manipulating haptic softness perception in spatial augmented reality. IEEE Transactions on Visualization and Computer Graphics, 21(11), 1279-1288. https://doi.org/10.1109/tvcg.2015.2459792
[g2] Efraimidis, M., & Mania, K. (2020). Wireless Embedded System on a Glove for Hand and Tactile Feedback in 3D Environments. In SIGGRAPH Asia 2020 Posters. Association for Computing Machinery.
[g3] Abdulali, A., Atadjanov, I., Lee, S., & Jeon, S. (2019). Measurement-Based Hyper-Elastic Material Identification and Real-Time FEM Simulation for Haptic Rendering. In 25th ACM Symposium on Virtual Reality Software and Technology. Association for Computing Machinery.
[g4] Fedoseev, A., Chernyadev, N., & Tsetserukou, D. (2019). Development of MirrorShape: High Fidelity Large-Scale Shape Rendering Framework for Virtual Reality. In 25th ACM Symposium on Virtual Reality Software and Technology. Association for Computing Machinery. https://doi.org/10.1145/3359996.3365049
[g5] Degraen, D., Reindl, A., Makhsadov, A., Zenner, A., & Krüger, A. (2020). Envisioning Haptic Design for Immersive Virtual Environments. In Companion Publication of the 2020 ACM Designing Interactive Systems Conference (pp. 287-291). Association for Computing Machinery. https://doi.org/10.1145/3393914.3395870
[g6] Ryu, N., Lee, W., Kim, M., & Bianchi, A. (2020). ElaStick: A Handheld Variable Stiffness Display for Rendering Dynamic Haptic Response of Flexible Objects. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (pp. 1035-1045). Association for Computing Machinery.
[g7] Yasu, K. (2020). MagneLayer: Force Field Fabrication for Rapid Prototyping of Haptic Interactions. In ACM SIGGRAPH 2020 Labs. Association for Computing Machinery. https://doi.org/10.1145/3388763.3407761
[g8] Fang, C., Zhang, Y., Dworman, M., & Harrison, C. (2020). Wireality: Enabling Complex Tangible Geometries in Virtual Reality with Worn Multi-String Haptics. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-10). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376470
[hmd] Modern Virtual Reality Headsets (figures). (n.d.). ResearchGate. https://www.researchgate.net/publication/343340441_Modern_Virtual_Reality_Headsets/figures
[m] McIntosh, S. (2020, April 30). Will the fashion industry have to 'rethink its values'? BBC News. https://www.bbc.com/news/entertainment-arts-52394504
[n] Pjanic, P., Willi, S., & Grundhöfer, A. (2017). Geometric and photometric consistency in a mixed video and galvanoscopic scanning laser projection mapping system. IEEE Transactions on Visualization and Computer Graphics, 23(11), 2430-2439. https://doi.org/10.1109/tvcg.2017.2734598
[o] Spatial mapping overview. (n.d.). Stereolabs - Capture the World in 3D. https://www.stereolabs.com/docs/spatial-mapping/
[p] Yang, L., Qi, J., Xiao, J., & Yong, X. (2014). A literature review of UAV 3D path planning. In Proceedings of the 11th World Congress on Intelligent Control and Automation. https://doi.org/10.1109/wcica.2014.7053093
[q] Maurhofer, C., Cimen, G., Ryffel, M., Sumner, R. W., & Guay, M. (2018). AR costumes: Automatically augmenting watertight costumes from a single RGB image. In Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry. https://doi.org/10.1145/3284398.3284402
[r] PaintCopter: An autonomous UAV for spray painting on 3D surfaces. (2018, October 2). Disney Research. https://studios.disneyresearch.com/2018/10/02/paintcopter-an-autonomous-uav-for-spray-painting-on-3d-surfaces/
[s] McDonald, N. (2020). Crafting an Interactive Childhood Memory. In ACM SIGGRAPH 2020 Immersive Pavilion. Association for Computing Machinery.
[t] Pottery VR experience. (n.d.). Oculus. https://www.oculus.com/experiences/quest/1891638294265934/?locale=fr_FR
[u] Fazio, S., & Turner, J. (2020). Bringing Empty Rooms to Life for Casual Visitors Using an AR Adventure Game: Skullduggery at Old Government House. Journal on Computing and Cultural Heritage, 13(4).
[v] Goto, S. (2019). gravityZERO, an Installation Work for Virtual Environment. In Proceedings of the 6th International Conference on Movement and Computing. Association for Computing Machinery.
[y] Sargeant, B., Dwyer, J., & Mueller, F. (2020). Designing for Virtual Touch: A Real-Time Co-Created Online Art Experience. In Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play (pp. 129-133). Association for Computing Machinery.
[z] HoloLens 2 AR headset. (n.d.). YouTube. https://www.youtube.com/watch?v=uIHPPtPBgHk
[rgb] 3D camera survey. (n.d.). ROS-Industrial. https://rosindustrial.org/3d-camera-survey
[video 1] teamLab Borderless: Symbiotic. https://borderless.teamlab.art/ew/symbiotic/
[video 2] teamLab: Flowers Bloom. https://www.teamlab.art/ew/flowersbloom/
[video 3] https://youtu.be/oxpAzIF4-M4?t=112
[image 1] XR Studios. (2020, September 21). https://www.xrstudios.live/#BTS