Masaryk University Faculty of Informatics

Video Recording of Molecular Structures in Virtual Reality

Master’s Thesis

Vojtěch Brůža

Brno, Spring 2019

This is where a copy of the official signed thesis assignment and a copy of the Statement of an Author are located in the printed version of the document.

Declaration

Hereby I declare that this paper is my original authorial work, which I have worked out on my own. All sources, references, and literature used or excerpted during elaboration of this work are properly cited and listed in complete reference to the due source.

Vojtěch Brůža

Advisor: doc. RNDr. Barbora Kozlíková, Ph.D.


Acknowledgements

Foremost, I would like to thank my thesis advisor doc. RNDr. Barbora Kozlíková, Ph.D. for her support and insight. I really appreciate her trust, invaluable guidance and time. I am very grateful for her belief in my capabilities and success. I would not have been able to accomplish these results without her. My thanks also belong to my colleagues from the VisIt Lab and HCI Laboratories. They were always willing to offer help and advice. I especially want to thank Jan Mičan, a biochemist from the Loschmidt Laboratories, for his ideas, suggestions, collaboration and enthusiasm. He was always ready for cooperation and willing to assist. Finally, I also thank my wife for her comfort and encouragement. I am extremely thankful for the support she gave me during this challenging time. I would also like to thank my family and closest friends for their support, patience and for motivating me.

Abstract

The goal of this work was to implement a virtual environment for interactive video recording of a scene containing a molecular visual representation. We have designed a solution for intuitive manipulation with the molecular representation and a recording system. Using the Unity engine, we have developed a prototype application for the HTC Vive and Oculus Rift devices. The application can load a molecule from the online PDB database and display it using a standard molecular representation technique. We have integrated elements essential for video recording. Using the provided solution, structural biology experts are able to adjust the camera trajectory and generate a resulting video, which can be stored in the MP4 format and used for communication and presentation purposes.

Keywords

computer graphics, Unity, virtual reality, visualization, 3D data, storytelling, animation


Contents

Introduction

1 Background and Related Work
  1.1 Molecular Data
  1.2 History of Molecular Visualization
  1.3 Molecular Animations
    1.3.1 Molecular Dynamics
    1.3.2 User-driven Animation
    1.3.3 Automating the Creation of Animations
  1.4 Molecular Visualization in Virtual Reality
    1.4.1 Virtual Reality Tools

2 Design & Requirements
  2.1 Requirements
  2.2 Protein Download
    2.2.1 Input Data
  2.3 Protein Visualization
    2.3.1 Color Modes
  2.4 Interaction with Protein Representation
  2.5 Real-Time Video Recording
  2.6 Camera Manipulation
  2.7 Camera Trajectory Planning
  2.8 Video Format
  2.9 Scene Design

3 Implementation
  3.1 Used Technology
    3.1.1 Unity
    3.1.2 Hardware Interfaces
  3.2 The Main Scene
  3.3 Protein Visualization
    3.3.1 Molecule Rendering
  3.4 Interactions and Control
    3.4.1 Interaction Tools
    3.4.2 Interactable Objects
    3.4.3 Virtual GUI
    3.4.4 Voice Control
    3.4.5 Movement
  3.5 Real-time Capture
    3.5.1 Unity Recorder
    3.5.2 RockVR
  3.6 Camera Trajectory Planning

4 Results & Conclusion
  4.1 Results
  4.2 Conclusion & Future Work

Bibliography

List of Figures

1.1 The illustration depicts the levels of organization of protein structures [21].
1.2 The first system for the interactive display of molecular structures, devised at MIT in the mid-1960s [25]. Detail of the CRT screen, with the globe that controls the direction and speed of the wireframe image rotation.
2.1 The structure of deoxy human haemoglobin [75], represented by the space-filling model (created by QuteMol [76]).
2.2 Molecule displayed in three different color modes.
2.3 The user avatar pointing at a molecule while watching the video preview on both screens simultaneously. The small preview screen is displayed above the camera. The large screen is at a fixed position in the scene.
2.4 An example of the animation curve used to enhance the camera movement interpolation between two consecutive animation keyframes. It corresponds to a smooth animation of the speed, with speeding up and slowing down at the beginning and end of the trajectory, respectively.
2.5 Representation of the user avatar, which consists of a headset visualization and two hands that represent the controllers.
3.1 Both supported head-mounted display devices.
3.2 HTC Vive controller button layout [82].
3.3 Illustration of the whole scene of the prototype application. The scene contains two loaded molecules, the camera, the large preview screen, the UI keyboard, the shadow cameras (described in Section 3.4.2), the user avatar (also visible through the camera preview on the large screen), and the surrounding room.
3.4 Avatar of the user in our virtual environment, operating with the protein (PDB ID 1AON [85]). The molecule is colored according to individual chains.
3.5 Virtual representation of the hand tools with pointers attached to the index fingers of both hands.
3.6 User pointing at the camera, which is capturing a selected part of the protein. Both the camera and the selected part are highlighted. The boxes floating around the camera are used to control the camera recording system. The red and blue buttons are related to the recording system and the other two buttons to the trajectory planning.
3.7 The camera and its control elements floating next to it are also attached to the hand. The recording state indicators are displayed when the recording is active.
3.8 The shadow camera with one of the control buttons used for spawning new waypoints.
3.9 The shadow camera representing a single keyframe is grabbed. The display shows the preview of the video in this keyframe.
3.10 The keyboard being used for importing a new PDB file into the application.
3.11 The camera following the trajectory of three waypoints.

Introduction

Computer science has played an important role in biology and chemistry for decades. Computational analysis and visualization techniques enable biologists and biochemists to save time and resources. Instead of performing thousands of expensive laboratory experiments, in-silico simulations and analyses can significantly reduce the number of experiments by filtering out infeasible options. Visualization is one of the key fields for biology, since it helps to explore data sets and communicate hypotheses and findings to others [1]. In general, visualization can aid in three functions that are essential components of the scientific enterprise: synthesis (the process of creating a model), analysis (the examination and exploration of either data or models), and communication (sharing and presenting information) [2]. This thesis is focused mainly on the third function, the communication of information through visualization. One of the main foci of structural biology is visualization at the nanoscale. Nanoscale visualization represents an interesting domain because the objects of study (for example molecules, which are smaller than the wavelength of light) are invisible to the naked eye. The behavior of molecules is governed by physical forces and interactions significantly different from those that we encounter during our day-to-day experience. In this domain, effective models and visualizations are vital to provide the insight required to make research progress [3]. Using techniques such as X-ray crystallography, nuclear magnetic resonance (NMR) spectroscopy, and cryo-electron microscopy [4], biostructural data from experimental approaches are being generated at accelerating rates. Today, there are over 150,000 macromolecules stored in the Protein Data Bank [5]. Significant progress in displaying their structure, properties, and dynamic behavior has been made in the last decades [6], utilizing progress in the fields of computer graphics, visualization, and human-computer interaction. Nowadays, we carry high-definition color graphics displays in our pockets. Computational power and graphics processing units (GPUs) now enable real-time interactive display of billions of atoms and millions of proteins.

Utilizing this growing arsenal of biophysical and visualization techniques, the number of visualization programs and their functionality is growing. As already mentioned, molecules are too small to be analyzed and observed by the naked eye. This problem was initially solved by picturing the molecules as sets of spheres, using manual drawing. Using pictures to communicate ideas is nothing new. Sometimes the work of molecular biologists, described just by technical language, is incomprehensible to people from other fields until they see a picture. “A picture is worth a thousand words” [7]. David Goodsell [8, 9] is a structural biologist who paints watercolors of viruses and cells with exacting scientific specifications [10]. His work of handmade paintings representing precise scientific data inspires many other scientists who work in the molecular visualization field [11]. Most molecular animators consider him to be the father of the field in terms of thinking about molecular visualization in a scientifically accurate way [12]. Animation is an important part of visualization, especially useful for communicating dynamic processes, which are crucial in biology. Videos help to provide insight into complicated datasets, so the users can explore and understand 3D structures better than from static images alone. With all the information we have about molecular movement and the dynamic environment in the cell, it is much more efficient to convey this information via an animation. It can also be accompanied by speech or sound to achieve the best storytelling result. Storytelling is a powerful tool to communicate ideas and thoughts. This is true even for structural biology, where there are many ways to achieve attractive visual storytelling. Simple drawings are sometimes the only way biologists communicate their ideas about the processes, because creating appealing and comprehensible animations is a very complex task. It is a separate field of study which is not accessible to many biologists and can take months or years to master. Creating a single animation can also take months, even for professionals. These are the main limitations which prevent researchers from easily using this form of communication of their research outcomes. Molecular animators, such as Drew Berry [13] and Janet Iwasa [14], use modeling tools for creating molecular animations to communicate their ideas and discoveries. Tools like Molecular Maya [15], BioBlender [16], and others can be used to create such animations. This approach includes many tasks which can be partially automated.

CellVIEW [11] is a tool which can reduce the manual work of molecular animators by automating some of the processes. The time to create one animation video using this application can be shortened from months to weeks. Although the conventional 2D display technique is a common practice for displaying complex structures, using a stereoscopic 3D display makes it possible to better understand the layout of such structures [17]. Examining a dataset using a large immersive display can further enhance interpretation and engagement. The use of immersive 3D display systems can aid researchers in analyzing new or existing datasets, communicating ideas and concepts, and showcasing findings to the wider community [18]. There are many tools that can produce stereoscopic 3D images. The existing tools use different methods to achieve stereoscopy, but the aim is a near-3D effect for the end user. These attempts at providing 3D perception lack immersion into a different environment. Virtual reality is the key player in addressing this problem. It helps to introduce a level of human-computer interaction that was never possible before [19]. Nevertheless, to the best of our knowledge, none of the available molecular animation tools has native support for VR. The creative process can be more engaging and intuitive in VR, because most of the interactions match how people interact in the real world. By combining the immersion of VR, intuitive interactions, and storytelling techniques, a new level of storytelling is possible. The users can be fully immersed in the story because they can be an integral part of it and key players in the molecular environment. To tackle the previously mentioned issues, the main aim of the thesis is to design and implement a virtual environment that allows the user to interactively create video sequences of molecular visual representations in virtual reality. The exported video can be used by teachers, students, and researchers in biochemistry for presentation and education purposes. The tool should be intuitive and easy to use. People with minimal experience in animating should still be able to use it. They should spend minimal time on the technical setup and on learning the software user interface. This thesis contains the description of the design of such software and supplementary and background information for the implementation.


1 Background and Related Work

Building the conceptual and physical models that help to organize and make sense of our perception and experience in the natural world is a fundamental human instinct [3]. The practice of creating and manipulating models has a rich history in many domains, including the physical sciences, social sciences, engineering, medicine, design, architecture, and others. Visualizing molecular nanoscale models plays an important role namely in drug design and protein engineering. In this chapter, we briefly describe the background and related work of the fields of molecular visualization, animation, and virtual reality, as these are the most relevant to our proposed solution.

1.1 Molecular Data

Proteins are macromolecules with various functions and complex structure [20, 6] (see Figure 1.1). They consist of one or more chains of amino acids. All the amino acids in one chain are connected via peptide bonds. The type and order of the amino acids in the chain define the protein's primary structure. When the amino acids are combined to form a peptide, certain elements are removed from them, and what remains of each amino acid is called an amino-acid residue. This chain of amino acids folds into a configuration which is energetically stable. This stability is affected by intramolecular interactions, such as hydrogen bonds. The correct folding of the chain is important for the protein function. The folding introduces certain patterns to the protein chain (such as α-helix and β-sheet), called the secondary structure. The 3D arrangement of the secondary structure of the protein chain is called the tertiary structure. Two or more folded chains can form a functional complex called the quaternary structure. The terms tertiary and quaternary structure are often interchanged and used simply for the protein 3D structure. Nucleic acids are biomolecules such as deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) [6]. Nucleic acids are formed from nucleotides, which consist of a nucleobase, a sugar, and a phosphate group.


Figure 1.1: The illustration depicts the levels of organization of protein structures [21].

The sugar (deoxyribose or ribose) is the main difference between DNA and RNA. Moreover, DNA is formed from different nucleobases (adenine, thymine, guanine, and cytosine) than RNA (thymine is replaced by uracil). Two single DNA strands usually form the characteristic double helix, while RNA is single-stranded and typically forms very complex structures. DNA is situated in the nucleus of the cell, storing its genetic information.

1.2 History of Molecular Visualization

Physical models were the earliest interactive three-dimensional (3D) molecular visualization tools. Examples such as the CPK (Corey-Pauling-Koltun) [22] and Dreiding [23] models built upon earlier work going back to the mid-19th century, when chemists built wooden ball-and-stick or wire models to demonstrate molecular properties. In 1953, Watson and Crick were able to explain Franklin's fiber diffraction data and synthesize the brass model of the DNA structure that revealed its genetic function [2]. Watson and Crick's model ushered in a new era of discovery in molecular biology. The model and the discoveries it enabled form the foundations for much of the present cutting-edge research in biology and biomedicine [24]. In the 1960s, computer technology became a critical catalyst in crystallography and structural molecular biology. Initially, the computational power was mostly dedicated to data analysis, which then helped to construct the physical models, representing for example the electron density [2]. The rise of computer graphics gave birth to many applications with interactive graphical user interfaces (see Figure 1.2).


Figure 1.2: The first system for the interactive display of molecular structures, devised at MIT in the mid-1960s [25]. Detail of the CRT screen, with the globe that controls the direction and speed of the wireframe image rotation.

While in the early days of computer-based molecular graphics, crystallographers and other molecular scientists developed their own software to visualize, explore, and analyze the structures they were solving, today computer graphics and interactive techniques are being developed and utilized by many diverse communities [2].

1.3 Molecular Animations

As already mentioned, visualization definitely helps with perception, and graphic representation enhances comprehension and learning [26, 27, 28]. Graphics have been shown to assist learning by allowing for dual coding when combined with text, and by being aesthetically appealing and motivating [26]. Dynamic visualization drives this effort even further. Animation is useful not only for education purposes [1, 29] but also for conveying information about research

topics among scientists. Thanks to this, the target audience is able to see information that would not fit into static images. For example, without animation it is very hard to convey information about the dynamic behavior of objects. Understanding protein dynamic behavior and interaction with other molecules is essential for biochemists and molecular biologists. Analysis and understanding of these phenomena help drug designers and protein engineers to design the functions and properties of drugs or proteins [30]. Animation of molecular motion in biology can be grasped from two points of view. The first is molecular dynamics (MD). This motion is similar to the Brownian motion [31]. It is derived mostly from the forces between atoms and their potential energies [32]. The second possible meaning of animation is connected to manually animating the movements of molecules or the camera. This user-driven motion enables the creation of animations where the animator has full control over the movements of molecules. It includes manually animating protein movement and interactions. The camera movement in the environment where the molecule is situated is also manually animated.

1.3.1 Molecular Dynamics

Molecular dynamics (MD) is one of the most commonly used and powerful computational methods, enabling the simulation and capture of the physical movements of molecules and their atoms. It mostly utilizes interatomic potentials or molecular mechanics force fields to calculate forces between atoms and their potential energies [30]. MD simulation output is generated automatically; during the animation computation there is no user input, as input can be provided only before the simulation. Simulations, such as the movement of a ligand to the protein active site, can be created using this computationally demanding approach. Over the last few decades, many tools for loading and displaying molecular dynamics movements have appeared. Early solutions focused mainly on animation of the movements, making it a standard representation in widely used tools, such as VMD [32], PyMOL [33], Chimera [34], or YASARA [35]. Using tools such as Amber [36], Gromacs [37, 38], or CHARMM [39, 40], MD simulation data can be generated. The output simulation frames can then be used, for example, in the PyMOL, VMD, or CAVER Analyst [41] visualization tools. PyMOL can also be


used to generate MD simulations with a plugin using Gromacs tools as the back-end [42, 43], making it a powerful tool for both automatic and user-driven molecular animations.

1.3.2 User-driven Animation

MD movements happen on the order of picoseconds. However, molecules can also move around in space and interact with other molecules, and many biologically important processes happen in timespans ranging from microseconds to seconds [44]. This movement is also very important because, from its observation and comprehension, domain experts can deduce the protein function. To achieve this movement, where molecules interact with other molecules and move in space, animators must often use tools where they manually create the animation. There are some complex processes within large molecular scenes which cannot be captured or simulated, and they have to be manually generated using individual domain knowledge and experience. These tools are usually used for moving proteins or their parts around the scene and for navigating the camera. To animate the movement of the camera or proteins in the scene, the user often has to define animation keyframes, which are then interpolated to create smooth camera movement. There are two basic movement types in a scene to achieve storytelling animation. Either the objects in the scene are static and only the camera is moving, or the camera does not move and the proteins are transitioning from one conformation to another. Of course, both these approaches can be combined to achieve the best animation results. In this thesis, we work predominantly with the camera movement animation. Animation tools commonly used by molecular animators are either directly devoted to molecular visualization, like PyMOL [33] or QuteMol [45], or their main purpose is modeling and animation, like Blender [46] or Maya [47]. By installing plugins like BioBlender [16] or Molecular Maya (mMaya) [15], which add molecular display capabilities to the latter tools, one can achieve professional molecular animations. PyMOL [33] is a powerful and comprehensive open-source molecular visualization product for rendering and animating 3D molecular structures.

Movie-making in PyMOL is mostly done via command-line commands, but PyMOL's new graphical user interface for movie-making allows the users to create molecular animations more conveniently. Objects and cameras can be moved around a scene simultaneously or independently of each other to create a movie. PyMOL also ships with RigiMOL [48], an animation tool for molecular visualization. RigiMOL is a molecular morphing program to create trajectories from a starting conformation to an ending one. Since PyMOL version 1.6, standard morphing operations are available directly from the PyMOL user interface. RigiMOL includes a number of preprogrammed menus that allow the users to easily create a molecular morph between two input structures and then export the data for sharing. VMD [32] is a widely used graphics program designed for the display and analysis of proteins and nucleic acids. VMD can simultaneously display any number of molecules. It uses a wide variety of rendering styles and coloring methods. It provides a complete graphical user interface for program control, as well as a text interface with built-in scripting support. The application has the ability to animate MD simulation trajectories, which can be imported either from a file or from a direct connection to a running simulation. It is a freely available open-source program written in C++. YASARA [35] is a program which has been under development since 1993 and is used for displaying, modeling, and simulating molecular structures. It supports many platforms, such as Windows, Linux, macOS, and Android. The authors offer a version with complete hardware (headset, mouse, and keyboard) for interactive molecular modeling and MD in virtual reality. The basic YASARA View version is free, but the other versions used for animation, modeling, and other features, including VR support, are paid. Although it offers a VR version, its use is very limited, mainly due to the need to use a mouse and keyboard to control the application while wearing the head-mounted display (HMD). Most of the advanced tools are difficult for biochemists to use because operating them is too complicated and sometimes requires at least basic programming knowledge. Molecular Flipbook [49] is a tool for biochemists created by Janet Iwasa and her team. It is very easy to use, even by researchers who lack the time to learn advanced modeling techniques. The project goal is to enable biologists to become


animators and to change the way that biologists create, visualize, and share molecular models. It can import PDB files. The user can easily create a keyframe animation of a protein along with user-defined markers and use a simple shader tool to also animate the colors of the protein. The output animation can then be easily exported and saved as a video. Molecular Maya (mMaya) [15] is a free plugin for Autodesk Maya that lets the users import, model, and animate molecular structures. It leverages the power and flexibility of Maya while offering tools specialized for the challenges of molecular modeling and animation. mMaya can be enhanced by a series of 'kits' that expand its functionality. For example, mMaya's Modeling kit enables intuitive creation of structurally accurate and simulation-ready macromolecular models for incomplete 3D structures from databases such as the PDB [50]. Many professional animators are using this software to create animations [51]. BioBlender [16] is a software package built on the open-source 3D modeling software Blender. With BioBlender, the users can import and visualize PDB files in the 3D space, displaying their surface and its properties, such as electrostatic potential (EP) and hydropathy (MLP), and elaborating protein movements and dynamics. For protein motion, BioBlender calculates intermediates between two conformations using the Blender Game Engine. The program gives as output both the intermediate PDB files and the rendered images as a movie.

1.3.3 Automating the Creation of Animations

Nowadays, there are more and more scientists working towards automatic visualization tools. For example, Le Muzic et al. [52] aim to improve visual communication in the field of biochemistry by generating illustrative visualizations of molecular reactions involved in biochemical processes. Computational biology provides the description of structural and procedural models replicating the function of biological processes. They suggest merging these two data sources and automatically producing visualizations communicating both the structure and function of the molecular machinery. They use agent-based simulations as the means of representing the dynamics of metabolic networks. In their system, passive agents are unable to start

reactions autonomously; they can only receive reaction orders from an omniscient intelligence (OI), which controls their behavior. The OI is responsible for triggering reactions based on the simulation of real biological processes. The user may set the focus on any molecule shown in the scene at any time. Then the camera prioritizes and starts following the actor. They can build stories from chains of reactions. After one reaction of the focused element is completed, the focus is automatically shifted to one of the products. Le Muzic et al. have also invented a technique called Illustrative timelapse [53]. Their goal is to formalize the illustration techniques used by artists to facilitate the automatic creation of visualizations generated from mesoscale particle-based molecular simulations. The technique is used for visual exploration of complex biochemical processes in dynamic environments, where the Brownian motion makes it impossible to perceive the required information. It includes seamless temporal zooming to observe phenomena at different temporal resolutions by skipping certain frames. Secondly, it uses visual abstraction of molecular trajectories to ensure that the observers are able to visually follow the main actors with smooth pursuit eye movements. This is achieved by reducing the speed of the particles so that the per-molecule displacement per frame is significantly below a certain threshold. The technique also increases the visual focus on events of interest (mostly molecular reactions) by color highlighting and by prolonging their duration to make them stand out. Lastly, a lens effect is used to preserve a realistic representation of the environment in the context. Through MD simulations, protein engineers can study dynamic protein behavior, for instance, how tunnels form over time and how ligands move through these tunnels to and from the protein active site. However, these animations are rarely watched by the domain experts, because the visual clutter is too high and the temporal resolution is often not suitable. Therefore, Byška et al. [30] introduced and evaluated mechanisms to link 3D animations with time-dependent MD simulation statistical measurements to support efficient navigation to events of potential importance. They worked on automatic adaptation of the 3D animation temporal resolution and abstraction of the mechanisms under observation.


Although certain tools can significantly improve the process of creating animations through automation, none of these tools has the ability to put the user inside the environment that is being animated. If designed properly, being a part of the environment gives the user better awareness of the animated keyframes and their focus point. It is easier to actually think in 3D when the perceived environment is three-dimensional. This way, the camera can be easily manipulated around the scene and the user can fine-tune its position. There is also direct visual feedback, as the video can be previewed inside the VR scene.

1.4 Molecular Visualization in Virtual Reality

Interactive molecular visualization is one of the oldest branches of data visualization. As mentioned before, protein models are an important part of understanding the structure. Although computer visualizations of proteins are not the same as physical models, they are definitely a game-changer in examining protein structures in 3D. The field of structural biology heavily relies on computerized geometric and visual representations of three-dimensional, complex, large, and time-varying molecular structures. There are many different techniques to make the visualization as pleasing and informative as possible [6, 54]. Although 2D displays are conventionally used, interactive 3D visualizations aid the exploration of protein structure. Advances in interaction design and the reduced costs associated with manufacturing hardware help to increase the rate of development of new technologies used for human-computer interaction. One such example is haptic feedback. The integration of haptic feedback has shown a potential benefit in helping students to better understand the complex intermolecular forces. MolDock [55] is one such visuohaptic (combining visual and haptic) system that allows the user to manipulate and dock a ligand into a protein binding site. Other examples include motion sensors, 3D printers, virtual reality head-mounted displays, CAVEs (projection-based VR displays), and others. Among the most popular technologies nowadays are VR head-mounted displays (HMDs) such as the HTC Vive [56] or Oculus Rift [57]. Although VR has many advantages, it can also be an enemy

when used improperly. Users can, for example, experience a symptom complex known as visual-vestibular mismatch (also called virtual-reality sickness) [58], which happens due to a mismatch between visual and vestibular perception. Usually this occurs when the user is not in control of the movements of the camera in the virtual environment.

1.4.1 Virtual Reality Tools

There are many tools that make viewing protein structures possible, such as PyMOL [33], VMD [32], CAVER Analyst [41], RasMol [59], UCSF Chimera [34], and ISOLDE [60]. To some degree, each of these tools can produce stereoscopic 3D images, be it through passive 3D (using chromatic distortion glasses), active 3D (with shutter glasses synchronized to the image-displaying device), or autostereoscopic 3D (no headgear required) [17]. Despite the fact that these methods differ, the aim is a near-3D effect for the end user. Providing more depth cues for the depth perception of the user is useful for analysis of the 3D structure [45]. But these attempts to provide the users with 3D perception lack immersion into the molecular environment. VR is the key player in addressing this problem of immersion. It helps to introduce a level of human-computer interaction that has never been possible before [19]. The year 2016 was labeled as the year of Virtual Reality [61]. Since then, many applications that allow users to see and touch molecules in VR have appeared [2, 3, 18, 19, 62, 63, 64, 65]. Virtual reality is gaining popularity as a medium for immersing students. While VR is not a new technology, education- and biology-targeted applications are relatively new, and the benefits have yet to be borne out in the literature [1]. MolecularRift [66, 1] is an example that incorporates a molecular viewer within the VR environment for the purpose of advancing drug design. Controlled by a gesture-recognition motion sensor (the MS Kinect v2 gaming sensor), this program eliminates the need for standard input devices and controllers. RealityConvert [62] is a tool to convert static protein structures to standard formats like obj and mtl. Using this tool, one can easily make a PDB file representation ready to use in VR applications, which are usually able to easily import models encoded in this format. Protein dynamics visualization in VR can be a powerful tool in the hands of biochemists. A tool developed by Ratamero et al. [19]

enables easy import of dynamics into a VR environment created in the Unity game engine [67]. However, proteins must be imported as obj files, which is a standard format that can be easily produced by standard molecular visualization tools, like VMD. On the other hand, it is a step that must be done outside the environment, in the preprocessing phase. It can take some time to produce such data and then correctly import them. The dynamic molecular motion is achieved by showing the model in consecutive frames, in the order they were produced by VMD. This approach is very easy and allows people who are not experienced in programming to display molecules in VR from scratch. The work of O'Connor et al. [3] introduces a multiuser VR framework for interaction with 3D animated molecules. They have built an environment which can display the molecular dynamics of a protein, and multiple users can join simultaneously. Cloud computations are run separately from the application to perform the demanding MD simulation computations. They conducted a survey, which shows that some tasks, like tying a knot on a molecule, can be achieved much faster in VR than using a standard 2D display. They state that this framework should accelerate progress in nanoscale molecular engineering areas, including conformational mapping, drug development, synthetic biology, and catalyst design. Moreover, their findings highlight the potential of VR in scientific domains where three-dimensional dynamics matter, spanning research and education. Goddard et al. [68] discuss virtual reality molecular rendering. Virtual reality has certain advantages and disadvantages in comparison with conventional displays. Typing in VR is often cumbersome, but manipulating, moving, scaling, and examining the molecule is much simpler. They mention several existing applications for VR modeling. The most prominent of those are AltPDB VR [69] and ChimeraX [65, 68, 69]. AltPDB VR [69], created by the AltspaceVR Project, is an application where users are situated in a shared VR space. Multiple people can join each other to view and discuss proteins. The application is implemented in JavaScript, and the available demo is customized to specifically look at protein components of the immune system. The application needs a PDB file along with obj/mtl files to load proteins. The files in these formats can again be generated using VMD [32]. ChimeraX [65, 68, 69] also supports a virtual reality mode for systems

enables to easily import dynamics into the VR environment created in the Unity game engine [67]. However, proteins must be imported as obj files, which is a standard format that can be easily produced by standard molecular visualization tools, like VMD. On the other hand, it is a step that must be done outside the environment, in the preprocessing phase. It can take some time to produce such data and then correctly import them. The dynamic molecular motion is achieved by showing the model in consequential frames in the way they were produced by VMD. This approach is very easy and can be used by people who are not experienced in programming to be able to display molecules in VR from scratch. Work of O’Connor et al. [3] introduces a multiuser VR framework for interaction with 3D animated molecules. They have built an envi- ronment which can display molecular dynamics of a protein and more users can join simultaneously. Cloud computations are run separately from the application to perform extreme MD simulation computations. They conducted a survey, which shows that some tasks, like tying a knot on a molecule, can be achieved much faster in VR than using a standard 2D display. They state that this framework should accel- erate progress in nanoscale molecular engineering areas, including conformational mapping, drug development, synthetic biology, and catalyst design. Moreover, their findings highlight the potential of VR in scientific domains where three-dimensional dynamics matter, spanning research and education. Goddard at al. [68] discuss virtual reality molecular rendering. Vir- tual reality has certain advantages and disadvantages in comparison with the conventional displays. Typing in VR is often cumbersome but manipulating, moving, scaling, and examining the molecule is much simpler. They mention several existing applications for VR modeling. Most prominent of those are the AltPDB VR [69] and ChimeraX [65, 68, 69]. The AltPDB VR [69] created by the AltspaceVR Project is an appli- cation where users are situated in a shared VR-space. Multiple people can join each other to view and discuss proteins. The application is implemented in JavaScript and the available demo is customized to specifically look at protein components of the immune system. The application needs a PDB file along with obj/mtl files to load proteins. The files in these formats can again be generated using VMD [32]. ChimeraX [65, 68, 69] also supports virtual reality mode for systems

15 1. Background and Related Work supported by SteamVR (Oculus Rift, HTC Vive, Samsung Odyssey). It allows the display and analysis of structures and density maps in multi-person VR sessions where the users are represented by cone hands and 2D picture head as their avatar. In addition to interactive rendering in virtual reality, ChimeraX can record 360°movies for play- back on VR headsets. Nevertheless, the movie itself cannot be recorded directly in VR, but instead it can be used as an alternative to display and explore a scene that is too complex to render interactively. How- ever, as stated on their webpage, ChimeraX VR capabilities have many shortcomings. A few years of development effort will be needed to make these features work well, and only a small fraction of that work has been done until now. Immersive storytelling educational VR application Journey to the centre of the cell [64] puts the user inside a ship floating through blood veins and eventually exploring a human cell and its core. The whole experience is accompanied by a voice explaining what the user sees. The model of the cell was built from serial block-face scanning electron microscope data using imaging techniques. They state that students using this application improved their understanding of cellular pro- cesses. Nanome [70] is a commercial tool for atomic, molecular, and pro- tein visualization in Virtual Reality which appeared also on Steam and Oculus stores on August 29, 2018. Wearing a HMD, it can be used to collaborate in real time and to interact with proteins. The proteins can be downloaded from many online databases, including PDB, or up- loaded directly from a hard drive. It can handle many of common tasks, such as grabbing, rotating, or enlarging the molecule and measuring distances or angles between atoms. You can also mutate through the most common amino acids and rotamer conformations or minimize the structure using force fields to do some real time optimization for molecular building. To our best knowledge, there is no tool that can be used for video shooting of molecules in VR available. Therefore, we would like to explore this area of video shooting in VR as it can be an interactive, useful and engaging way to create video sequences.

2 Design & Requirements

During the design process, we have to take the background and requirements of the user into consideration. In this way, we can design an intuitive environment and interactions which naturally address the needs of the target group. To achieve this, we take full advantage of VR and design the interactions differently from an application designed for conventional 2D displays. Our application is designed primarily for drug designers, structural biologists, protein engineers, and biochemists. Our tool should help them communicate their work to others and assist them in educating and engaging students. It is meant to be immersive and easy to use. To this end, we want to design and implement an intuitive prototype application for video recording of molecules in VR. This chapter formalizes the requirements and describes the high-level design of the application with respect to the specified requirements.

2.1 Requirements

Before the design and implementation process, we discussed their actual needs with the protein engineers and together compiled a set of initial requirements for the system. We realized that there are many tools and techniques to display various molecular properties (e.g., cellVIEW [11] and others, see Chapter 1), and some of them already use VR (see Section 1.4.1). To avoid duplicating these tools, the priority of requirements was shaped accordingly. Therefore, the main concern and actual benefit of the proposed prototype tool was set to the camera recording functionality. Other standard features, such as the support for different molecular representations, were suppressed as they do not have an impact on the main purpose of the tool. It was agreed that such common functionality will be added in the future, once our proposed solution proves to be a feasible approach. The requirements are based on discussion with a structural biology domain expert and our own research in the fields of biology and virtual reality. From our cooperation, we derived the following essential requirements for the system functionality:


R1 Download protein. Download and import a molecule from the Protein Data Bank database [50].

R2 Protein visual representation. Display the loaded molecule using one of the traditionally used visual representations.

R3 Protein interaction. Make the environment containing the molecule interactive and support intuitive user manipulation with the molecule.

R4 Real-time video recording. Record and play back the video sequence taken by the virtual camera in real time.

R5 Camera manipulation. Because video recording is one of the main features of the program, the user must also have an interactive way to fine-tune the camera's position and rotation.

R6 Intuitive camera trajectory planning. Intuitively adjust the camera's trajectory parameters.

R7 Video format. Export the video to one of the commonly used formats.

Based on these requirements, other more detailed software requirements have been specified using the MoSCoW method [71]. Not all requirements are exhaustively listed in one place in this document; instead, they are taken into consideration and mentioned in the relevant sections of this chapter. These requirements include more detailed functional requirements, performance requirements (e.g., we want the application to run at 90 or more frames per second for a smooth VR experience), and others. The seven essential requirements are the main building blocks for the design of the prototype tool. They can be divided into two categories: 1) protein representation, which includes requirements R1, R2, and R3, and 2) the video recording system, including R4, R5, R6, and R7. We will discuss each of the requirements and its related design concept in more detail in the following sections.

2.2 Protein Download

Macromolecules can be loaded directly from the PDB online database, so the program must be able to download, parse, and process a file from the Protein Data Bank [50]. An Internet connection is required for the download, but the application should be able to save the downloaded file to the hard drive so that the download process does not have to be repeated when the user wants to load the same molecule again. Another benefit is that all previously used proteins can be used offline. Also, downloading molecules at run-time provides more flexibility and adaptability to the user's needs. He or she can decide to download any molecule at any time, and the application does not have to stop running.
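As an illustration of this download-and-cache behavior, the following is a minimal Unity C# sketch. It assumes the public RCSB file endpoint (https://files.rcsb.org/download/) and period-appropriate UnityWebRequest error flags; the class and method names are illustrative and not taken from the prototype.

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

// Downloads a PDB entry once and caches it on disk so it can be reused offline.
public class PdbDownloader : MonoBehaviour
{
    // Assumed RCSB download endpoint; the prototype may use a different URL.
    const string BaseUrl = "https://files.rcsb.org/download/";

    public IEnumerator Fetch(string pdbId, System.Action<string> onLoaded)
    {
        string cachePath = Path.Combine(Application.persistentDataPath, pdbId + ".pdb");

        // Previously downloaded structures are served from the local cache.
        if (File.Exists(cachePath))
        {
            onLoaded(File.ReadAllText(cachePath));
            yield break;
        }

        using (UnityWebRequest request = UnityWebRequest.Get(BaseUrl + pdbId + ".pdb"))
        {
            yield return request.SendWebRequest();
            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("PDB download failed: " + request.error);
                yield break;
            }

            File.WriteAllText(cachePath, request.downloadHandler.text);
            onLoaded(request.downloadHandler.text);
        }
    }
}
```

Started as a coroutine (e.g., StartCoroutine(Fetch("1AON", text => ...))), such a download runs without blocking the rendering loop, which matches the requirement that the application keeps running while molecules are fetched.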

2.2.1 Input Data

This part introduces the input data, mostly formed by biomolecules, along with their composition. Moreover, it introduces the Protein Data Bank (PDB) format as one of the most commonly used formats. The Protein Data Bank (PDB) [50] was established as the first open-access digital data resource in all of biology and medicine and is today a leading global resource for experimental data, central to scientific discovery. It provides access to 3D structure data for large biological molecules (proteins, DNA, and RNA). RCSB PDB [72] makes PDB data available at no charge to all consumers without limitations on usage. The PDB format is a standard for biomolecular structural data representation. A PDB file is divided into many sections, such as the title section, primary structure section, secondary structure section, crystallographic and coordinate transformation section, coordinate section, and others. The most important section for our application is the Coordinate Section, because it stores the coordinates of all atoms. Each atom is described on one line. The line carries information about its position, chemical element, residue sequence number, chain identifier, and more. Using this data, we can reconstruct a 3D model of the structure. More details can be found in the well-structured documentation online [73].
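The fixed-column layout of a coordinate record can be parsed with plain substring indexing. The sketch below follows the wwPDB column description referenced above; it is an illustrative helper, not code from the prototype.

```csharp
using System.Globalization;
using UnityEngine;

// Minimal parse of one fixed-column ATOM/HETATM record of a PDB file.
public struct AtomRecord
{
    public string Element;
    public string ResidueName;
    public string ChainId;
    public int ResidueSeq;
    public Vector3 Position;

    public static bool TryParse(string line, out AtomRecord atom)
    {
        atom = default(AtomRecord);
        if (line.Length < 78 || !(line.StartsWith("ATOM") || line.StartsWith("HETATM")))
            return false;

        // 1-based columns from the format documentation, converted to 0-based substrings.
        atom.ResidueName = line.Substring(17, 3).Trim();   // columns 18-20
        atom.ChainId     = line.Substring(21, 1);          // column  22
        atom.ResidueSeq  = int.Parse(line.Substring(22, 4).Trim());
        float x = float.Parse(line.Substring(30, 8), CultureInfo.InvariantCulture);
        float y = float.Parse(line.Substring(38, 8), CultureInfo.InvariantCulture);
        float z = float.Parse(line.Substring(46, 8), CultureInfo.InvariantCulture);
        atom.Position = new Vector3(x, y, z);
        atom.Element  = line.Substring(76, 2).Trim();      // columns 77-78
        return true;
    }
}
```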


Figure 2.1: The structure of deoxy human haemoglobin [75], represented by the space-filling model (created by QuteMol [76]).

2.3 Protein Visualization

The focal object in the main scene is the molecule. The user can display all downloaded molecules simultaneously in a single scene. To meet requirement R2, an appropriate protein visual representation type had to be chosen. There are several different types of molecular visualizations [74]. However, for a prototype tool, one representation is sufficient. The simplest and probably most often used molecular model is the space-filling or calotte model [6] (see Figure 2.1). Here, each atom is represented by a sphere whose radius is proportional to the atomic radius (i.e., covalent radius) of the respective element. The molecular surface can then be derived from this representation and defined as the outer surface of the union of all atom spheres. The van der Waals (vdW) surface is a space-filling model where the radius of the atom spheres is proportional to the van der Waals radius. This surface shows the molecular volume, that is, it illustrates the spatial

volume the molecule occupies. The vdW surface is the basis of most other molecular surface representations [6]. With the expert, we have agreed that this model is sufficient for the purpose of this work.
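A naive version of the space-filling representation can be built by instantiating one sphere per atom, scaled by the element's van der Waals radius (the radii below are approximate, and the AtomRecord struct comes from the parsing sketch above). The prototype renders molecules more efficiently, as discussed in Chapter 3; this is only an illustration of the model itself.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Space-filling (van der Waals) model: one sphere per atom, with the sphere
// diameter derived from the element's approximate vdW radius (in angstroms).
public static class SpaceFillingBuilder
{
    static readonly Dictionary<string, float> VdwRadius = new Dictionary<string, float>
    {
        { "H", 1.20f }, { "C", 1.70f }, { "N", 1.55f }, { "O", 1.52f }, { "S", 1.80f }
    };

    public static void Build(IEnumerable<AtomRecord> atoms, Transform parent)
    {
        foreach (AtomRecord atom in atoms)
        {
            float radius;
            if (!VdwRadius.TryGetValue(atom.Element, out radius))
                radius = 1.5f;                                    // fallback for other elements

            GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            sphere.transform.SetParent(parent, false);
            sphere.transform.localPosition = atom.Position;
            sphere.transform.localScale = Vector3.one * radius * 2f; // localScale is a diameter
        }
    }
}
```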

2.3.1 Color Modes

There are different coloring modes used in molecular visualization. For example, the molecule can be colored according to atom elements, residues, hydrophobicity, chains, or atomic charge. We decided to use the following three basic color modes (see Figure 2.2), where the user is able to distinguish between given parts of the molecule.

1. Each atom is assigned a color according to the chemical element it represents (e.g., each carbon is gray and each oxygen red). This is the standard coloring called CPK [22] (see Figure 2.2a).

2. The second color mode is based on types of residues (see Figure 2.2b). After the discussion with biochemists, the amino acids were separated into eight categories, based on their properties, and each category was assigned the corresponding color. The colors are the following:

   (a) Black: Alanine, Glycine, Valine, Leucine, Isoleucine, Methionine
   (b) Orange: Phenylalanine, Tyrosine, Tryptophan
   (c) Light blue: Asparagine, Glutamine, Histidine
   (d) Pink: Serine, Threonine
   (e) Green: Proline
   (f) Red: Aspartic acid, Glutamic acid
   (g) Blue: Arginine, Lysine
   (h) Yellow: Cysteine

3. The last color mode colors each chain in the protein with a different color (see Figure 2.2c). We chose a list of distinct colors based on a standard list of colors [77]. Then we cyclically choose one of the 20 colors. This is sufficient, as the number of chains in a protein is usually very limited. A sketch of both of these color lookups is shown below.
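The residue and chain color modes boil down to two simple lookups. The grouping below follows the eight categories listed above; the concrete RGB values and identifiers are placeholders, not the exact colors used in the prototype.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Color lookups for the residue-based and chain-based color modes.
public static class ColorModes
{
    static readonly Color Orange    = new Color(1f, 0.5f, 0f);
    static readonly Color LightBlue = new Color(0.5f, 0.8f, 1f);
    static readonly Color Pink      = new Color(1f, 0.6f, 0.8f);

    static readonly Dictionary<string, Color> ResidueCategory = new Dictionary<string, Color>
    {
        // Black: Ala, Gly, Val, Leu, Ile, Met
        { "ALA", Color.black }, { "GLY", Color.black }, { "VAL", Color.black },
        { "LEU", Color.black }, { "ILE", Color.black }, { "MET", Color.black },
        // Orange: Phe, Tyr, Trp
        { "PHE", Orange }, { "TYR", Orange }, { "TRP", Orange },
        // Light blue: Asn, Gln, His
        { "ASN", LightBlue }, { "GLN", LightBlue }, { "HIS", LightBlue },
        // Pink: Ser, Thr
        { "SER", Pink }, { "THR", Pink },
        // Green: Pro
        { "PRO", Color.green },
        // Red: Asp, Glu
        { "ASP", Color.red }, { "GLU", Color.red },
        // Blue: Arg, Lys
        { "ARG", Color.blue }, { "LYS", Color.blue },
        // Yellow: Cys
        { "CYS", Color.yellow }
    };

    public static Color ForResidue(string residueName)
    {
        Color color;
        return ResidueCategory.TryGetValue(residueName, out color) ? color : Color.gray;
    }

    // The palette holds the 20 distinct colors; chains reuse it cyclically.
    public static Color ForChain(int chainIndex, IList<Color> palette)
    {
        return palette[chainIndex % palette.Count];
    }
}
```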


(a) Element coloring. (b) Residue coloring. (c) Chain coloring.

Figure 2.2: Molecule displayed in three different color modes.

2.4 Interaction with Protein Representation

To meet requirement R3, we have to make the protein visualization interactive. The user must be able to manipulate the protein in space to fine-tune its position in the scene. A physics system (i.e., gravity) should not affect the protein model. The molecule should be floating in the air in an exactly specified position and with a specific rotation. For moving the molecule around, there are three traditional affine transformations which the application has to support:

1. Translation. The protein can be positioned anywhere by simply grabbing it with the VR controller and placing it at an arbitrary place in the environment.

2. Rotation. Rotation can be achieved in two ways. The first way is similar to positioning: the molecule can be grabbed and rotated by hand movements, as in the real world. To allow more flexibility and avoid unnecessary hand movements, we introduce another way to rotate molecules. We decided to make use of the controller trackpad or joystick to rotate the molecule, as described in more detail in Section 3.4.

3. Scaling. The user should be able to scale the molecule using both hands while holding the grip buttons on both controllers simultaneously (a minimal sketch of this gesture follows below).
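The two-handed scaling gesture can be expressed as the ratio between the current distance of the controllers and their distance when the gesture started. The sketch below is illustrative and uses a placeholder for the SDK-specific grip-button query.

```csharp
using UnityEngine;

// Two-handed uniform scaling: while both grips are held, the molecule is scaled
// by (current controller distance) / (controller distance at gesture start).
public class TwoHandScaler : MonoBehaviour
{
    public Transform leftHand;
    public Transform rightHand;
    public Transform molecule;

    float startDistance;
    Vector3 startScale;
    bool scaling;

    // Placeholder for the SDK-specific query of both grip buttons.
    bool BothGripsHeld() { return Input.GetKey(KeyCode.Space); }

    void Update()
    {
        if (BothGripsHeld())
        {
            float distance = Vector3.Distance(leftHand.position, rightHand.position);
            if (!scaling)
            {
                scaling = true;
                startDistance = Mathf.Max(distance, 0.001f); // avoid division by zero
                startScale = molecule.localScale;
            }
            molecule.localScale = startScale * (distance / startDistance);
        }
        else
        {
            scaling = false;
        }
    }
}
```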

Besides the molecule transformations, highlighting and selection of individual parts must also be supported. When the user points at a specific part (e.g., an amino acid) of the molecule, it gets highlighted. Although the implementation may differ, this way of highlighting can


be used not only for the protein, but also for any interactable object, such as the camera. Highlighted objects stay highlighted only while the pointer is still hitting them. Moreover, any part of the molecule can be selected to allow for guiding the attention of the target audience to a specific area of the molecule. Unlike highlighting, the selected atoms stay highlighted until they are intentionally deselected again. As a consequence, multiple selection is possible for keeping more parts of the molecule highlighted at the same time. The user must be able to easily and intuitively control other properties of the molecular visualization, not only its position, rotation, and scale. For example, the user must be able to change the color mode of the displayed molecules. To achieve this, he or she can simply say a predefined phrase that is written on a display in the environment. The application will recognize this command and will behave accordingly. Both the HTC Vive and Oculus Rift devices have an embedded microphone, so the microphone is always available. In case there is still some obstacle to the speech input (e.g., there are other people in the room), the user must be able to change the properties in a different way. For that, there is a floating graphical user interface (GUI) in the room as an alternative to the voice input.
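Such voice commands can be implemented with Unity's built-in KeywordRecognizer (available on Windows builds, which both supported headsets use). The phrases below are only examples; the actual commands shown on the in-scene display may differ.

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Listens for a small set of predefined phrases and reacts to them,
// e.g., by switching the color mode of the displayed molecules.
public class VoiceCommandListener : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        string[] phrases = { "color by element", "color by residue", "color by chain" };
        recognizer = new KeywordRecognizer(phrases);
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log("Recognized voice command: " + args.text);
        // Switch the molecule color mode here.
    }

    void OnDestroy()
    {
        if (recognizer == null) return;
        if (recognizer.IsRunning) recognizer.Stop();
        recognizer.Dispose();
    }
}
```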

2.5 Real-Time Video Recording

We want to be able to record video in real time, as stated in requirement R4. Real-time video capture in VR goes beyond a simple recording of the screen. The user can be directly a part of the animation itself. The user's virtual avatar can become an actor and narrator of the story. He or she can select points of interest using his or her hands or even comment on anything that is happening in the environment. To allow real-time commentary, sound recording has to be available. His or her actions are recorded by the camera and can be replayed by anyone later. Real-time recording also leaves open the option of video streaming or collaboration, which can be explored and implemented in the future. Even though real-time video recording is more performance demanding than offline animation rendering, it brings many benefits. The video camera, which records in real time, resembles real-world camera behavior, which is intuitive for people.


Figure 2.3: The user avatar pointing at a molecule while watching the video preview on both screens simultaneously. The small preview screen is displayed above the camera. The large screen is at a fixed position in the scene.

This way the user can anticipate the behavior and interact with it naturally. In the scene, there should be a display showing the view of the camera, where the user can see a preview of the video before its recording or even while it is being recorded. This way the user can be more aware of his or her position and also of the position of the molecule and the camera (see Figure 2.3).

2.6 Camera Manipulation

The user should be able to interact not only with the protein 3D representation but also with the camera system (see requirement R5). This involves not only moving and rotating the camera, but also interacting with the camera recording controls, so the user is, for example, able to start and stop the recording. Camera movement should mimic real-life camera movement techniques, such as handheld movement on a camera stabilizer or a crane shot. The user must be able to grab the camera and hold it in his


or her hands while walking in the scene. This puts him or her directly into the role of a camera operator. Both the camera and the protein can be grabbed distantly when pointed at. This allows the user to move objects that are far away as if they were attached to a virtual stick, allowing him or her to shoot handheld crane shots. At the moment when the camera is grabbed, it must not change its position; this would cause unexpected cuts in the video. It has to stay at the place where it was grabbed, with the same rotation, and smoothly follow the hand movements of the user. Because some desired positions of the camera might be out of reach, the user must be able to manipulate and rotate the camera from a distance as well. Therefore, we designed a solution for this feature. It enables the user to move and rotate the camera from a distance and adjust its position distantly as well. This can be achieved just by pointing at the camera, grabbing it, and then swiping with the thumb on the trackpad. The thumb movements along the X axis are translated into the camera's rotation speed. The Y axis movements move the camera towards or away from the user. This approach is described in Section 3.4 in more detail.
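The distant manipulation described above maps the two trackpad axes onto two simple transformations of the grabbed camera. A minimal sketch follows; ReadThumbAxes() stands in for the SDK-specific input call, and the speed constants are arbitrary.

```csharp
using UnityEngine;

// Distant camera manipulation: the trackpad X axis drives the rotation speed of
// the grabbed camera, the Y axis moves it towards or away from the user.
public class DistantCameraControl : MonoBehaviour
{
    public Transform user;            // the user's hand (or head)
    public Transform grabbedCamera;   // the virtual recording camera
    public float rotationSpeed = 90f; // degrees per second at full deflection
    public float moveSpeed = 1f;      // meters per second at full deflection

    // Placeholder for the SDK-specific trackpad/joystick query.
    Vector2 ReadThumbAxes() { return Vector2.zero; }

    void Update()
    {
        if (grabbedCamera == null) return;

        Vector2 thumb = ReadThumbAxes();

        // X axis: rotate the camera around the world up axis.
        grabbedCamera.Rotate(Vector3.up, thumb.x * rotationSpeed * Time.deltaTime, Space.World);

        // Y axis: slide the camera along the line between the user and the camera.
        Vector3 away = (grabbedCamera.position - user.position).normalized;
        grabbedCamera.position += away * thumb.y * moveSpeed * Time.deltaTime;
    }
}
```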

2.7 Camera Trajectory Planning

Apart from the hand manipulation with the camera, there must be a way to automate the movement of the camera in the application (see requirement R6). There are two ways to achieve the animation of the molecule (e.g., a simple rotation). One way is animating the rotation of the molecule itself. The other way is that the molecule is static and the camera is changing its position and rotation. The second one is a bit more complicated to implement, but offers more flexibility and is more intuitive for the user because of its resemblance to real-life experience. Both these approaches are similar, but we have opted for the second one as it provides more flexibility. This approach can also be extended to record an arbitrary and more complicated scene, not just a simple scene containing a few proteins, so it can be reused in the future. For the camera animation, there must be an interactive system that allows for easy planning of the camera trajectory. The camera trajectory

planning should be intuitive for the user, and the user should also have a way to see what the trajectory looks like. It should allow him or her to define a path for the camera by inserting, positioning, and deleting keyframes of the animation. The length of the animation (in seconds) between the individual keyframes must be adjustable, so the user can set the time that it takes for the camera to get from one keyframe to the other. The animation system should also be independent of the recording, so the user can, for instance, pause the animation without stopping the video and audio capture. We designed the trajectory planning with the following features. It enables the animation of camera movements through so-called waypoints. These can be inserted by the user and used for planning the trajectory of the camera. Foremost, the waypoints can be seen as keyframes of the camera animation. Each waypoint represents one keyframe of the final animation of the camera movement. Essentially, they mark the path for the camera movement. When the animation process starts, the camera follows the sequence of these waypoints. In our solution, each waypoint is represented by a semi-transparent model of the camera, so the user can see where the camera is and what direction it is facing in this specific keyframe. Each of the waypoints has an identifying number assigned, marking its order in the fly-through animation. The position between two keyframes is interpolated using a spline curve to achieve smooth transitions. An ease-in-out Bézier easing function has been chosen as the default (see Figure 2.4). Using this curve, we can achieve smooth transitions between consecutive keyframes. The time of the animation between two frames can be changed for each waypoint separately. Apart from its order number, each waypoint also has a time mark t_i, which is the number of seconds needed to get from waypoint i − 1 to waypoint i. So when n is the number of waypoints, the whole animation from the initial position to the last waypoint takes ∑_{i=1}^{n} t_i seconds. The camera animation process is independent of the video capture. In consequence, the user can start the camera movement animation without starting the video capture, to see just the preview of the video on the virtual screen and adjust the path accordingly. Or, for example, when the camera movement stops, the recording still


Figure 2.4: An example of the animation curve used to enhance the camera movement interpolation between two consecutive animation keyframes. It corresponds to a smooth change of speed between the keyframes, with speeding up and slowing down at the beginning and end of the trajectory segment, respectively.

Or, for example, when the camera movement stops, the recording still continues until it is switched off manually. Both the animation and the recording can, of course, run simultaneously, resulting in a video with the animated movement of the camera. This design is portable and can be used in any other project which requires camera manipulation for recording. It can be used to smooth the camera movement when recording game trailers, product advertisements, or educational videos. It is similar to a real-life camera dolly tracking system, which puts the camera on a steady rail track and allows for stable movements. Dolly tracking includes panning, tilting, and tracking along a predefined trajectory; all of these can be achieved with this simple trajectory planning concept.

2.8 Video Format

To aid sharing and ease of communication, the video must be exported in a format that is supported by most online multimedia platforms and video players. After the recording session, the user should not have to process the output in any way. Outputting just the frames would mean that the user would have to import the frames into another program that could merge and encode them into a common video format.

These steps would be a hindrance for the user, and it would take too much time to create the video. The video should therefore be created on the fly within our application and saved to the hard drive immediately after the recording finishes. The MP4 format has been chosen as the supported output format. This file format is widely supported by many media applications, which makes it a perfect candidate to meet the requirement R7. MP4, an abbreviation for MPEG-4 Part 14 [78], is a compressed file format created by the Moving Picture Experts Group (MPEG) and is based on the ISO Base Media File Format. It was defined as part of the MPEG-4 standard. It is a multimedia container format that can include not only video, but also audio and subtitles. Usually, MP4 files with video and audio use the official .mp4 extension, while the .m4a extension is used for audio-only files. The format supports MPEG-4 Part 10 (H.264), MPEG-4 Part 2, and other codecs to encode the video, and AAC, MP3, ALAC, and others for audio. It can also carry subtitles.

2.9 Scene Design

To meet all the requirements, we had to focus on the following aspects as well. Apart from the molecule and camera property settings, there must also be other elements in the scene that are necessary for designing a VR application. Each of these elements has an impact on the resulting experience of the user. Mostly, the designed elements extend some of the previously mentioned functionality (e.g., displays for the camera), supplement the environment (such as the floor to avoid motion sickness), or, for example, help with a more natural lighting of the scene. This section discusses the surrounding environment of the player and all other visual elements in the scene. To achieve safety and comfort of the user, we must prevent virtual-reality sickness, which includes motion sickness and other sensory conflicts arising from using the artificial environment. The users can become disoriented, having individual reactions to and experience from the VR environment (e.g., neck strain can arise from the headset use, or collisions with physical objects in the room can happen) [79]. Most HMDs let the user set up the play area, and then he or she can see the area boundaries in VR that correspond to the real space.


These boundaries are usually grid-like and they do not obstruct the user’s view in the virtual environment. To achieve immersive results [79], we have to focus on certain aspects, such as:

• Believability. Intuitive design which reduces the outside-world interference. How the users interact with a new environment must match what they are used to doing in the real world.

• Interactivity. The user’s actions must be interactively reflected in the environment, and the objects should respond to their behavior.

• Explorability. The users should be able to freely move around and discover the virtual environment.

The user must get a virtual representation of the controllers. A headset representation is not strictly necessary, but to add to the believability of the environment, as the user can see himself or herself in the camera view, we have decided to add the headset representation as well. The user avatar consists of the controller and headset virtual models (see Figure 2.5). The controllers are represented by virtual hands. The hands respond to controller events and mirror the movements of the real hands of the user. There is also a pointer on each hand used for interaction with the UI and other interactable objects. One hand also comes with a watch to track time, because it is easy to lose track of time in an immersive environment. Next to the watch, there are menu controls of the video camera. The headset visualization is inspired by the master’s thesis about virtual environments by Milan Doležal [80]. It consists of two sphere-like objects, one for the body and one for the player’s head. The object representing the player’s head has a generic head-mounted device model attached to it (see Figure 2.5). The scene is situated in a room which resembles a laboratory environment. The aim of using this simple environment is not to distract the user with the surroundings too much, so the main focus stays on the protein structure. It uses light colors so the protein stands out and does not blend with the environment. The floor gives the user the feeling of standing with both feet on the ground and helps to avoid motion sickness. The environment uses physically-based lighting for a more natural feeling.


Figure 2.5: Representation of the user avatar, which consists of a headset visualization and two hands that represent the controllers.

To allow for the recording preview mentioned in Section 2.5, the camera has a small preview display that always turns towards and faces the player. Besides the small camera display attached to the camera itself, there is a large screen for a bigger preview of the video, so the user can see the video as it is being recorded. The display is active not only during the recording; it always shows exactly what the camera sees, which makes planning the trajectory easier thanks to the preview. In the development version, there are also tables and a debug computer screen. The tables are there for better orientation in the room where the application was developed and tested. The computer screen allows the developer to display the debug console information directly in VR without the need to remove the head-mounted display.

3 Implementation

This chapter describes the implementation techniques and approaches used in this thesis. The following sections describe the used technology, the details of the main scene, interactions and user interface, protein rendering, video recording, and the system for camera trajectory planning.

3.1 Used Technology

The project was developed in the Microsoft Windows operating system, with the use of the Unity [67] game engine together with the Microsoft Visual Studio [81] integrated development environment (IDE). This section serves as an overview of the technologies and pieces of software and hardware which were used to develop the resulting application.

3.1.1 Unity

Unity [67] is a widely popular game engine used not only for the creation of games, but also for other purposes, such as film-making, architecture, engineering, and others. Unity is extremely useful for fast prototyping, because it offers many ready-to-use tools for rendering, animation, physics, and other features. On top of that, there are many free and paid assets that can be downloaded and immediately used. The Unity Editor is a powerful tool which serves as a WYSIWYG (acronym for “what you see is what you get”) user interface to work on a project. Unity can build a project for most widely used platforms, such as Windows, iOS, Android, Xbox, PlayStation, and others. We used the Unity 2018.2 version for the development of our application. Later we moved to version 2018.3, mainly due to the nested prefabs support and better garbage collection. Unity is built using the C++ programming language. However, for the development of Unity applications, Unity offers a C# programming interface with support for object-oriented programming. C# is a simple, modern, object-oriented, and statically and strongly typed programming language.

C# syntax is very similar to the C, C++, and Java languages, because it has its roots in the C family of languages. The VR support is achieved thanks to the Virtual Reality Toolkit (VRTK), which covers a number of common solutions for VR development. VRTK supports the VR Simulator software development kit (SDK), which is included in the toolkit, the SteamVR 1.2.3 SDK, and the Oculus SDK. The current version at the development stage was VRTK 3.3, which supports Unity only up to version 2017.x. Due to that, small changes had to be made to the VRTK scripts so that the toolkit could run with Unity 2018 versions. At the end of the development, a new version of VRTK appeared which supports newer versions of Unity. However, the design of the new toolkit is completely different, so there was no time to move the project to this newer version yet. GameObject is the fundamental entity in Unity that represents any visible or functional object in the scene. A GameObject has no functionality by itself; it acts only as a logical container for components which implement the behavior. Each GameObject has a default Transform component, which holds the object’s position, rotation, and scale and the related functionality. Behind each component, there is a C# script which describes the behavior of the component. To add a new component to a GameObject, a C# script has to be created and attached to the GameObject in the Unity Editor. The script has to derive from the MonoBehaviour base class so it can be used as a component. This also means that certain methods can be implemented in this class; the most important ones are the following (a minimal example component is sketched after the list):

• Start() method is called on the frame when a script is enabled before any of the Update() methods are called for the first time.

• Update() method is called on each frame to update the object, if the object is enabled.
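The following minimal component illustrates this structure; the class name and the logged message are purely illustrative and not part of our application.

```csharp
using UnityEngine;

// Minimal example of a Unity component (illustrative only).
public class ExampleComponent : MonoBehaviour
{
    private float elapsed;

    // Called once, before the first Update() of this enabled script.
    void Start()
    {
        elapsed = 0f;
        Debug.Log("Component initialized.");
    }

    // Called every frame while the GameObject and this script are enabled.
    void Update()
    {
        elapsed += Time.deltaTime;
    }
}
```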

(a) HTC Vive [56].

(b) Oculus Rift [57].

Figure 3.1: Both supported head mounted display devices.

Unity did not offer a convenient way of working with threads until version 2019.1, where it can be achieved by using the Entity Component System (ECS) design and the Data-Oriented Technology Stack (DOTS). However, this version was not available in the course of the development, so we could use only coroutines. A coroutine in Unity allows a process to be distributed among several frames, reducing the workload in each frame. Threads can still be used in Unity, but not with any classes deriving from the MonoBehaviour class. Therefore, they can be used only for operations like reading or writing to a file.
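A minimal sketch of how a coroutine can spread work over several frames is shown below; the method and variable names are illustrative, not the actual code of the application.

```csharp
using System.Collections;
using UnityEngine;

public class ChunkedWork : MonoBehaviour
{
    // Illustrative only: processes items in small batches, one batch per frame.
    IEnumerator ProcessInChunks(int itemCount, int itemsPerFrame)
    {
        for (int i = 0; i < itemCount; i++)
        {
            // ... do one unit of work for item i here ...
            if (i % itemsPerFrame == itemsPerFrame - 1)
                yield return null; // resume in the next frame
        }
    }

    void Start()
    {
        StartCoroutine(ProcessInChunks(10000, 200));
    }
}
```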

3.1.2 Hardware Interfaces

Initially, the main task was to develop the application primarily for the HTC Vive device. Thanks to the easy portability of the VRTK toolkit, the application in the end also runs with the Oculus Rift device. Apart from that, the application uses the VR Simulator SDK, so it can also be launched on a computer with just a conventional 2D display. Using the simulator works well as a preview mode and is especially useful during development when no VR headset is available. HTC Vive [56] (see Figure 3.1a) is a virtual reality platform developed in cooperation between HTC and Valve, released on April 5, 2016.


Figure 3.2: Layout of the buttons on the HTC Vive controller [82].

It offers a headset with a 110° field of view, 32 sensors for precise tracking, and wireless controllers for exploring and interacting with VR experiences. The controllers come with haptic feedback, triggers, and trackpads (see Figure 3.2). The display has 1080×1200 pixels per eye and a 90 Hz refresh rate. Two wireless base stations enable 15×15 feet (around 4.5×4.5 meters) room-scale motion tracking. If the user approaches the boundaries of the play space, he or she is alerted by the system to ensure safety. There is also a Pro version of the HTC Vive with some improvements, such as a higher resolution (1440×1600 pixels per eye), integrated headphones, and others. The HTC Vive Pro is also supported and was used during the development. Oculus Rift [57] (see Figure 3.1b) is a virtual reality headset developed and manufactured by Oculus VR, a division of Facebook Technologies LLC. It was released on March 28, 2016. The Rift has displays with 1080×1200 resolution per eye, a 90 Hz refresh rate, and a 110° field of view [83]. The device also features rotational and positional tracking and integrated headphones that provide the users with a 3D audio effect.

3.2 The Main Scene

Scenes in Unity contain the environments and menus of a project. Each unique scene file is like a unique level of a game. In each scene, you place your environments, objects, and decorations, essentially designing and building your project from these pieces. This project consists of only one main scene, which contains everything necessary. The main scene environment is mainly built using the 3D Scifi Kit [84], downloaded from the Unity Asset Store. Using the models from this asset, a clear and light room has been created. The whole scene is surrounded by a skybox, which also offers a good contrast with the molecule, so it is nicely visible in the resulting video. In the development version, there is also an option to display tables that match their real location in the room for better orientation. On one of the tables, there is a debug display which lists Unity’s debug log and displays it directly in VR. Thanks to that, the developer does not have to put down the HMD every time he or she needs to test something and see the log. However, since the tables and the debug display are not relevant for the end user of the application, they are disabled in the final version. This way, we provide only the necessary information, keep the focus of the user on the work, and offer him or her an uncluttered experience. Apart from the surrounding environment, there is also the molecule, the camera together with its waypoints, and a graphical user interface (GUI), which are described later. On one side of the room, there is the main display, which uses the texture rendered by the recording camera. This display is used to show a video preview. The described scene can be seen in Figure 3.3.

3.3 Protein Visualization

The macromolecule object is dynamically loaded from the online protein database or from the hard drive and constructed at run-time. The PDBLoader singleton class is responsible for downloading and parsing the data. If the data is located on the hard drive, the file is read in another thread; in the main thread, a coroutine waits for the process to finish. If the pdb file has not been found on the hard drive, it is downloaded from the PDB database using a coroutine.


Figure 3.3: Illustration of the whole scene of the prototype application. The scene contains two loaded molecules, the camera, the large preview screen, UI keyboard, the shadow cameras (described in Section 3.4.2), the user avatar (also visible through camera preview on the large screen) and the surrounding room.

After downloading the file from the database, it is written to the hard drive in another thread. When the file has been loaded, the data from the pdb file is stored in the Structure class, which represents the molecule. The class contains all the data loaded from the pdb file, such as the list of all chains, the list of all atoms and their positions, and other information. Each structure can contain one or more chains. Each chain of the molecule is described by its own class Chain. The chain holds a list of one or more amino acids represented by the Residue class, which stores the list of individual atoms. This is the hierarchical structure of the molecule object with all its one-to-many relationships. Each atom has its own collider used for highlighting collisions; apart from that, the structure class has its own collider, computed from the atom positions. The boundaries are given by the maximum values of the x, y, and z coordinates among all atoms.
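A simplified sketch of the described hierarchy could look as follows; the field names and the bounds computation are assumptions based on the description above, not the exact code of the application.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified sketch of the molecule data hierarchy (field names are assumed).
public class Atom
{
    public string Element;
    public Vector3 Position;
}

public class Residue
{
    public string Name;                        // e.g., an amino acid code
    public List<Atom> Atoms = new List<Atom>();
}

public class Chain
{
    public string Id;
    public List<Residue> Residues = new List<Residue>();
}

public class Structure
{
    public string PdbId;
    public List<Chain> Chains = new List<Chain>();

    // Axis-aligned bounds computed from all atom positions,
    // mirroring how the structure collider is sized.
    public Bounds ComputeBounds()
    {
        var bounds = new Bounds();
        bool first = true;
        foreach (var chain in Chains)
            foreach (var residue in chain.Residues)
                foreach (var atom in residue.Atoms)
                {
                    if (first) { bounds = new Bounds(atom.Position, Vector3.zero); first = false; }
                    else bounds.Encapsulate(atom.Position);
                }
        return bounds;
    }
}
```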


3.3.1 Molecule Rendering

Rendering is independent of the interaction and other components of the molecule GameObject. A special renderer class has been implemented together with a shader used to render the vdW surface of the molecule. The StructureBillboardsRenderer class is used to make sure that the data used for rendering is transferred from the CPU to the GPU to be used by the shader. This class sets the size of each of the buffers and is capable of changing the rendering at run-time (e.g., changing the colors and the position of the molecule). For the rendering itself, the custom DrawLookAtBillboards shader is used. Each atom is rendered as a 2D texture which is always turned towards the camera. Atoms are positioned at a specific location, but their orientation is computed automatically so that they always face the camera. This is inspired by the common billboarding approach where the billboards are always turned in the direction the camera is facing. However, this does not work in VR: atoms that are not in the center of the field of view are turned in a wrong way, and when the user tilts his or her head, he or she can see the billboards moving. For that reason, the vector from the billboard world-space position to the camera position had to be computed for each atom individually. From this vector we can derive the atom’s up and right vectors using vector operations. Using these two vectors, we can displace each rendered billboard so that it faces the headset. This avoids the undesirable tilting and produces the expected result with correct rotation according to the VR camera position. The billboard texture is rectangular, but we need to render a circle. We do not need to use the transparent render type for this; the texture can be rendered as an opaque material. In the fragment shader, we can discard all the transparent fragments, and the remaining pixels are colored in the normal way. Rendering each atom as a texture is much faster than using sphere models. This way we can render even large molecules (see Figure 3.4).
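The actual computation happens in the shader, but the per-atom vector math can be sketched in C# as follows; the class and method names are illustrative.

```csharp
using UnityEngine;

public static class BillboardMath
{
    // Computes per-atom right/up vectors so the billboard faces the VR camera
    // position itself, not just the camera's view direction (illustrative sketch).
    // The degenerate case where the atom lies directly above or below the camera
    // (toCamera parallel to the world up vector) is not handled here.
    public static void FacingVectors(Vector3 atomPosition, Vector3 cameraPosition,
                                     out Vector3 right, out Vector3 up)
    {
        Vector3 toCamera = (cameraPosition - atomPosition).normalized;
        right = Vector3.Normalize(Vector3.Cross(Vector3.up, toCamera));
        up = Vector3.Cross(toCamera, right);
    }
}
```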

3.4 Interactions and Control

This section describes the implementation of all supported interactions in the VR environment of our application. The tools used for interaction are described in Section 3.4.1. Section 3.4.2 describes the interactable objects, representing all objects in the scene the user can interact with, such as the molecule or the camera. The last three parts of Section 3.4 are dedicated to the graphical user interface (GUI) inside the virtual environment, the voice recognition control, and the movement of the user.

Figure 3.4: Avatar of the user in our virtual environment, operating with the protein (PDB ID 1AON [85]). The molecule is colored according to individual chains.

3.4.1 Interaction Tools

For the interaction with the virtual world, an extensible tool system has been created. In the current version, there are two main tools used for the interaction. The virtual representation of the wireless controllers has the same appearance as the physical controllers. However, this representation is replaced by the tool that is currently being used by the user. The system offers the ability to switch between the tools for each hand separately. The tools can be switched using the menu button on the controller (see Figure 3.2).

Figure 3.5: Virtual representation of the hand tools with pointers attached to the index fingers of both hands.

The first, default tool is the hand tool (see Figure 3.5). This tool is implemented in the HandMoveTool class and, as any other tool, inherits from the abstract class Tool. It offers the overall interaction with the molecule and the environment and is used for moving and rotating objects. The second tool in the application is the delete tool, implemented in the DeleteTool class. This tool is used for deleting the downloaded molecules that are no longer needed in the scene and, most importantly, also for deleting the waypoints used for trajectory planning of the camera. Each tool operates with the environment using a pointer, which is inherited from the base Tool class. The basic functionality of the pointer is implemented in the VRTK toolkit. This pointer allows the user to point at objects with a collider. When the pointer’s tip enters or exits a collider, it triggers an event. This event is detected and the target object is saved in the Tool class, where it can be later accessed, for example, for highlighting.
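A schematic of the tool hierarchy described above could look like this; the members shown are assumptions made for illustration, not the actual fields of the application.

```csharp
using UnityEngine;

// Schematic sketch of the tool hierarchy (members are illustrative).
public abstract class Tool : MonoBehaviour
{
    // Object currently hit by the pointer, stored when the pointer
    // enter/exit events fire; used, e.g., for highlighting.
    protected GameObject pointerTarget;

    public abstract void OnTriggerPressed();
}

public class HandMoveTool : Tool
{
    public override void OnTriggerPressed()
    {
        // grab or select the pointed object
    }
}

public class DeleteTool : Tool
{
    public override void OnTriggerPressed()
    {
        if (pointerTarget != null)
            Destroy(pointerTarget);
    }
}
```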

3.4.2 Interactable Objects

Any object which can be manipulated using the HandMoveTool is called interactable. The InteractableViaPointer class is responsible for the main functionality of interactable objects. The interactable objects can be grabbed and rotated in a way that is similar to holding a real-world physical object in hand. When the user presses the grip button on the controller, the object is attached to the hand and its relative position to the controller is fixed. Any interactable object can be highlighted when hovered over with the pointer (see Figure 3.6). The user can grab not only an object within his or her reach but also any remote object, by pointing at it with the pointer and pressing the grip button. This way, the user can, for example, take high-angle shots by moving and controlling the camera even when the camera is out of reach. VRTK has its own implementation of grabbing objects, but in our case it suffers from performance issues because it was not built for objects as complex as a molecule with thousands of atoms. As a consequence, we had to implement our own system for manipulation with objects. This system uses VRTK as a layer of abstraction between our system and the different SDKs, such as Oculus or SteamVR. VRTK_Pointer uses raycasting to provide a reference to the object which is colliding with the tip of the pointer. Using an event listener, the grip button press is registered and the target object is attached to the controller. Each interactable is set to follow an empty GameObject; this indirection is used to smooth the movement of interactable objects and reduce shaking. When we want to move an interactable object, we move the empty object that the interactable object is set to follow instead, and the interactable object then smoothly follows it. Moreover, the grab interaction has been extended. When an interactable object is grabbed, the user can use the trackpad on the HTC Vive or the joystick on the Oculus Rift to move and rotate the object. The thumb movement on the Y axis is used for moving the object forward and backward in the direction of the pointer. When moving the thumb on the X axis, the x value is used to control the speed of rotation around the world up vector; when the thumb is moved left, the object turns clockwise, and vice versa. The further away from the center of the trackpad, the higher the speed of movement or rotation. This technique enables the user to move and precisely position the camera and other interactable objects even when they are further away. Gravity does not affect the interactable objects, so they can be placed anywhere in mid-air. This is convenient, as the user can place the camera or the molecule at the desired places. In the scene, there are several objects with the InteractableViaPointer component: the molecule, and the camera together with its waypoints. The interaction with each of these objects is described in the following paragraphs.
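Before turning to the individual objects, the following sketch outlines the smoothed-follow idea and the trackpad-based distant manipulation described above; the class, field, and parameter names, as well as the smoothing factors, are illustrative and not the exact code of the application.

```csharp
using UnityEngine;

// Illustrative sketch: the grabbed object follows an invisible target
// transform with damping, which reduces shaking of the hand movements.
public class SmoothedFollow : MonoBehaviour
{
    public Transform followTarget;        // empty GameObject moved by the controller
    public float positionSmoothing = 10f;
    public float rotationSmoothing = 10f;

    void Update()
    {
        if (followTarget == null) return;
        transform.position = Vector3.Lerp(transform.position, followTarget.position,
                                          positionSmoothing * Time.deltaTime);
        transform.rotation = Quaternion.Slerp(transform.rotation, followTarget.rotation,
                                              rotationSmoothing * Time.deltaTime);
    }

    // Trackpad/joystick input: X rotates around the world up axis,
    // Y pushes the target along the pointer direction (illustrative mapping).
    public void ApplyThumbInput(Vector2 thumb, Vector3 pointerDirection,
                                float rotateSpeed, float moveSpeed)
    {
        followTarget.Rotate(Vector3.up, thumb.x * rotateSpeed * Time.deltaTime, Space.World);
        followTarget.position += pointerDirection * thumb.y * moveSpeed * Time.deltaTime;
    }
}
```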


Molecule interaction. The molecule can be grabbed and manipulated as any other interactable object. On top of that, the molecule can be scaled; all molecules in the scene are scaled simultaneously. The scaling is performed only when both grip buttons of the controllers are held, and the distance of the controllers affects the scaling factor. The first time both grip buttons are pressed, the initial distance d_0 is stored in memory. Then, in each subsequent frame i, the initial scale s_0 is scaled by the ratio of the current controller distance d_i to d_0. So when the controllers are released again in a frame n, the final scale equals s_n = s_0 · d_n/d_0.

Every residue of a molecule is highlighted separately when hovered over with the pointer. The highlighted residue can be selected by pressing the trigger button on the controller (see Figure 3.6). The selected residues stay selected until they are deselected again by the trigger button, or by a UI button that deselects all of them at once. This way the user can draw attention to specific residues of the molecule, for example, those surrounding the protein active site. The highlighting is implemented using a buffer of boolean values which tells the shader which atoms are highlighted. A zero at a specific position in the buffer means that the given atom should use its default color, and a one means that it should be highlighted. When the pointer aims at a specific residue, it is marked as highlighted and the corresponding values are changed in the buffer.

Camera interaction. The camera is composed of the main body and the camera controls floating around it (see Figure 3.7). These control elements are duplicated and attached both to the camera and to the user’s hand. Each control button is represented by one object with a different color and description. Pointing at a control button with the pointer, either in the hand menu or in the camera menu, and pressing the trigger activates a camera action. There are four different camera actions that can be triggered in this way:

• Start or stop video recording. This control element is used to start the video capture. It works like a switch, so the capture can be turned off with the same button.

41 3. Implementation

Figure 3.6: User pointing at the camera which is capturing a selected part of the protein. Both the camera and the selected part are highlighted. The boxes floating around the camera are used to control the camera recording system. The red and blue buttons are related to the recording system and the other two buttons to the trajectory planning.

• Pause or continue video recording. Works similarly to the first control element, except that it toggles pausing or continuing of the currently captured video.

• Spawn new waypoint. Spawns a new waypoint for the camera trajectory planning system using the add button.

• Reset camera position. This button resets the camera position to the last initial position of the trajectory planning movement.

Whenever the recording state of the camera changes (start, pause, continue, stop), the change is reflected via the recording indicators, which are placed on the camera, on the player’s hand, and on the preview displays (see Figure 3.7). Apart from the control buttons, the camera itself is also clickable: clicking the camera captures a screenshot of the camera’s view. The camera trajectory planning system is described in Section 3.6, and the video and screenshot capture is described in Section 3.5.


(a) Recording state. (b) Paused state.

Figure 3.7: The camera and its control elements; the controls floating next to the camera are also attached to the user’s hand. The recording state indicators are displayed when the recording is active.

Camera waypoint interaction. Each camera waypoint is also an interactable object, because the user must be able to move it around the scene and fine-tune its position and rotation, as mentioned in Section 2.7. The waypoints are represented by transparent models of the original camera object; these transparent cameras are called the shadow cameras (see Figure 3.8). For spawning a new waypoint, there is a button next to the camera. When this button is pressed, a new shadow camera is instantiated. Every shadow camera has a number assigned to it, representing the ordering of the waypoints; the camera passes through the waypoints in the order given by these numbers. When a new shadow camera is spawned, all the remaining waypoints in the scene are assigned a new number and relabeled accordingly. Apart from adding a waypoint using the button on the camera, there is also a button on each shadow camera. This can also be used to add a new shadow camera to the scene. The waypoint is then inserted between the two consecutive waypoints, or added to the end if it was spawned using the last shadow camera. Each shadow camera has a preview screen of its keyframe. This screen is displayed only when the waypoint is grabbed (see Figure 3.9). There is always only one waypoint with the preview display at a time, so all the waypoints use the same render texture to render this preview. This saves computational power, because rendering all preview displays at once would be slow and unnecessary, and it would result in a more cluttered view for the user.

Figure 3.8: The shadow camera with the control button used for spawning new waypoints.

3.4.3 Virtual GUI

Unity has its own system and a set of classes for handling UI. Utilizing these resources, we can easily create and customize a GUI, and harnessing the power of VRTK, this GUI can be made interactable directly in VR. By adding the VRTK_UICanvas component to a Unity canvas, the standard Unity UI can be made interactable via the VRTK_Pointer. The user can select the UI elements with the trigger button or by touching them with the virtual hand. This UI is positioned in world coordinates, so it is fixed in the world space. Most notably, it includes a keyboard (see Figure 3.10) for downloading molecules, and other canvases to control the protein visualization and other properties.

Figure 3.9: The shadow camera representing a single keyframe is grabbed. The display shows the preview of the video in this keyframe.

3.4.4 Voice Control

Voice control can also be used as a means of interaction with the application when the user’s hands and vision are occupied. When the user cannot speak (e.g., in a room full of people), there is of course an alternative to the voice control: the user can use the standard GUI to achieve the desired action. In this tool, the voice control is primarily implemented to demonstrate and test its potential. It is used to change the protein color modes. There are currently three phrases that are supported and can be recognized by the application: (a) “color chain” for the chain color mode, (b) “color residue” for the residue color mode, and (c) “color atom” for the default atom color mode. For the voice control, there is a voice recognition script. It listens for specific predefined phrases spoken by the user. It uses the KeywordRecognizer from the Windows Speech Recognition application programming interface (API), available as a part of the Unity engine. The main benefit of this API is that the voice recognition runs offline; there is no need to send the speech data to an external server. By subscribing to the OnPhraseRecognized event, we can invoke an action anytime a predefined phrase is recognized. This approach is scalable and it is very easy to add new phrases.
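A minimal sketch of this pattern is shown below; the phrases match those listed above, while the class name and the handler body are illustrative.

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceColorControl : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        // The recognizer listens offline for the predefined phrases.
        recognizer = new KeywordRecognizer(new[] { "color chain", "color residue", "color atom" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log("Recognized phrase: " + args.text);
        // switch the protein color mode here
    }

    void OnDestroy()
    {
        if (recognizer != null && recognizer.IsRunning)
            recognizer.Stop();
        recognizer?.Dispose();
    }
}
```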

3.4.5 Movement

The user is able to freely move in the physical room, and this movement is mirrored in the virtual environment. However, to extend the play area, the user is also able to teleport around the virtual environment. Using the pointer on his or her left hand and pressing the trackpad on the controller, the user can immediately change his or her position to the pointed location. This way, the application can be used even from a standing or seated position in a very restricted physical space. This functionality is implemented using the VRTK_HeightAdjustTeleport component.

Figure 3.10: The keyboard being used for importing a new pdb file into the application.

3.5 Real-time Capture

Video recording. The video is captured at HD resolution (1280×720 pixels), 24 frames per second (FPS), with a 5 Mbps bit rate. Using the standard Camera component in Unity, we can capture a part of the scene. Objects like the UI and the environment are culled in the resulting video to reduce the distraction of the audience; the only visible objects in the video are the player and the molecule. This is achieved using a culling mask to cull certain layers. The camera output is rendered to a texture, which can then be used for the preview displays, the video recording, or capturing screenshots. The application is also capable of capturing screenshots (e.g., Figure 2.5 is a screenshot captured by the user using our application). The screenshots are saved in Full HD (1920×1080 pixels) in the png format. The ScreenCapture script attached to the camera object is used for taking pictures. The screenshot is rendered from the same camera as the video, so the environment is also culled. It is rendered to a separate RenderTexture and then copied to a Texture2D using the ReadPixels() Unity function. After that, the pixels are encoded into the png format and the file is saved to the hard drive in another thread.
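A sketch of the screenshot path described above follows; the class name, field names, and the way the path is passed in are illustrative assumptions, not the actual ScreenCapture script.

```csharp
using System.IO;
using System.Threading;
using UnityEngine;

public class ScreenshotSketch : MonoBehaviour
{
    public Camera captureCamera;   // the recording camera with the culling mask applied

    public void TakeScreenshot(string path, int width = 1920, int height = 1080)
    {
        // Render the camera into a temporary RenderTexture.
        var rt = RenderTexture.GetTemporary(width, height, 24);
        var previous = captureCamera.targetTexture;
        captureCamera.targetTexture = rt;
        captureCamera.Render();
        captureCamera.targetTexture = previous;

        // Copy the pixels into a Texture2D and encode them as PNG.
        RenderTexture.active = rt;
        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        tex.Apply();
        RenderTexture.active = null;
        RenderTexture.ReleaseTemporary(rt);

        byte[] png = tex.EncodeToPNG();
        Destroy(tex);

        // Write the file on a background thread so the frame is not blocked.
        new Thread(() => File.WriteAllBytes(path, png)).Start();
    }
}
```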

3.5.1 Unity Recorder

One way of recording a video in VR using Unity is to use the Unity Recorder [86]. Video recording using this approach is much faster and also very easy to implement because the recorder is shipped as a part of Unity. However, the main disadvantage is that it cannot be used outside of the Unity Editor, which means that it cannot be used in a standalone build.

3.5.2 RockVR

RockVR [87] is available for free at the Unity Asset Store. It can be used to record videos in Unity and can also record panoramic videos. Utilizing the FFmpeg [88] library for C#, it is capable of encoding video from a camera in the scene directly to an mp4 file, and it is also capable of recording sound. The RockVR library encodes each frame into a RenderTexture; this render texture is saved to a queue. The queue is accessed asynchronously, and each texture is handed over to the external FFmpeg library using the VideoCaptureLib_WriteFrames() method. This method is an external function imported from the native dll library. This approach is used in the last version of the project. Audio recording is also possible. The microphone recording can be turned on or off via a UI toggle. Using the headset microphone, the user can record the video with sound and comment on what he or she is doing. The sound is recorded from a Unity AudioListener which is listening to the microphone audio source. The audio source is controlled via the MicrophoneInput script, which handles the microphone and is also responsible for setting the sound recording parameters. The audio is recorded at the standard 48,000 Hz frequency, which satisfies the Nyquist rate [89] (it exceeds the upper limit of the audible frequency spectrum more than twice).
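A minimal sketch of routing the microphone into an AudioSource at 48 kHz is shown below; the component name is illustrative, and the actual application additionally feeds the audio into the RockVR capture pipeline.

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class MicrophoneSketch : MonoBehaviour
{
    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        // null selects the default microphone; loop for continuous capture at 48 kHz.
        source.clip = Microphone.Start(null, true, 10, 48000);
        source.loop = true;
        // Wait until the microphone actually starts delivering samples.
        while (Microphone.GetPosition(null) <= 0) { }
        source.Play();
    }

    void OnDestroy()
    {
        Microphone.End(null);
    }
}
```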

3.6 Camera Trajectory Planning

The functionality of the camera trajectory planning system is mostly implemented in the MoveAlongPath script, which can be reused in any other Unity project. There were three approaches to the trajectory computation during the development phase. In each approach, the camera follows a set of waypoints that can be positioned by the user (see Figure 3.11). After trying the first and the second approach, we moved to the last one, because the former two did not satisfy our needs. They are all described in this section in chronological order. The first approach was completely different from the other two and did not use the MoveAlongPath script. Instead, we tried to take advantage of the Unity Timeline functionality together with Cinemachine [90] for the camera manipulation. This approach works, but is not flexible enough. Using the Timeline and Cinemachine is perfect for creating complex scene animations and videos inside Unity, but it is hardwired inside the Unity Editor, which makes it impossible to use in a standalone build. Also, we were not able to control, for example, adding keyframes in any other way than using the editor. For these reasons, we decided to implement our own system to manipulate the camera, as described in the following two approaches. The second approach was to precompute the whole trajectory at the beginning of the camera’s movement and save all the waypoints in a list. This approach had several disadvantages. First, it was more difficult to implement. Second, it was more computationally demanding, because the trajectory had to be recomputed every time it changed. And lastly, it was not that flexible; for example, the trajectory could not be changed while the camera moves. The third approach, which is implemented in the final version, is simpler yet more powerful. Here, only the next waypoint is considered. So when the camera is moving, it does not know anything about the previous trajectory; it remembers just its last position and the next waypoint. This provides more flexibility, as the trajectory can be adjusted during the camera movement. The user must also be able to control the speed of the camera, as mentioned in the camera trajectory planning design (Section 2.7). This was implemented using a simple slider UI element. The slider is placed on each waypoint, together with a visible number representing the time it takes to get to this waypoint from the previous one (see Figure 3.8).


(a) Starting position. (b) First to second waypoint.

(c) Passing second waypoint. (d) Final position.

Figure 3.11: The camera following the trajectory of three waypoints.

This number is assigned to that specific waypoint at the start of the movement, and changing it while the camera is moving towards the waypoint has no effect. The interpolation happens inside the Update() function. If the camera should move, we compute the fraction of the trajectory length which the camera has already traveled towards the currently targeted waypoint and use it to determine its next position. For that, we can use the Unity AnimationCurve to achieve a nonlinear movement, and an interpolation method computes the desired position for the next frame. The default movement is the following: the camera accelerates slowly when leaving one waypoint, reaching maximal speed in the middle between two waypoints; then it starts to slow down again when approaching the next target, reaching zero speed. The AnimationCurve can be edited directly in the Unity Editor, using a visual field to input any desired animation curve. In future versions, there should also be an option for the user to change this directly in VR.
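A condensed sketch of the per-frame interpolation described above follows; the class and field names are illustrative, and simple linear interpolation remapped by an easing curve is used here instead of the spline-based interpolation of the actual MoveAlongPath script.

```csharp
using UnityEngine;

public class MoveAlongPathSketch : MonoBehaviour
{
    public Transform nextWaypoint;     // the currently targeted shadow camera
    public float segmentDuration = 3f; // seconds to reach the next waypoint
    public AnimationCurve easing = AnimationCurve.EaseInOut(0f, 0f, 1f, 1f);

    private Vector3 startPosition;
    private Quaternion startRotation;
    private float elapsed;

    void OnEnable()
    {
        // Remember only the last position; the previous trajectory is irrelevant.
        startPosition = transform.position;
        startRotation = transform.rotation;
        elapsed = 0f;
    }

    void Update()
    {
        if (nextWaypoint == null) return;
        elapsed += Time.deltaTime;
        // Fraction of the segment already traveled, remapped by the easing curve.
        float t = easing.Evaluate(Mathf.Clamp01(elapsed / segmentDuration));
        transform.position = Vector3.Lerp(startPosition, nextWaypoint.position, t);
        transform.rotation = Quaternion.Slerp(startRotation, nextWaypoint.rotation, t);
    }
}
```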


4 Results & Conclusion

Using the Unity game engine, we have been able to develop a functional prototype tool for easy recording of video sequences of molecular scenes in virtual reality. The tool allows the user to download and visualize macromolecules from an online database and interact with them. The camera trajectory planning system can be used to record a video of the molecule in the MP4 format. The application can be used both with the HTC Vive and the Oculus Rift devices. Using a head-mounted display, the user is able to see the structure of biomolecules and easily orient himself or herself in the environment. The controllers can be used to intuitively plan the camera’s trajectory, as well as to interact with the molecules, the camera, and the UI. The application enables us to assess the potential of virtual reality for interactive creation of molecular structure animations. The user also benefits from the real 3D effect, which provides him or her with the feeling of being part of the immersive environment. The intuitive interactions designed to manipulate the molecule can serve as an inspiration for other VR tools. The resulting application was evaluated by two experts in protein engineering. One of them participated in the design of the requirements on the application and was giving feedback already during the implementation phase; the second expert experienced the application for the first time. In the following, we summarize their experience and feedback from the session, and then outline the possible directions of future work on our application, as well as in this field.

4.1 Results

As already mentioned, the resulting application was tested by two domain experts from Loschmidt Laboratories at the Faculty of Science at Masaryk University. The first expert also collaborated on the design of the functionality and was the originator of the idea of this thesis topic. Within the work on this thesis, we conducted several discussions and demo sessions, aiming to shape the application according to the expert’s needs while keeping the application and interaction simple and intuitive. The second expert did not get in contact with the application until the final testing. The session with each of them took 30 minutes, and after a brief introduction of the environment and interaction (especially for the novice expert), we let them freely interact with the environment, set the camera trajectory, and create the final video. They were asked to comment on their actions and impressions. After the session, they were asked to fill in a simple questionnaire in which they evaluated our application and compared it with the existing tools and options. In their comments within the session, as well as in the questionnaires, they both agreed that the tool provides a unique ability to record videos of molecules in an intuitive way. They also compared our application with the traditionally used tools for molecular visualization and concluded that these offer only complicated interfaces for recording videos. On the other hand, our tool is easy to use for people who are unfamiliar with computational tools and want to create videos or pictures of molecules for publication or educational purposes. In comparison with the widely used YASARA tool for molecule representation in VR, which is controlled by a mouse and keyboard, our tool fully exploits its VR implementation. In YASARA, the user cannot move around the environment efficiently. In our tool, the user can freely move in all three dimensions and interact with the environment with handheld controllers. This is far more intuitive and accessible and provides better orientation, more detailed understanding, and perhaps a higher chance of inspiration about how to understand or engineer biomolecules. Some tools, such as PyMOL, require the user to understand complicated concepts, such as the coordinate states of the molecules and the framerate, and to compute their ratios to be able to create movies; the user also has to use the command line. Our tool utilizes concepts familiar from real-life movie shooting, such as cameras and the screen. Furthermore, there is no command line, and the smooth interpolation between keyframes and the speed of the camera are controlled intuitively. Other VR tools require preparing the system outside of VR when loading molecule structures, whereas in our tool this can be easily managed directly in VR. The experts also mentioned the current limitations of our tool. The tool is still missing common representation strategies, such as the ribbon, cartoon, and surface representation. However, these are suggested for future work on the software.


For a user who is not introduced to the tool by the authors, it would be useful to have a tutorial scenario which teaches the user how to use the tool. A pop-up menu with a cheat sheet of controls could also prove useful in this regard. The user would benefit from a more complex selection approach in which he or she can work with multiple selections of atoms, residues, chains, or molecules. This could be used, for example, for coloring the whole selection at once in the course of shooting the movie. Another way of exploiting these selections could be the automatic orientation of the camera waypoints to face a particular selection, instead of having to manually reorient them using the controller. Animation of not only the camera movements but also the orientations of the molecules would allow the tool to create movies showing interactions of molecules. This could be used to create a movie showing proteins binding together and animating between their bound and unbound states. Another suggestion for future work is introducing the ability to load and display the dynamic dimension of molecules from molecular dynamics simulations, normal mode analyses, or other data. Despite these limitations, both experts agreed that the functionality supported by the application is very novel and useful, and that adding the suggested functions would make the tool very powerful for creating animations of molecular scenes. Such a tool would enrich the existing tools with new options for creating storytelling content.

4.2 Conclusion & Future Work

There are still structural biologists using pen and paper to communicate ideas, but this is becoming more and more difficult with all the data we have nowadays. Biologists need more powerful tools to help them with their work. In this thesis, we tried to show that virtual environments and their interaction possibilities can change the way biologists communicate their ideas and research outcomes to a broader audience. We created a prototype application to explore how virtual reality and animation can operate together to improve the way biologists work. We utilize the main benefits of virtual reality to create an environment where the user can interact with molecules and create storytelling animations.

53 4. Results & Conclusion to create an environment where the user can interact with molecules and create storytelling animations. The 3D virtual environment allows for clear orientation in the scene while the controls provide intuitive and satisfying interaction with the molecule and recording system. The approach of animating molecules in VR has a potential to offer more engagement and insight into molecular visualization field. The overall feedback on the prototype application is very positive and the experts confirmed that extending our prototype tool by the missing functionality definitely makes sense. Within the demo session with the experts, we obtained many suggestions for future extensions. We tried to summarized some of them. As mentioned, to support the work of biochemists, the tool should be able to visualize the molecule using common representation strate- gies, such as the ribbon, cartoon, and others. The ability to highlight and select various parts of the molecule and group them should also be supported in the application. We would also like to add the possibil- ity to load molecular dynamics and animations of molecules. This can be then used to record a video of moving molecules or to extensively explore molecular interactions, movements, and protein tunnels in VR. Another suggestion is to be able to animate not only the camera movements but also the molecules themselves. The user would def- initely benefit from this approach because he or she could animate molecular interactions. But we assume that this could clutter the users’ view and make the work with the application more complicated and less intuitive. This approach should be tested and evaluated in the future. In our ongoing research, we would like to work also on automating the camera focus. The current approach requires the user to set all the animation parameters manually. But many of them can be automated. The camera could, for example, automatically focus on a selection or a point of interest specified by the user and adjust its orientation accordingly. It could also change its position and follow the trajectory computed by the program, acting like a fully automatic camera opera- tor. Using this approach, the user can act only as a storyteller and drive the focus of the camera while all the details are handled by the tool. Connecting this tool with, for example, cellVIEW’s capabilities and approaches used for storytelling automation can result in a powerful and intuitive tool for video recording.


From the technical point of view, we would like to move to Unity 2019 in the future and utilize the benefits of the new ECS and DOTS approaches. This could improve the performance and scalability of the application. It would also allow for easier implementation of computer graphics effects, such as ambient occlusion, shadows, specular reflections, and others, which could provide better depth cues for the user and offer a more realistic visualization of the molecule. Also, a new VRTK version 4.0 has been released recently; using it would enhance the scalability and also enable the support for other HMDs. Using third-party Unity assets, we were not able to achieve the desired performance for real-time video recording, because none of them was optimized for recording in VR. Therefore, we would like to implement our own solution which records the frames in VR and utilizes the FFmpeg library to merge and encode the video in the MP4 format. Also, we can move many of the current computations of our application (e.g., atom positioning) from the CPU to the GPU to further improve the performance. To enhance the usability of the application, we would like to provide an automatic way of introducing the user to the application controls and environment. The current prototype assumes that the user is given instructions on how to use the tool beforehand. In the future, we would like to create an in-app tutorial that the user can complete before starting to work in the environment. We would also like to provide him or her with a better description of the controls and help that can be read anytime while using the app. We suggest further evaluating and testing the application once its performance and usability improve.


Bibliography

1. JENKINSON, Jodie. Molecular Biology Meets the Learning Sciences: Visualizations in Education and Outreach. Journal of Molecular Biol- ogy. 2018, vol. 430, no. 21, pp. 4013–4027. ISSN 0022-2836. Available from DOI: 10.1016/J.JMB.2018.08.020. 2. OLSON, Arthur J. Journal of Molecular Biology. Vol. 430, Perspectives on Structural Molecular Biology Visualization: From Past to Present. Academic Press, 2018. No. 21. ISSN 10898638. Available from DOI: 10.1016/j.jmb.2018.07.009. 3. O’CONNOR, Michael et al. Sampling molecular conformations and dynamics in a multiuser virtual reality framework. Science Advances. 2018, vol. 4, no. 6. Available from DOI: 10.1126/sciadv.aat2731. 4. SCHNEIDER, Michael; BELSOM, Adam; RAPPSILBER, Juri. Protein Tertiary Structure by Crosslinking/Mass Spectrometry. Trends in Biochemical Sciences. 2018, vol. 43, no. 3, pp. 157–169. ISSN 0968-0004. Available from DOI: 10.1016/J.TIBS.2017.12.006. 5. PDB Data Distribution by Experimental Method and Molecular Type [on- line] [visited on 2019-05-05]. Available from: http://www.rcsb. org/stats/summary. 6. KOZLÍKOVÁ, B.; KRONE, M.; FALK, M.; LINDOW, N.; BAADEN, M.; BAUM, D.; VIOLA, I.; PARULEK, J.; HEGE, H.-C. Visualization of Biomolecular Structures: State of the Art Revisited. Computer Graphics Forum. 2016, vol. 36, no. 8, pp. 178–204. Available from DOI: 10.1111/cgf.13072. 7. HEPTING, Daryl H. A new paradigm for exploration in computer-aided visualization. 1999. Available from DOI: 10.13140/RG.2.1.3716. 9362. PhD thesis. Simon Fraser University. 8. GOODSELL, David S. Our Molecular Nature. New York, NY: Springer New York, 1996. ISBN 978-1-4612-7508-4. Available from DOI: 10. 1007/978-1-4612-2336-8. 9. GOODSELL, David S. The Machinery of Life. New York, NY: Springer New York, 2009. ISBN 978-0-387-84924-9. Available from DOI: 10. 1007/978-0-387-84925-6.


10. Goodsell Home Page [online] [visited on 2019-05-19]. Available from: https://ccsb.scripps.edu/goodsell/. 11. LE MUZIC, Mathieu; AUTIN, Ludovic; PARULEK, Julius; VIOLA, Ivan. cellVIEW: a Tool for Illustrative and Multi-Scale Rendering of Large Biomolecular Datasets. In: KATJA BÜHLER AND LARS LINSEN AND NIGEL W. JOHN (ed.). Eurographics Workshop on Visual Computing for Biology and Medicine. The Eurographics Association, 2015, pp. 61–70. Available from DOI: 10.2312/vcbm.20151209. 12. COHEN, Jon. Meet the scientist painter who turns deadly viruses into beautiful works of art. Science. 2019. ISSN 0036-8075. Available from DOI: 10.1126/science.aax6641. 13. CV — Drew Berry [online] [visited on 2019-05-05]. Available from: https://www.drewberry.com/about. 14. Janet Iwasa, PhD - University of Utah [online] [visited on 2019-05-05]. Available from: https://medicine.utah.edu/faculty/mddetail.php?facultyID=u0863544. 15. mMaya – Clarafi [online] [visited on 2019-05-05]. Available from: https://clarafi.com/tools/mmaya/. 16. ZINI, Maria Francesca; POROZOV, Yuri; ANDREI, Raluca Mihaela; LONI, Tiziana; CAUDAI, Claudia; ZOPPÈ, Monica. BioBlender: Fast and Efficient All Atom Morphing of Proteins Using Blender Game Engine. 2010. Available from arXiv: 1009.4801. 17. MCINTIRE, John P.; HAVIG, Paul R.; GEISELMAN, Eric E. Stereoscopic 3D displays and human performance: A comprehensive review. Displays. 2014, vol. 35, no. 1, pp. 18–26. ISSN 0141-9382. Available from DOI: 10.1016/J.DISPLA.2013.10.004. 18. WIEBRANDS, Michael; MALAJCZUK, Chris J.; WOODS, Andrew J.; ROHL, Andrew L.; MANCERA, Ricardo L. Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine. Journal of Integrative Bioinformatics. 2018. Available from DOI: 10.1515/jib-2018-0010.


19. RATAMERO, Erick Martins; BELLINI, Dom; DOWSON, Christopher G.; ROEMER, Rudolf A. Touching proteins with virtual bare hands: how to visualize protein-drug complexes and their dynamics in vir- tual reality. Journal of Computer-Aided Molecular Design. 2018, vol. 32, no. 6, pp. 703–709. Available from DOI: 10.1007/s10822-018-0123- 0. 20. BRANDEN, Carl; TOOZE, John. Introduction to Protein Structure. The Quarterly Review of Biology. 1992, vol. 67, no. 1, pp. 44–45. Available from DOI: 10.1086/417455. 21. Protein structure - primary, secondary,tertiary and quarternary struc- ture [online] [visited on 2019-05-19]. Available from: http : / / easylifescienceworld . com / levels - of - organization - in - proteins/. 22. KOLTUN, Walter L. Precision space-filling atomic models. Biopolymers. 1965, vol. 3, no. 6, pp. 665–679. ISSN 0006-3525. Available from DOI: 10.1002/bip.360030606. 23. FIESER, Louis F. Plastic Dreiding models. Journal of Chemical Education. 1963, vol. 40, no. 9, pp. 457. ISSN 0021-9584. Available from DOI: 10.1021/ed040p457. 24. Discovery of the structure of DNA (article) | Khan Academy [online] [vis- ited on 2019-04-29]. Available from: https://www.khanacademy. org/science/high-school-biology/hs-molecular-genetics/ hs- discovery- and- structure- of- dna/a/discovery- of- the- structure-of-dna. 25. Early Interactive Molecular Graphics at MIT [online] [visited on 2019-05-19]. Available from: https://www.umass.edu/molvis/ francoeur/levinthal/lev-index.html%7B%5C#%7Dtext1. 26. LOKER, John. Dynamic vs. Static Visualizations for Learning Pro- cedural and Declarative Information. 2016. Available also from: https://csuchico- dspace.calstate.edu/bitstream/handle/ 10211.3/178465/Loker%7B%5C_%7DS%7B%5C_%7Dthesis%7B%5C_ %7DSpring2016.pdf?sequence=1. PhD thesis. 27. HEGARTY, Mary. Dynamic visualizations and learning: getting to the difficult questions. Learning and Instruction. 2004, vol. 14, pp. 343– 351. Available from DOI: 10.1016/j.learninstruc.2004.06.007.


28. WINN, William. Visualization in Learning and Instruction: A Cogni- tive Approach on JSTOR. Educational Communication and Technology. 1982, vol. 30, no. 1, pp. 3–25. Available also from: https://www. jstor.org/stable/30219822. 29. BRAME, Cynthia J. Effective Educational Videos: Principles and Guidelines for Maximizing Student Learning from Video Content. CBE—Life Sciences Education. 2016, vol. 15, no. 4. Available from DOI: 10.1187/cbe.16-03-0125. 30. BYŠKA, J; TRAUTNER, T; MARQUES, S M; DAMBORSKÝ, J; KO- ZLÍKOVÁ, B; WALDNER, M. Analysis of Long Molecular Dynam- ics Simulations Using Interactive Focus+Context Visualization. To appear at Computer Graphics Forum. 2019, vol. 38, no. 3. 31. KRAMERS, H.A. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica. 1940, vol. 7, no. 4, pp. 284–304. ISSN 0031-8914. Available from DOI: 10.1016/S0031- 8914(40) 90098-2. 32. HUMPHREY, William; DALKE, Andrew; SCHULTEN, Klaus. VMD: Visual molecular dynamics. Journal of Molecular Graphics. 1996, vol. 14, no. 1, pp. 33–38. Available from DOI: 10 . 1016 / 0263 - 7855(96)00018-5. 33. SCHRÖDINGER LLC. The PyMOL Molecular Graphics System, Version 1.8. 2015. 34. PETTERSEN, Eric F.; GODDARD, Thomas D.; HUANG, Conrad C.; COUCH, Gregory S.; GREENBLATT, Daniel M.; MENG, Elaine C.; FERRIN, Thomas E. UCSF Chimera – A visualization system for ex- ploratory research and analysis. Journal of Computational Chemistry. 2004, vol. 25, no. 13, pp. 1605–1612. Available from DOI: 10.1002/ jcc.20084. 35. KRIEGER, Elmar; VRIEND, Gert; KELSO, Janet. YASARA View- molecular graphics for all devices-from smartphones to worksta- tions. Bioinformatics Applications. 2014, vol. 30, no. 20, pp. 2981–2982. Available from DOI: 10.1093/bioinformatics/btu426. 36. CASE, David A. et al. The Amber biomolecular simulation programs. Journal of Computational Chemistry. 2005, vol. 26, no. 16, pp. 1668– 1688. ISSN 0192-8651. Available from DOI: 10.1002/jcc.20290.


37. BERENDSEN, H.J.C.; SPOEL, D. van der; DRUNEN, R. van. GROMACS: A message-passing parallel molecular dynamics implementation. Computer Physics Communications. 1995, vol. 91, no. 1-3, pp. 43–56. ISSN 0010-4655. Available from DOI: 10.1016/0010-4655(95)00042-E.
38. ABRAHAM, Mark James; MURTOLA, Teemu; SCHULZ, Roland; PÁLL, Szilárd; SMITH, Jeremy C.; HESS, Berk; LINDAHL, Erik. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. SoftwareX. 2015, vol. 1-2, pp. 19–25. ISSN 2352-7110. Available from DOI: 10.1016/J.SOFTX.2015.06.001.
39. JO, Sunhwan et al. CHARMM-GUI 10 years for biomolecular modeling and simulation. Journal of Computational Chemistry. 2017, vol. 38, no. 15, pp. 1114–1124. ISSN 0192-8651. Available from DOI: 10.1002/jcc.24660.
40. BROOKS, B. R. et al. CHARMM: The biomolecular simulation program. Journal of Computational Chemistry. 2009, vol. 30, no. 10, pp. 1545–1614. ISSN 0192-8651. Available from DOI: 10.1002/jcc.21287.
41. JURČÍK, Adam et al. CAVER Analyst 2.0: analysis and visualization of channels and tunnels in protein structures and molecular dynamics trajectories. Bioinformatics. 2018, vol. 34, no. 20, pp. 3586–3588. Available from DOI: 10.1093/bioinformatics/bty386.
42. MAKAREWICZ, Tomasz; KAŹMIERKIEWICZ, Rajmund. Molecular Dynamics Simulation by GROMACS Using GUI Plugin for PyMOL. Journal of Chemical Information and Modeling. 2013, vol. 53, no. 5, pp. 1229–1234. ISSN 1549-9596. Available from DOI: 10.1021/ci400071x.
43. MAKAREWICZ, Tomasz; KAŹMIERKIEWICZ, Rajmund. Improvements in GROMACS plugin for PyMOL including implicit solvent simulations and displaying results of PCA analysis. Journal of Molecular Modeling. 2016, vol. 22, no. 5, pp. 109. Available from DOI: 10.1007/s00894-016-2982-4.


44. ANANDAKRISHNAN, Ramu; DROZDETSKI, Aleksander; WALKER, Ross C.; ONUFRIEV, Alexey V. Speed of Conformational Change: Comparing Explicit and Implicit Solvent Molecular Dynamics Simulations. Biophysical Journal. 2015, vol. 108, no. 5, pp. 1153–1164. ISSN 0006-3495. Available from DOI: 10.1016/J.BPJ.2014.12.047.
45. QuteMol [online] [visited on 2019-05-07]. Available from: http://qutemol.sourceforge.net/.
46. Blender - a 3D modelling and rendering package [online] [visited on 2019-05-13]. Available from: https://www.blender.org/.
47. Maya | Computer Animation & Modeling Software | Autodesk [online] [visited on 2019-05-13]. Available from: https://www.autodesk.com/products/maya/overview.
48. rigimol [PyMOL Documentation] [online] [visited on 2019-05-07]. Available from: https://pymol.org/dokuwiki/doku.php?id=rigimol.
49. Molecular Flipbook [online] [visited on 2019-05-07]. Available from: http://molecularflipbook.org/.
50. BERMAN, H. M.; WESTBROOK, John; FENG, Zukang; GILLILAND, Gary; BHAT, T. N.; WEISSIG, Helge; SHINDYALOV, Ilya N.; BOURNE, Philip E. The Protein Data Bank. Nucleic Acids Research. 2000, vol. 28, no. 1, pp. 235–242. Available from DOI: 10.1093/nar/28.1.235.
51. Showcase – Clarafi [online] [visited on 2019-05-07]. Available from: https://clarafi.com/showcase/.
52. LE MUZIC, M.; PARULEK, J.; STAVRUM, A. K.; VIOLA, I. Illustrative Visualization of Molecular Reactions using Omniscient Intelligence and Passive Agents. Computer Graphics Forum. 2014, vol. 33, no. 3, pp. 141–150. Available from DOI: 10.1111/cgf.12370.
53. LE MUZIC, Mathieu; WALDNER, Manuela; PARULEK, Julius; VIOLA, Ivan. Illustrative Timelapse: A Technique for Illustrative Visualization of Particle-Based Simulations. In: Visualization Symposium (PacificVis), 2015 IEEE Pacific. 2015, pp. 247–254. Available from DOI: 10.1109/PACIFICVIS.2015.7156384.


54. KRONE, M.; KOZLÍKOVÁ, B.; LINDOW, N.; BAADEN, M.; BAUM, D.; PARULEK, J.; HEGE, H.-C.; VIOLA, I. Visual Analysis of Biomolecular Cavities: State of the Art. Computer Graphics Forum. 2016, vol. 35, no. 3, pp. 527–551. ISSN 0167-7055. Available from DOI: 10.1111/cgf.12928.
55. SCHÖNBORN, Konrad J.; BIVALL, Petter; TIBELL, Lena A. E. Exploring relationships between students’ interaction and learning with a haptic virtual biomolecular model. Computers & Education. 2011, vol. 57, no. 3, pp. 2095–2105. ISSN 0360-1315. Available from DOI: 10.1016/J.COMPEDU.2011.05.013.
56. VIVE™ | Discover Virtual Reality Beyond Imagination [online] [visited on 2019-05-14]. Available from: https://www.vive.com/eu/.
57. Oculus Rift: VR Headset for VR Ready PCs | Oculus [online] [visited on 2019-05-14]. Available from: https://www.oculus.com/rift/.
58. JERALD, Jason. The VR book: human-centered design for virtual reality. 1st. New York, NY, USA: Association for Computing Machinery and Morgan & Claypool, 2015. ISBN 978-1-97000-112-9. Available from DOI: 10.1145/2792790.
59. SAYLE, R. A.; MILNER-WHITE, E. J. RASMOL: biomolecular graphics for all. Trends in Biochemical Sciences. 1995, vol. 20, no. 9, pp. 374. Available also from: http://www.ncbi.nlm.nih.gov/pubmed/7482707.
60. CROLL, Tristan Ian. ISOLDE: a physically realistic environment for model building into low-resolution electron-density maps. Acta Crystallographica Section D Structural Biology. 2018, vol. 74, no. 6, pp. 519–530. Available from DOI: 10.1107/S2059798318002425.
61. 2016: The Year of VR? - Virtual Reality Society [online] [visited on 2019-03-28]. Available from: https://www.vrs.org.uk/2016-the-year-of-vr/.
62. BORREL, Alexandre; FOURCHES, Denis. RealityConvert: A tool for preparing 3D models of biochemical structures for augmented and virtual reality. Bioinformatics. 2017, vol. 33, no. 23, pp. 3816–3818. ISSN 1460-2059. Available from DOI: 10.1093/bioinformatics/btx485.


63. ZHANG, Jimmy F.; PACIORKOWSKI, Alex R.; CRAIG, Paul A.; CUI, Feng. BioVR: a platform for virtual reality assisted biological data integration and visualization. BMC Bioinformatics. 2019, vol. 20, no. 1, pp. 78. ISSN 1471-2105. Available from DOI: 10.1186/S12859-019-2666-Z.
64. JOHNSTON, Angus P. R. et al. Journey to the centre of the cell: Virtual reality immersion into scientific data. Traffic. 2018. ISSN 1600-0854. Available from DOI: 10.1111/tra.12538.
65. GODDARD, Thomas D.; HUANG, Conrad C.; MENG, Elaine C.; PETTERSEN, Eric F.; COUCH, Gregory S.; MORRIS, John H.; FERRIN, Thomas E. UCSF ChimeraX: Meeting modern challenges in visualization and analysis. Protein Science. 2018, vol. 27, no. 1, pp. 14–25. ISSN 0961-8368. Available from DOI: 10.1002/pro.3235.
66. NORRBY, Magnus; GREBNER, Christoph; ERIKSSON, Joakim; BOSTRÖM, Jonas. Molecular Rift: Virtual Reality for Drug Designers. Journal of Chemical Information and Modeling. 2015, vol. 55, no. 11, pp. 2475–2484. ISSN 1549-9596. Available from DOI: 10.1021/acs.jcim.5b00544.
67. Unity [online] [visited on 2019-05-14]. Available from: https://unity.com/.
68. GODDARD, Thomas D.; BRILLIANT, Alan A.; SKILLMAN, Thomas L.; VERGENZ, Steven; TYRWHITT-DRAKE, James; MENG, Elaine C.; FERRIN, Thomas E. Molecular Visualization on the Holodeck. Journal of Molecular Biology. 2018, vol. 430, no. 21, pp. 3982–3996. ISSN 0022-2836. Available from DOI: 10.1016/J.JMB.2018.06.040.
69. Immersive Science LLC | Virtual Reality Based Research Tools [online] [visited on 2019-05-05]. Available from: https://www.immsci.com/#products.
70. Nanome [online] [visited on 2019-05-05]. Available from: https://nanome.ai/.
71. MoSCoW Prioritisation | Agile Business Consortium [online] [visited on 2019-05-14]. Available from: https://www.agilebusiness.org/content/moscow-prioritisation.
72. RCSB PDB: Homepage [online] [visited on 2019-05-19]. Available from: https://www.rcsb.org/.


73. Atomic Coordinate Entry Format Version 3.3 [online] [visited on 2019-05-17]. Available from: http://www.wwpdb.org/documentation/file-format-content/format33/v3.3.html.
74. KOZLÍKOVÁ, Barbora. Visualization Techniques for Static and Dynamic Protein Molecules and Their Channels. 2011. Available also from: https://is.muni.cz/th/ni59v/Kozlikova_thesis. PhD thesis. Masaryk University.
75. TAME, J. R.; VALLONE, B. The structures of deoxy human haemoglobin and the mutant Hb Tyralpha42His at 120 K. Acta Crystallogr., Sect. D. 2000, vol. 56, pp. 805–811. Available from DOI: 10.2210/PDB1A3N/PDB.
76. TARINI, Marco; CIGNONI, Paolo; MONTANI, Claudio. Ambient Occlusion and Edge Cueing for Enhancing Real Time Molecular Visualization. IEEE Transactions on Visualization and Computer Graphics. 2006, vol. 12, no. 5, pp. 1237–1244. Available from DOI: 10.1109/TVCG.2006.115.
77. List of 20 Simple, Distinct Colors – Sasha Trubetskoy [online] [visited on 2019-05-14]. Available from: https://sashat.me/2017/01/11/list-of-20-simple-distinct-colors/.
78. Video | MPEG [online] [visited on 2019-05-12]. Available from: https://mpeg.chiariglione.org/standards/mpeg-4/video.
79. What is Virtual Reality? | Interaction Design Foundation [online] [visited on 2019-03-28]. Available from: https://www.interaction-design.org/literature/topics/virtual-reality.
80. DOLEŽAL, Milan. Collaborative Virtual Environments. Available also from: https://is.muni.cz/th/imoap/diploma-thesis-collaborative-vr.pdf.
81. Visual Studio IDE - Code Editor - Azure DevOps - & App Center - Visual Studio [online] [visited on 2019-05-19]. Available from: https://visualstudio.microsoft.com/.
82. HTC Vive Tutorial for Unity | raywenderlich.com [online] [visited on 2019-05-14]. Available from: https://www.raywenderlich.com/9189-htc-vive-tutorial-for-unity.
83. Powering the Rift | Oculus [online] [visited on 2019-05-14]. Available from: https://www.oculus.com/blog/powering-the-rift/.


84. 3D Scifi Kit Starter Kit - Asset Store [online] [visited on 2019-05-19]. Available from: https://assetstore.unity.com/packages/3d/environments/3d-scifi-kit-starter-kit-92152.
85. XU, Z.; HORWICH, A. L.; SIGLER, P. B. The crystal structure of the asymmetric GroEL-GroES-(ADP)7 chaperonin complex. Nature. 1997, vol. 388, pp. 741–750. Available from DOI: 10.2210/PDB1AON/PDB.
86. Unity Recorder - Asset Store [online] [visited on 2019-05-19]. Available from: https://assetstore.unity.com/packages/essentials/unity-recorder-94079.
87. RockVR - Asset Store [online] [visited on 2019-05-19]. Available from: https://assetstore.unity.com/publishers/24830.
88. FFmpeg [online] [visited on 2019-05-19]. Available from: https://ffmpeg.org/.
89. LANDAU, H. J. Sampling, data transmission, and the Nyquist rate. Proceedings of the IEEE. 1967, vol. 55, no. 10, pp. 1701–1706. ISSN 0018-9219. Available from DOI: 10.1109/PROC.1967.5962.
90. Cinemachine – Camera tools for passionate creators [online] [visited on 2019-05-19]. Available from: https://www.cinemachineimagery.com/.
