
Morphology of the hippocampus: Creating a web-based 3D interactive resource to teach the anatomy of the human hippocampus

by Alisa M. Brandt

A thesis submitted to Johns Hopkins University in conformity with the requirements for the degree of Master of Arts

Baltimore, Maryland
March 2019

© 2019 Alisa Brandt
All Rights Reserved

Abstract

The hippocampus is a critical region of the brain involved in memory and learning. It has been widely researched in animals and humans due to its role in consolidating new experiences into long-term declarative memory and its vulnerability in neurodegenerative diseases. The hippocampus is a complex, curved structure containing many interconnected regions that consist of distinct cell types. Despite the importance of understanding the normal state of hippocampal anatomy for studying its functions and the disease processes that affect it, didactic educational resources are severely limited. The literature on the hippocampus is expansive and detailed, but a communication gap exists between researchers presenting hippocampal data and those seeking to improve their understanding of this part of the brain. The hippocampus is typically viewed in a two-dimensional fashion; students and scientists have difficulty visualizing its three-dimensional anatomy and its structural relationships in space.

To improve understanding of the hippocampus, an interactive, web-based educational resource was created containing a pre-rendered 3D animation and manipulable 3D models of hippocampal regions. Segmentations of magnetic resonance imaging data were modified and sculpted to build idealized anatomical models suitable for teaching purposes. These models were animated in combination with illustrations and narration to introduce the viewer to the subject, and the completed animation was uploaded online and embedded into the interactive. A separate section of the interactive allows the user to rotate the models, hide and show different regions, and access additional explanatory text. The user interface and interactivity were coded to allow exploration of hippocampal regions and navigation between sections of the resource. The results of this project provide a didactic and accessible visualization for graduate students, researchers, clinicians, and other individuals involved in neuroscience. The animation and interactive models allow users to reinforce their understanding of 3D hippocampal anatomy and connectivity. By improving visual understanding of the hippocampus, this project aims to advance the communication and scientific study of hippocampus-related topics, such as epilepsy and Alzheimer's disease.

Alisa M. Brandt

Chairpersons of the Supervisory Committee

David Nauen, MD, PhD, Preceptor

Assistant Professor, Division of Neuropathology, Department of Pathology

The Johns Hopkins University School of Medicine

Lydia Gregg, MA, CMI, FAMI, Department Advisor

Associate Professor, Division of Interventional Neuroradiology, Department of Radiology and

Radiological Science and Department of Art as Applied to Medicine

The Johns Hopkins University School of Medicine

Michael I. Miller, PhD, Content Advisor

Director, Department of Biomedical Engineering

Co-Director, Kavli Neuroscience Discovery Institute

The Johns Hopkins University School of Medicine and Whiting School of Engineering

Acknowledgments

This project has been a remarkable journey of learning and insight. It would not have been possible without the support and expertise of many individuals, and I am so grateful that I had this opportunity to work with them and study with them. I would like to extend my sincerest thanks to:

Lydia Gregg, my advisor and Associate Professor in the Department of Art as Applied to

Medicine. Thank you for your skillful guidance, your patience, your honest feedback, and your supportive energy throughout this project. You have inspired me and helped me achieve what I thought was initially beyond my reach.

Dr. David Nauen, my preceptor, Assistant Professor and neuropathologist in the Department of

Pathology, for his enthusiasm and contributions. Thank you for sharing your work, your ideas, and your extensive knowledge with me, and for your time during the many elucidating discussions we had about this project.

Dr. Michael Miller and the Center for Imaging Science, for providing invaluable segmentation data and content feedback. Special thanks to Timothy Brown and Dr. Daniel Tward for our in-depth discussions and their presentation of their work.

Dr. Jiangyang Zhang, at the New York University School of Medicine, for providing diffusion data of a hippocampal specimen and assisting with segmentation software. Thank you for your expertise in preparing this informative dataset.

Dr. James Knierim, for his feedback on hippocampal connectivity; Dr. Tilak Ratnanather, for his helpful comments; and the four graduate students I had the pleasure of meeting, for sharing their perspectives and experiences while studying the hippocampus.

The Faculty and Staff of the Department of Art as Applied to Medicine, with special thanks to:

Corinne Sandone, for her leadership and unwavering support; David Rini, for his approachability and for taking care of all things 3D; Jennifer Fairman, for her vibrant energy and for always being willing to assist; Tim Phelps, for his sharp eye and fiery humor; Gary Lees, for his perception and indispensable knowledge of the archives; Juan Garcia, for his inspiring drive for accuracy and clarity; Sarah Poynton, for her mastery of effective writing; Norman Barker, for his passion and knowledge of photography; Dacia Balch, for her thoughtfulness and incredible coordination; and Carol Pfeffer, for her warmth and constant encouragement. Thank you for making the Department such a motivating and comfortable place to study and learn with the highest standards.

My classmates in the Class of 2019: Brittany Bennett, Insil Choi, Cecilia Johnson, Lohitha

Kethu, and Vondel Mahon, for their companionship and kindness, their artistry and beautiful work, and their laughter. I am glad to have walked this path with you.

My parents: Kyoko Kubota and Dwayne Brandt, for their support and encouragement

throughout this project and throughout my life. Thank you for always being there to listen, no matter

how far I am from home.

Finally, special thanks to The Vesalius Trust for Visual Communication in the Health

Sciences for their generous support of this project.

Table of Contents

Abstract ...... ii

Acknowledgments ...... iv

Table of Contents ...... vi

List of Figures ...... ix

List of Abbreviations ...... xii

Introduction...... 1

An overview of the human hippocampus ...... 1

Clarification of terminology ...... 3

Architecture of the adult hippocampal formation ...... 4

Memory generation via the perforant path and the hippocampal circuit ...... 7

Existing educational resources ...... 10

Animation and interactive media as learning tools ...... 12

WebGL-based 3D visualization ...... 12

Objectives ...... 13

Audiences ...... 13

Materials and Methods ...... 14

Sources of data ...... 14

Overview of software ...... 15

Preparation of diffusion data for segmentation in 3D Slicer ...... 16

Segmentation in 3D Slicer ...... 17

Creation of 3D models ...... 25

Hippocampal models ...... 25

1. Sculpting in ZBrush ...... 25

2. Modifications made to original CIS segmentations ...... 28

3. Boolean subtraction ...... 30

Additional anatomical models ...... 31

Pre-rendered introductory animation ...... 37

Interactive 3D web-based application ...... 37

Summary of interactivity ...... 37

User interface design ...... 38

Optimization of models for the web ...... 39

Creating a new Verge3D project ...... 42

Camera setup ...... 42

Creating an orientation indicator ...... 46

Creating materials and lights ...... 48

Visual coding with Puzzles ...... 49

Integrating HTML and CSS ...... 53

Additional coding with JavaScript ...... 56

Hosting on the web ...... 57

Results ...... 58

3D models ...... 58

Pre-rendered introductory animation ...... 65

Web-based interactive design and deployment ...... 93

Access to assets resulting from this thesis ...... 100

Discussion ...... 101

Project goals ...... 101

Segmentation and hippocampal model creation ...... 101

Introductory animation ...... 103

Web-based interactive ...... 103

Accessibility ...... 104

Implications for education and biocommunication ...... 104

Future directions ...... 106

Conclusion ...... 107

Appendices...... 108

Appendix A: Questions to guide informal discussions ...... 108

Appendix B: Preliminary wireframes for the interactive ...... 110

Appendix C: Descriptive text in the interactive ...... 117

Appendix D: Samples of Verge3D Puzzles and additional code ...... 121

References ...... 129

Vita ...... 134

List of Figures

Figure 1. Superior view of a right adult human hippocampus specimen ...... 2

Figure 2. A visual summary of the adult human hippocampal formation ...... 6

Figure 3. Diagrammatic summary of the perforant path and hippocampal circuit ...... 9

Figure 4. Photographs of the anterior hippocampal specimen prior to diffusion MRI ...... 14

Figure 5. Diffusion tensor imaging of the anterior hippocampal specimen in Slicer ...... 17

Figure 6. Toggling slice visibility in Slicer ...... 18

Figure 7. Activating the crosshair feature in the top menu bar in Slicer ...... 18

Figure 8. Segmenting the dentate gyrus (stratum moleculare) in the Segment Editor of Slicer ...... 19

Figure 9. Using the “Threshold” effect to generate a surface model in Slicer ...... 20

Figure 10. Segmenting a brain from anonymized DICOM data in Slicer ...... 21

Figure 11. Segmenting a ventricular system from anonymized DICOM data in Slicer ...... 21

Figure 12. Segmenting left and right hippocampi from anonymized DICOM data in Slicer ...... 22

Figure 13. Applying the “Smooth” effect in Slicer ...... 22

Figure 14. Exporting a segmentation from Slicer ...... 23

Figure 15. Completed dentate gyrus (stratum moleculare) segmentation in Cinema 4D ...... 23

Figure 16. Dentate gyrus and alveus segmentations in Cinema 4D ...... 24

Figure 17. Brain, ventricular system, and hippocampi segmentations in Cinema 4D ...... 24

Figure 18. Original surface segmentations of a left hippocampal formation provided by CIS ...... 26

Figure 19. Segmentations after Dynameshing and applying the “Polish Crisp Edges” effect ...... 26

Figure 20. Three stages of repairing the missing section of the stratum moleculare in ZBrush ...... 27

Figure 21. Two stages of repairing the stratum granulosum in ZBrush ...... 27

Figure 22. Using the Curve Surface brush to add geometry to the alveus ...... 28

Figure 23. Aligning the stratum moleculare segmentation from Slicer in ZBrush ...... 29

Figure 24. The stratum moleculare of the dentate gyrus after sculpting ...... 29

Figure 25. Sculpting the alveus and fimbria with references ...... 30

Figure 26.1-26.2. Object manager in Cinema 4D showing organization of Boolean subtractions ...... 32

Figure 27. Using Booles in Cinema 4D to inspect coronal views during model optimization ...... 34

Figure 28. Before and after Boolean subtraction to remove mesh overlap ...... 34

Figure 29. The viewport appearance of the coronal view models with lines turned on ...... 35

Figure 30. Modifying the brain segmentation with the "Inflate" effect in ZBrush ...... 35

Figure 31. Sculpting mossy fibers using the Curve Tube and Inflate brushes ...... 36

Figure 32. Sculpting a multiheaded dendritic spine using the Curve Tube brush ...... 36

Figure 33. Information architecture diagram created to plan overall interactivity ...... 38

Figure 34. Sample full-color storyboard created to plan the interactive ...... 39

Figure 35. Optimizing 3D model meshes in MeshLab ...... 41

Figure 36. The Verge3D App Manager ...... 43

Figure 37. Exporting the glTF file to overwrite the existing one in the application folder ...... 44

Figure 38. Modifying the settings of the 3D scene's main camera in Blender ...... 45

Figure 39. View through the 3D scene’s main camera in Blender ...... 46

Figure 40. Adding a Copy Rotation type of Object Constraint to the brain object in Blender ...... 47

Figure 41. Default appearance of the Verge3D PBR material node in the Node Editor ...... 49

Figure 42. Modifying a Verge3D PBR material node in Blender ...... 50

Figure 43. Using Verge3D’s visual coding interface, Puzzles, in a web browser ...... 51

Figure 44. The Variables tab in the Puzzles interface of Verge3D ...... 51

Figure 45. Using the “tween camera” Puzzle to modify camera position ...... 52

Figure 46. Using the "add annotation" Puzzle to add a label to a specified object ...... 52

Figure 47. Accessing the Puzzles Library ...... 53

Figure 48. Creating a new HTML button element with Puzzles ...... 54

Figure 49. Using the “on event of” Puzzle to trigger a change in model visibility ...... 55

Figure 50. Using the "set style: overflow" Puzzle to create scrolling functionality ...... 55

Figure 51. Modifying the background image of the element with Puzzles ...... 56

Figure 52. Rendered model of the brain and brainstem ...... 59

Figure 53. Rendered model of the ventricular system ...... 60

Figure 54. Rendered model of the left hippocampal formation ...... 61

Figure 55. Rendered model of the left hippocampal formation with color coding of regions ...... 62

Figure 56. Rendered model of the left hippocampal formation, select regions hidden ...... 63

Figure 57. Rendered model of the coronal view of the left hippocampal formation ...... 64

Figure 58.1-58.23. Animation storyboards ...... 65

Figure 59.1-59.32. Animation stills ...... 77

Figure 60. Summary of user interface elements ...... 94

Figure 61.1-61.10. Screenshots of the interactive ...... 95

Figure 62.1-62.7. Preliminary wireframes for the interactive ...... 110

Figure 63.1-63.3. Puzzles to switch to the “3D Adult Anatomy” section ...... 121

Figure 64. Puzzles to embed the pre-rendered animation from a video-hosting website ...... 124

Figure 65. Puzzles to trigger changes in button appearance in response to mouse movements ...... 124

Figure 66. Puzzles to toggle region and label visibility ...... 124

Figure 67. Puzzles to toggle “About” popup visibility ...... 125

Figure 68. Puzzles to switch to a section of the “More info” popup ...... 125

Figure 69. Puzzles to advance through pages with the “Next” button in the “Regions” section ...... 126

Figure 70. Puzzles to return through pages with the “Prev” button in the “Regions” section ...... 127

List of Abbreviations

Boole: Boolean operation

C4D: Cinema 4D

CA: Cornu ammonis

CIS: Center for Imaging Science

CSS: Cascading Style Sheets

DG: Dentate gyrus

DICOM: Digital Imaging and Communications in Medicine

DTI: Diffusion tensor imaging

DWI: Diffusion-weighted imaging

FTP: File Transfer Protocol

GIF: Graphics Interchange Format

glTF: Graphics Library Transmission Format

GW: Gestational weeks

HTML: Hypertext Markup Language

MRI: Magnetic Resonance Imaging

PBR: Physically Based Rendering

QECD: Quadric Edge Collapse Decimation

Slicer: 3D Slicer

STL: Stereolithography

FLAIR: Fluid-Attenuated Inversion Recovery

URL: Uniform Resource Locator

WebGL: Web Graphics Library

Introduction

An overview of the human hippocampus

The human hippocampus is located in the medial temporal lobe, protruding from the floor of the lateral ventricle, where it is bathed by cerebrospinal fluid. It is known to be critical for the generation of long-term declarative memory, learning, and aspects of emotional regulation, among other functions (Dekeyzer et al. 2017; Duvernoy et al. 2013). Its involvement in these functions, its structure and connectivity, and its vulnerability in various diseases have been intensely investigated.

The rodent and non-human primate hippocampus have also been studied extensively, providing useful insights into structural or functional similarities and differences (Insausti et al. 2017; Strange et al. 2014), but many aspects of human hippocampal structure and function remain incompletely understood.

The hippocampus is named in reference to its arced, tapering shape resembling that of a seahorse (genus Hippocampus) and has also been compared to a "ram's horn" (Duvernoy et al. 2013). From anterior to posterior, it consists of an enlarged head, a body, and a tail (Figure 1). The head curves posterolaterally, transitioning to the body and then the tail. The tail curves superiorly and posteromedially as it leads into the fornix (Duvernoy et al. 2013; Yasutaka et al. 2013). Regularly spaced projections along the structure and larger "digitations" in the head are commonly seen (Duvernoy et al. 2013; Yasutaka et al. 2013). Important adjacent structures include the subiculum and the entorhinal cortex of the parahippocampal gyrus, which are interconnected with the hippocampus. Another nearby structure closely involved with the hippocampus is the amygdala, a center of emotional experience and response; together they are included in the set of structures referred to as the limbic system (Duvernoy et al. 2013; Vanderah & Gould 2016). Within the hippocampus, the two major subdivisions, the cornu ammonis and the dentate gyrus, lie interlocked or rolled into each other along the length of the structure. Each is subdivided further into layered regions with distinct cellular characteristics.

Figure 1. Superior view of a right adult human hippocampus specimen. A: anterior, P: posterior, L: lateral, M: medial.

Understanding the structure of the adult hippocampus can be facilitated by studying its embryological development. Rapid transformations in position, shape, volume, and cellular composition produce its complex, rolled structure. The most dramatic changes in shape occur early in fetal development. At about 8 to 9 gestational weeks (GW), hippocampal development begins, and the primordial hippocampal region can be identified as a flat, unfolded plate of cortex lining the lateral ventricle in the medial aspect of each hemisphere (Arnold & Trojanowski 1996; Bajic et al. 2012; Yang et al. 2014). At about 11 to 14 GW, the dentate gyrus becomes defined; it increases in thickness and begins to rotate towards the cornu ammonis. The granule cell layer of the dentate gyrus condenses, and the hippocampal sulcus deepens (Arnold & Trojanowski 1996; Humphrey 1967; Kier et al. 1997). Infolding of the hippocampus over the parahippocampal gyrus begins at about 16 GW as the hippocampal sulcus becomes deeper and wider, then narrower, as a result of continued growth and inward rotation of the dentate gyrus. A vertical to horizontal rotation of the hippocampus and associated structures also occurs between 14 and 20 GW (Ge et al. 2015). By 30 GW, a close resemblance to the adult hippocampus has been reached: infolding is complete, growth has slowed, and the walls of the hippocampal sulcus have fused (Arnold & Trojanowski 1996; Ge et al. 2015; Humphrey 1967; Kier et al. 1997; Yang et al. 2014). Human hippocampal development is an area of study that continues to progress; comprehension of the normal process of development can aid identification of features or causes of hippocampus-associated pathology (Gogtay et al. 2006; Milesi et al. 2014; Kier et al. 1997).

Different regions of the hippocampus are selectively affected by diseases and aging (Adler et al. 2018; Beaujoin et al. 2018; Blümcke et al. 2013). Understanding the normal state of hippocampal architecture is crucial for studying disease processes that affect memory, such as temporal lobe epilepsy (Blümcke et al. 2013; Comper et al. 2017) and Alzheimer's disease (Adler et al. 2018; Dekeyzer et al. 2017). Alzheimer's disease is a progressive neurodegenerative disease that affects the cerebral cortex and medial temporal lobes and involves buildup of amyloid plaques and neurofibrillary tangles (Dekeyzer et al. 2017; Marks et al. 2017; Robinson et al. 2014). The primary system of connections between mesocortical areas and the hippocampus required for memory consolidation is destroyed in Alzheimer's disease (Augustinack et al. 2010; Zeineh et al. 2017). In general, among the medial temporal lobe structures, the hippocampus is known to be an early, prominent site of neurodegeneration (Boutet et al. 2014; Marks et al. 2017). The complexity and vulnerability of this structure, as well as selective vulnerability of certain regions within the structure itself, emphasize the need for effective visual aids when learning about the hippocampus and its pathological states.

Clarification of terminology

For the purposes of this thesis: Hippocampus refers to the protrusion into the lateral ventricle, consisting of the cornu ammonis and the dentate gyrus (Dekeyzer et al. 2017; Duvernoy et al. 2013; Radonjic et al. 2014). The fimbria is also included in the hippocampus due to its close relationship to the alveus. Hippocampal formation refers to a group of structures in the medial temporal lobe that includes the hippocampus, the subiculum (as well as the presubiculum and parasubiculum), and the entorhinal cortex (Duvernoy et al. 2013; Lavenex & Lavenex 2013).

To the author's knowledge, these are currently the most commonly used terms, but usage can vary in the literature; for example, the subiculum may sometimes be considered part of the hippocampus (e.g. Vanderah & Gould 2016; Yasutaka et al. 2013), or the entorhinal cortex may not be included in the hippocampal formation (e.g. Ge et al. 2015).

Some regional terminology and boundaries between regions are controversial or difficult to confidently discern. The following descriptions of hippocampal structures are presented as a summary of the views established by Duvernoy et al. (2013) or otherwise prevailing currently.

Architecture of the adult hippocampal formation

The two main divisions of the hippocampus, the cornu ammonis and the dentate gyrus, wrap around each other and contain interconnected layers consisting of axons, dendrites, and/or cell bodies of various types (Dekeyzer et al. 2017; Duvernoy et al. 2013) (Figure 2). The cornu ammonis and the dentate gyrus have each been referred to as a gray matter lamina, though some have challenged this manner of describing the hippocampus as a simple bilaminar structure (Yasutaka et al. 2013).

The cornu ammonis (CA) is frequently referred to as the hippocampus proper. It includes the alveus, stratum oriens, stratum pyramidale, stratum radiatum, and stratum lacunosum-moleculare.

An additional layer, the stratum lucidum, is present in a limited portion of the cornu ammonis

(Duvernoy et al. 2013). In the stratum pyramidale, four subfields of pyramidal cell bodies with different characteristics, CA1 through CA4, are recognized. Pyramidal neurons are principal excitatory (glutamatergic) cells (Duvernoy et al. 2013; Heckers & Konradi 2015; Yassa & Stark 2011).

A pyramidal neuron has a triangular cell body, one axon extending from its triangular base, basal dendrites with many dendritic spines, and one large apical dendrite (Duvernoy et al. 2013; Yassa & Stark 2011). Pyramidal neurons are thought to be regulated by inhibitory GABAergic interneurons (such as basket cells) that interact with cell bodies, axons, and dendrites (Duvernoy et al. 2013; Heckers & Konradi 2015).

CA1, the largest subfield, begins where the subiculum ends and contains scattered pyramidal cell bodies. CA2 is denser and narrower, and transitions into CA3, where the cornu ammonis curves towards the dentate gyrus. CA3 is characterized by the definite presence of mossy fibers, which are thin unmyelinated axons from granule cells of the dentate gyrus. CA4 fills the hilus, or the concavity of the dentate gyrus; some authors do not recognize CA4 as significantly distinguishable from CA3 (e.g. Insausti et al. 2017), but differences in appearance and vulnerability to disease are distinct in humans (Blümcke et al. 2013; Duvernoy et al. 2013; Zeineh et al. 2017).

The axons of pyramidal neurons traverse the stratum oriens before reaching the alveus. The alveus is a white matter layer that makes up the majority of the hippocampus' intraventricular surface. These white matter fibers are oriented anteroinferior to posterosuperior as they pass into the fimbria and eventually reach the fornix (Shepherd et al. 2007; Yasutaka et al. 2013). The apical dendrites of the pyramidal neurons travel deeper into the hippocampus, forming the stratum radiatum, which has a striated appearance due to the arrangement of these dendrites (Duvernoy et al. 2013). In this layer, synapses occur with Schaffer collaterals and other fibers. The stratum lacunosum-moleculare contains the most distal apical dendrites of pyramidal neurons, as well as axon bundles travelling anterior to posterior, forming synapses with the apical dendrites as they pass by (Duvernoy et al. 2013).

The dentate gyrus is characterized by the presence of neurons called granule cells and is generally readily visualized in histological staining. The dentate gyrus can be divided into three layers: the stratum moleculare, containing granule cell dendrites; the stratum granulosum, containing numerous granule cell bodies; and the polymorphic layer, which is mostly acellular and contains mossy fibers (Duvernoy et al. 2013; Lim et al. 1997). Granule cells are excitatory (glutamatergic) and have small, round cell bodies, which are densely packed in the stratum granulosum (Duvernoy et al. 2013; Yassa & Stark 2011). The polymorphic and molecular layers of the dentate gyrus contain basket interneurons, which are important inhibitors of granule cells (Duvernoy et al. 2013). The narrow, medial portion of the dentate gyrus visible between the fimbria and the subiculum is called the margo denticulatus, which typically has a "toothed" appearance consisting of rounded protrusions in humans (Duvernoy et al. 2013; Yasutaka et al. 2013). The hippocampal sulcus separates the dentate gyrus from the portions of the cornu ammonis distal to it; it is fused in the adult, though some cavities may remain in its deeper portions (Duvernoy et al. 2013).

© 2019 Alisa Brandt

Figure 2. A visual summary of the adult human hippocampal formation. The illustration shows the hippocampal regions and connectivity discussed in this project. Coronal section of regions created with reference to: Duvernoy et al. 2013; Heckers & Konradi 2015.

The subiculum lies medial and adjacent to the hippocampus, forming the flat, superior part of the parahippocampal gyrus (Duvernoy et al. 2013; Yasutaka et al. 2013). It has also been described as a transitional zone between the cornu ammonis and the cortex of the parahippocampal gyrus (Vanderah & Gould 2016). The subiculum can be further elaborated into different structures, such as the presubiculum, parasubiculum, and subiculum proper, but only the subiculum as a whole is considered here.

The entorhinal cortex, located in the anterior portion of the parahippocampal gyrus, is considered to be the primary input to the hippocampus; its superficial Layer II and Layer III are origins of neurological pathways that enter the hippocampus (Duvernoy et al. 2013).

In addition to these layered regions, research suggests the presence of longitudinal gradients or differential organization along the anterior-posterior axis of the hippocampus. Yasutaka et al.

(2013) concluded that the human hippocampus contains an anterior to posterior succession of segmented neuronal units. Narrow strips of dentate gyrus and CA subfields may be linked in precise arrangements (Duvernoy et al. 2013). An anterior to posterior gradient may exist for retrieval of increasingly detailed declarative memories, and the posterior hippocampus may be specialized for accurate spatial processing (Strange et al. 2014). A consistent finding in humans is that the anterior hippocampus in particular is activated during semantic processing; for example, when making a logical inference based on facts (Strange et al. 2014). This may be important for the ability to link related episodic memories together (Strange et al. 2014).

Memory generation via the perforant path and the hippocampal circuit

The role played by the hippocampal formation in the consolidation of short-term memory into long-term memory has been largely accepted. Scoville and Milner (1957) described various degrees of memory loss or memory impairment following bilateral medial temporal lobe surgery to treat severe epileptic seizures in a series of case studies. The memory deficits appeared to correspond to the extent of hippocampal removal, impairing the ability to form new lasting memories from daily experiences without affecting technical skills, language, or early memories (Milner & Klein 2016; Scoville & Milner 1957). These findings emphasized the crucial involvement of the hippocampal formation in memory generation.

Advanced brain imaging techniques and recent studies on animals and humans have uncovered more details. New information from the cerebral cortex is processed in the hippocampus, and then fixed or stored elsewhere in the brain, in the association areas of the cerebral cortex (Augustinack et al. 2010; Duvernoy et al. 2013; Vanderah & Gould 2016). This process is thought to consolidate new experiences into long-term declarative memory, enabling the conscious recollection of facts and events (semantic and episodic memory), as well as spatial memory enabling navigation (Duvernoy et al. 2013).

A key neurological pathway thought to be involved in the generation of long-term declarative memory in the hippocampus is called the perforant path. The entorhinal cortex, the primary input to the hippocampus, receives sensory information and many projections from various areas of the association cortex, such as the posterior cingulate gyrus, the orbital cortex, and regions of the frontal, parietal, and temporal lobes (Duvernoy et al. 2013; Vanderah & Gould 2016). From the entorhinal cortex (ERC), fibers "perforate" or travel through the adjacent subiculum to reach the hippocampus.

Two paths within this perforant path are recognized: one originating from ERC Layer II projecting to the dentate gyrus and CA3, and one originating from ERC Layer III to project to CA1 (Augustinack et al. 2010; Yassa & Stark 2011; Zeineh et al. 2017). These paths are followed by additional hippocampal connections, including the trisynaptic circuit. The trisynaptic circuit refers to three synapses: from the entorhinal cortex to the dentate gyrus, from the dentate gyrus to CA3, and from CA3 to CA1 (marked in red in Figure 3; Lavenex & Lavenex 2013). Duvernoy et al. (2013) described the perforant path in combination with these additional connections within the hippocampus, dividing them into the polysynaptic intrahippocampal pathway (containing the Layer II perforant path), and the direct intrahippocampal pathway (containing the Layer III perforant path). Layer II perforant path fibers travel through stratum lacunosum-moleculare and cross or pass around the hippocampal sulcus to reach the stratum moleculare of the dentate gyrus; some bypass the dentate gyrus and reach CA3 directly (Duvernoy et al. 2013; Yassa & Stark 2011; Zeineh et al. 2017). After Layer II fibers synapse on the dentate gyrus, the mossy fibers of granule cells in the dentate gyrus extend to CA4 and CA3 (Frotscher et al. 2006; Lim et al. 1997; Wilke et al. 2014; Zeineh et al. 2017). From CA3 pyramidal neurons, axon branches called Schaffer collaterals curve back through the stratum radiatum as a band of fibers reaching pyramidal neurons in CA1 (Duvernoy et al. 2013; Zeineh et al. 2017). Recurrent collateral fibers in CA3 form a feedback loop by circling back to dendrites of pyramidal cells in the same subfield (Yassa & Stark 2011). The Layer III perforant path fibers reach CA1 directly, by arcing down from stratum lacunosum-moleculare through stratum radiatum and into stratum pyramidale (Zeineh et al. 2017).

[Figure 3 diagram key: the perforant path from ERC Layer II to the dentate gyrus and CA3 (part of the "polysynaptic intrahippocampal pathway" described by Duvernoy et al. 2013); the perforant path from ERC Layer III to CA1 (part of the "direct intrahippocampal pathway" described by Duvernoy et al. 2013); the three synapses of the trisynaptic circuit (#1 ERC to dentate gyrus, #2 dentate gyrus to CA3, #3 CA3 to CA1); mossy fibers from the dentate gyrus to CA4 and CA3; recurrent collateral fibers in CA3; Schaffer collaterals from CA3 to CA1; collaterals that bypass the dentate gyrus; output fibers leaving the hippocampus and subiculum via the alveus, to subcortical regions via the fimbria and fornix; and hippocampal fibers projecting back to the subiculum and ERC Layers V and VI. Labeled structures include the alveus, CA1, CA3, CA4, the dentate gyrus (DG), the subiculum, and the entorhinal cortex (ERC) Layers II, III, and V/VI; the circuit begins at the entorhinal cortex.]

Figure 3. Diagrammatic summary of the perforant path and hippocampal circuit, beginning at the entorhinal cortex. Major neuronal connections involved in memory generation as identified in various human and animal studies are shown. Created with reference to: Augustinack et al. 2010; Duvernoy et al. 2013; Lavenex & Lavenex 2013; Parekh et al. 2015; Radonjic et al. 2014; Yassa & Stark 2011; Zeineh et al. 2017; and personal communication with Dr. James Knierim, Johns Hopkins University.

The unidirectional mossy fiber path is noteworthy due to the powerful synapse that occurs between dentate gyrus granule cells and pyramidal neurons of CA3, in the stratum lucidum (Duvernoy et al. 2013; Insausti et al. 2017; Yassa & Stark 2011). Large presynaptic expansions along the mossy fibers, called giant boutons, are able to strongly depolarize pyramidal apical dendrites (Lim et al. 1997; Martin et al. 2017; Wilke et al. 2014). These apical dendrites have large, specialized, multiheaded dendritic spines, called thorny excrescences. The mossy fiber synapse has been shown to be disrupted by Alzheimer's disease in mouse models (Wilke et al. 2014). It has been suggested that the dentate gyrus-CA3 synapse supports new learning and separation of similar pieces of information, to distinguish between memories of similar experiences and store them in a non-overlapping fashion (Yassa & Stark 2011). In contrast, collateral fibers from Layer II of the entorhinal cortex travelling directly to CA3 and bypassing the dentate gyrus are thought to be more involved in triggering recollection of that information (Yassa & Stark 2011).

Output fibers leave the hippocampus via the alveus and converge into the fimbria, to eventually reach the fornix. This major output pathway leads through the crus, body, and column of the fornix, and reaches the anterior thalamic nucleus directly or via the mammillary bodies, before continuing to the posterior cingulate cortex and the parahippocampal gyrus (Duvernoy et al. 2013; Vanderah & Gould 2016). Some fibers split off as the precommissural fornix and reach the orbital and anterior cingulate cortex (Vanderah & Gould 2016). Hippocampal outputs also project directly back to the subiculum and deep Layers V and VI of the entorhinal cortex (Lavenex & Lavenex 2013; Zeineh et al. 2017). The loop known as the Papez circuit, though perhaps an oversimplification, summarizes these complex connections as beginning in the entorhinal cortex, reaching the hippocampus, then proceeding through the fornix, mammillary bodies and the anterior thalamic nucleus, cingulate cortex, parahippocampal gyrus (entorhinal cortex), and finally back to the hippocampus (Augustinack et al. 2010; Vanderah & Gould 2016).

Existing educational resources

It is challenging to communicate the rolled, curving shape of the hippocampus and its inner structure. Students and scientists often struggle to visualize the hippocampus and its regions in multiple dimensions. Resources for learning hippocampal morphology are limited in effectiveness and accessibility, despite the importance of understanding normal structure and development in relation to hippocampus-associated neurological disorders (Arnold & Trojanowski 1996; Beaujoin et al. 2018; Gogtay et al. 2006; Kier et al. 1997; Milesi et al. 2014). Currently available visual resources for learning about the hippocampus known to the author consist of simplified diagrams (illustrating different regions of the hippocampus, or box-and-arrow diagrams representing connectivity) or still images (such as slices from medical imaging and stained histological sections). These images are further limited because they typically show the hippocampus via a single, standard view: coronally through the anterior-posterior midpoint of the hippocampal body, without three-dimensional context (e.g. Figure 1 in Blümcke et al. 2013). As a result, students, researchers, and clinicians may hold a limited understanding of this critical structure, without opportunities to learn its shape and the relationships of its different parts in space. Discussions with graduate students and researchers at several labs at the Johns Hopkins University revealed a need for an interactive resource that would offer views of the human hippocampus from different angles, as well as a clear breakdown of its layered structure and neuronal connectivity (see Appendix A for guiding questions).

The literature on the hippocampus and hippocampal formation is expansive. An incredible amount of data exists, but, as noted through informal discussions with graduate students and researchers, it is intricately detailed and difficult to digest, especially when one is unfamiliar with the anatomy of the area. Textbooks such as Nolte's The Human Brain (Vanderah & Gould 2016) present descriptions, images, and brief, simple videos available online, and The Human Hippocampus (Duvernoy et al. 2013) presents detailed information, numerous illustrations, and photographs of dissections and stained sections. However, these resources lack interactivity and three-dimensional visualizations. Although some websites, such as brainfacts.org ("3D Brain", 2017) and healthline.com (Seladi-Schulman & Han 2018), provide interactive educational visuals of the human brain, none focus on the human hippocampus specifically. Some websites present information about the hippocampus in an educational manner using descriptive text, images and diagrams, and simple animated GIFs, without real-time interactivity (e.g. "Neural pathways", n.d.). Digital atlases of either the human brain as a whole (e.g. available through the Scalable Brain Atlas, Bakker et al. 2015) or, more rarely, of the human hippocampus (e.g. the Penn Hippocampus Atlas, Adler et al. 2018) can be found, but these are presented as data visualization methods or repositories of data rather than educational tools. Finally, many resources about the hippocampus are focused almost entirely on the rodent hippocampus, such as those available on temporal-lobe.com (Cappaert & van Strien 2016). A significant gap becomes apparent when searching for interactive, didactic, visual material focused on the human hippocampus.

Animation and interactive media as learning tools

With increasing use of modern technology, new methods for scientific or medical education are emerging. In many cases, learning can be expedited using visual media, particularly when it is animated, multidimensional, or responsive to the learner's actions. Supplemental 3D animations have been shown to be effective in teaching surgical techniques compared to traditional surgical videos alone (Prinz et al. 2005), interactive 3D visualization has improved surgical planning and learning of patient anatomy (Soler et al. 2014), and interactive or animated anatomical models have been developed to teach complex three-dimensional structures (Pedersen et al. 2013; Raffan et al. 2017; van de Kamp et al. 2014). Though formal reports on the success of these media are limited in number, animated or interactive 3D media are thought to reach wider audiences, reveal structures difficult to see in dissections, clarify spatial relationships, increase knowledge retention and enjoyment, and reduce costs due to reusability (Pedersen et al. 2013; Prinz et al. 2005; Raffan et al. 2017; Stull et al. 2009).

Learning theories related to instructional design provide valuable insight into how an educational resource can best aid the learner in building their understanding of a complex topic.

During this project, various learning theories were taken into consideration when creating animated and interactive material. Sweller’s cognitive load theory (Sweller et al. 2011), Reigeluth’s elaboration theory (Reigeluth et al. 1980), and Bruner’s constructivism theory (Clark 2018; Stapleton & Stefaniak

2019) are examples of learning theories that were particularly relevant to this project.

WebGL-based 3D visualization

To create an interactive learning resource involving real-time rendering of 3D models, a

WebGL-based method was determined to be the most effective and most accessible. WebGL (Web

Graphics Library), a 3D graphics application program interface developed by the Khronos Group, allows for real-time interactive 3D graphics in any modern web browser. It is a cross-platform web standard that has no need for plug-ins or installation of separate software. WebGL runs in the

HTML5 Canvas element and can be closely integrated with HTML/CSS content to build interactivity

(such as clickable buttons in a user interface). Toolkits such as Verge3D exist for streamlined development of 3D interactive web-based content using WebGL and 3D software, such as Blender.
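As a minimal, self-contained illustration of this kind of integration (a sketch for orientation only, not code from this project; the element IDs and the visibility flag are hypothetical), a WebGL context can be obtained from an HTML5 canvas and driven by an ordinary HTML button:

```html
<canvas id="scene" width="640" height="480"></canvas>
<button id="toggle">Hide region</button>
<script>
  // Obtain a real-time 3D rendering context directly from the canvas; no plug-ins are needed.
  const canvas = document.getElementById('scene');
  const gl = canvas.getContext('webgl');
  if (!gl) {
    console.warn('WebGL is not supported in this browser.');
  } else {
    gl.clearColor(0.12, 0.12, 0.12, 1.0); // dark background
    gl.clear(gl.COLOR_BUFFER_BIT);
  }

  // Ordinary HTML/CSS elements can drive the 3D scene. In practice a toolkit such as
  // Verge3D manages the scene graph, so this flag only stands in for that logic.
  let regionVisible = true;
  document.getElementById('toggle').addEventListener('click', () => {
    regionVisible = !regionVisible;
    // ...redraw the scene here using the updated visibility state...
  });
</script>
```

Toolkits such as Verge3D layer scene management, materials, and visual scripting on top of this same browser capability, which is why no separate player or plug-in is required of the end user.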

Objectives

Three main objectives directed the development of this project:

1) Communicate the anatomy of the human hippocampus, from its shape and layered structure in 3D space to its basic neuronal connectivity and cellular features, using pre-rendered animation and interactive web-based media.

2) Provide an accessible resource that students and researchers can visit at any time through the internet to watch the animation, review hippocampal anatomy, manipulate the 3D model, and enhance their understanding of connectivity and spatial relationships.

3) Investigate novel data for morphological features and relationships in the human adult hippocampus not previously depicted, by comparing new segmentations and visualizations to the available literature.

Audiences

Primary audiences for the results of this project are graduate students and neuroscience researchers working with the hippocampus, and clinicians working with disease processes such as dementia in Alzheimer’s disease and epilepsy. Secondary audiences include individuals interested in neuroanatomy or brain development, such as undergraduate neuroscience majors, and other professionals such as neuropathologists.

Materials and Methods

Sources of data

Segmentations of a high-resolution (~500 μm³) ex vivo T2 MRI dataset of an anonymous normal adult hippocampus were provided in STL (stereolithography) format by the Center for Imaging Science (CIS) at the Johns Hopkins University. These raw segmentations were modified to serve as the foundation for the creation of the idealized hippocampus model.

Diffusion magnetic resonance imaging (MRI) data of the anterior half of an anonymous normal adult hippocampus was provided by the project preceptor, Dr. David Nauen, Department of Pathology, the Johns Hopkins University School of Medicine, and Dr. Jiangyang Zhang, Department of Radiology, the New York University School of Medicine (Figure 4). Diffusion-weighted imaging (DWI) measures random diffusion of water molecules in biological tissues along diffusion-encoding gradients (Soares et al. 2013). A technique called diffusion tensor imaging (DTI) acquires DWI data along at least six independent diffusion-encoding directions and fits the results to a three-dimensional tensor model to characterize the average water diffusion profile within each voxel (Mori & Zhang 2006; Soares et al. 2013). For free water such as cerebrospinal fluid where diffusion is unrestricted in all directions, the DTI result will likely indicate an isotropic diffusion profile, whereas for major white matter tracts, DTI results can often be used to infer the directions of axonal bundles due to restriction of diffusion along the axons (Mori & Zhang 2006; Soares et al. 2013).


Figure 4. Photographs of the anterior hippocampal specimen prior to diffusion MRI. (A) Superior view. (B) Medial view. The hippocampal head is oriented towards the top of the image.

The diffusion MRI data from the anterior hippocampal specimen was acquired over approximately 16 hours using a horizontal 7 Tesla preclinical MRI system (Bruker Biospin, Billerica, MA, USA) and a volume transceiver coil with a 25-mm inner diameter. The resulting volumetric DTI data contained 322 slices in the coronal plane.
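For readers unfamiliar with the tensor fit mentioned above, the standard single-tensor DTI model (a general textbook formulation, not a description of this specific acquisition) relates each diffusion-weighted signal to a symmetric 3 × 3 tensor:

```latex
S_k = S_0 \exp\!\left(-b \, \hat{g}_k^{\mathsf{T}} \mathbf{D} \, \hat{g}_k\right),
\qquad
\mathbf{D} =
\begin{pmatrix}
D_{xx} & D_{xy} & D_{xz} \\
D_{xy} & D_{yy} & D_{yz} \\
D_{xz} & D_{yz} & D_{zz}
\end{pmatrix}
```

Here S_k is the signal acquired with diffusion-encoding direction ĝ_k, S_0 is the signal without diffusion weighting, b is the diffusion-weighting factor, and D is the diffusion tensor. Because D has six unique elements, at least six non-collinear encoding directions (plus a b = 0 measurement) are required for the fit, which is the "at least six independent diffusion-encoding directions" noted above. Roughly equal eigenvalues of D indicate isotropic diffusion, as in cerebrospinal fluid, whereas in coherent white matter the principal eigenvector approximates the local axonal orientation.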

Two anonymized DICOM datasets from the Johns Hopkins Hospital, one T2 and one T2 FLAIR MRI, were segmented to create a simplified model of the ventricular system and a normal brain, respectively. Hippocampi were also segmented from the T2 MRI, for orientation purposes.

High-resolution scans of histological slides of the human hippocampus were provided by the project preceptor for use as supplemental imagery in selected portions of the interactive resource.

Overview of software

Various software and tools were used throughout this project to manipulate the data, create 2D and 3D assets, produce a pre-rendered animation, and develop the web-based 3D interactive. 3D Slicer (version 4.10, Fedorov et al. 2012) was used to view medical imaging datasets and perform segmentation of selected structures. Pixologic ZBrush 2018 was used to modify, sculpt, and idealize segmentations into 3D models suitable for teaching purposes. Cinema 4D (versions R17 and R20) was used to manipulate 3D models further, apply materials and lighting, and render scenes for the pre-rendered animation. Voiceover audio for the animation was recorded using a Rode NT-USB microphone and edited with Adobe Audition 2019. Rendered scenes and audio were arranged and compiled together into the final animation using Adobe After Effects 2019. MeshLab 2016 (Cignoni et al. 2008) was used to optimize models for the web-based interactive. Blender (version 2.79b) and Verge3D (version 2.10) were used in combination to build the web-based interactive. Model arrangement, camera setup, materials, and lighting were constructed in Blender. User interface and interactivity were developed using Verge3D and HTML/CSS.

Information architecture diagrams were created with draw.io, an online diagram editor integrated into Google Drive. Storyboards for the 3D pre-rendered introductory animation were created in Adobe Photoshop 2019. A 2D schematic illustration of hippocampal neurons was created in Adobe Photoshop with line work from Adobe Illustrator 2019. Adobe Illustrator was also used to create wireframes, interactive interface elements, arrows, and icons.

Preparation of diffusion data for segmentation in 3D Slicer

To begin processing the diffusion data correctly in 3D Slicer, two new text files were first generated from the provided "gradienttable" text file, after transposing its vertical columns of information into horizontal rows. One text file was named "bvec", containing the first three rows from "gradienttable", and the other "bval", containing the last row. The SlicerDMRI project's set of tools (dmri.slicer.org, Norton et al. 2017) was installed in 3D Slicer via the Extension Manager.
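This transposition can be done by hand in a text editor or scripted. The sketch below is illustrative only; the exact layout of "gradienttable" is an assumption here (one diffusion direction per line, with the gradient x, y, and z components followed by the b-value) and may differ from the actual file.

```javascript
// Illustrative sketch only: the layout of "gradienttable" is assumed, not confirmed.
const fs = require('fs');

// Read the table as rows of whitespace-separated values.
const rows = fs.readFileSync('gradienttable', 'utf8')
  .trim()
  .split(/\r?\n/)
  .map(line => line.trim().split(/\s+/));

// Transpose: column i of the original file becomes row i of the output.
const transposed = rows[0].map((_, i) => rows.map(row => row[i]));

// FSL-style outputs: the first three transposed rows are the gradient directions ("bvec"),
// and the last transposed row holds the b-values ("bval").
fs.writeFileSync('bvec', transposed.slice(0, 3).map(r => r.join(' ')).join('\n') + '\n');
fs.writeFileSync('bval', transposed[transposed.length - 1].join(' ') + '\n');
```

The resulting "bvec" and "bval" files are then supplied to the "NiftiFSL to Nrrd" conversion described below.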

The provided NIfTI (.nii) data file containing raw directional information was imported into 3D Slicer, then converted into a DWI file using the DWIConvert option in the SlicerDMRI module. "FSLToNrrd" was selected, and the input DWI Volume file was left blank. A new output DWI volume name was added (for example, "Output DWI Volume file"). Under the "NiftiFSL to Nrrd" conversion parameters, the .nii data file, the bval text file, and the bvec text file were selected, and "Apply" was clicked to run the conversion.

When the process was completed, the new output DWI volume was saved. Instructions from the Slicer DTI tutorial, "Diffusion MRI Analysis" (Pujol n.d.), were followed after ensuring the new output DWI file was visible. A mask and a baseline volume were created as per the tutorial's instructions, the mask was hidden, and Diffusion Tensor Estimation was executed. The resulting DTI file (.nhdr) was saved. In each of the three 2D image viewers on the right side, the upper-left corner double arrow was clicked to expand the view's options. The L field was set to none, the F field to none, and the B field to the new color DTI file. The original specimen was of a right hippocampus, but processing of the data mirrored it, allowing for easier comparison with the left hippocampal segmentations provided by CIS.

Segmentation in 3D Slicer

Segmentation in the Segment Editor of 3D Slicer could begin after processing the data into full-color DTI, where different colors represent the orientation of fibers in the specimen (Figure 5).


Figure 5. Diffusion tensor imaging (DTI) of the anterior hippocampal specimen in Slicer. (A) Coronal, (B) sagittal, and (C) axial views. The specimen was not scanned in anatomical position, so the anterior end of the specimen is pointing towards the bottom of the screen in B and C. Green in the images indicates fiber directionality up and down on the screen, red indicates left and right on the screen, and blue indicates perpendicularity to the screen.

Segmentation was performed on two selected hippocampal regions: the dentate gyrus (stratum moleculare) and the alveus. In general, manual segmentation of the two hippocampal regions was found to be more effective than an automated process. After loading the data, the Segment Editor of Slicer was selected in the top menu bar, and the Add (+) button was clicked to add a new empty segment. The coronal view was primarily used for initial segmentation, and the view's slider or the arrow keys were used to navigate through slices. The sagittal and axial views were useful when fine-tuning the segmentations. The Draw tool was used the most frequently to delineate (click and drag to draw) and fill areas of interest (hit the Enter key after drawing) in each slice with the selected color. The Erase and Paint tools were also used. As segmentation progressed, the live 3D view of the segmented area was turned on periodically by clicking "Show 3D". Slice visibility could also be turned on in the 3D view to aid with orientation (Figure 6). For large segmentations, the 3D view was left off while drawing to avoid lag. Turning on crosshair visibility was useful to mark the location of a specific point in all four views of the data, by holding Shift and placing the mouse over the desired point after activating the feature in the top menu bar (Figure 7). First segmenting isolated slices with several skipped slices in between helped reveal the overall shape of the structure early on (Figure 8).


Figure 6. Toggling slice visibility in Slicer. (A) Toggling coronal slice visibility. (B) 3D view with coronal, sagittal, and axial slice visibility turned on. Orientation labels (“axial”, “I”, “S”, etc.) are non-anatomical due to specimen alignment during scanning.

Figure 7. Activating the crosshair feature in the top menu bar in Slicer.

Figure 8. Segmenting the dentate gyrus (stratum moleculare) in the Segment Editor of Slicer. The Draw tool and live 3D view are active. The segmentation is in progress; only some select slices have been segmented towards the anterior end to quickly reveal the overall 3D shape of the structure. Crosshairs visible in all four views indicate a specific selected point in the data, chosen by holding Shift and moving the mouse. Text within image not intended to be read. Orientation labels ("axial", "A", "P", etc.) are non-anatomical due to specimen alignment during scanning.

Automated segmentation methods were used to generate a surface model of the hippocampal DTI specimen and for segmentation of a brain and ventricular system. To segment out the entire DTI specimen from its surroundings, a grayscale non-DTI version of the dataset was selected as the master volume (for better contrast against the black background). A new empty segment was created in the Segment Editor, and the "Threshold" effect was used to automatically separate the specimen from the background (Figure 9) by setting the Automatic Threshold to "auto->maximum".

This "Threshold" effect was also used as the first step for segmenting the alveus, as it represents part of the outer border of the specimen. After generating a segmentation that separated the specimen from the black background, the Erase tool was used in each slice to isolate the alveus.

Figure 9. Using the "Threshold" effect to generate a surface model in Slicer, by separating the anterior specimen from the black background. Orientation labels ("L", "S", etc.) are non-anatomical due to specimen alignment during scanning.

To segment the entire brain, the ventricular system, and low-resolution hippocampi for orientation and location purposes, the anonymized T2 and T2 FLAIR MRI datasets were imported into Slicer via the File menu > DICOM. Automatic segmentation of the brain from the T2 FLAIR dataset was performed using the “Threshold” effect set to “auto->maximum” and the “Yen” method, and by adjusting the threshold range sliders. However, this method still included many pieces of skin, skull, and other structures in the segmentation, which were manually removed (Figure 10).

Automatic segmentation of the ventricles from the T2 dataset was performed using the “Grow from seeds” effect, where at least two new segments must be created (in this case, one for the ventricles and one for other structures) and the software extrapolates their borders throughout the slices. After this process, the “other” segment was deleted. Manual cleanup through the slices was again necessary to remove traces of unwanted structures (Figure 11). The low-resolution hippocampi were manually segmented from the T2 dataset with the Draw tool (Figure 12).

To align the T2 and T2 FLAIR MRI dataset segmentations as closely as possible, a rough fiducial marker of brain location and size was segmented from the T2 dataset. All segmentations and the fiducial marker were later imported and aligned in Cinema 4D.

Figure 10. Segmenting a brain from anonymized DICOM data in Slicer. Many traces of the skull and other structures remain following automatic segmentation, and have not yet been removed. Text within image not intended to be read.

Figure 11. Segmenting a ventricular system from anonymized DICOM data in Slicer. Text within image not intended to be read.

Figure 12. Segmenting left and right hippocampi from anonymized DICOM data in Slicer. This created a reference for hippocampal location and orientation relative to the ventricular system. Text within image not intended to be read.

The “Smooth” effect was used when segmentations were nearly complete to remove sharp points and rough edges. For the anterior specimen segmentations, the Median method with a kernel size of

0.25 mm was effective and preserved details (Figure 13).


Figure 13. Applying the “Smooth” effect in Slicer. (A) Before and (B) after application. Orientation label (“L”) is non-anatomical due to specimen alignment during scanning.

Segmentations completed in Slicer were exported as STL files by clicking on the drop-down arrow beside the "Segmentations…" button and selecting "Export to files…" (Figure 14). Spatial relationships of different segmentations were maintained in the exported files (Figures 15-17).

Figure 14. Exporting a segmentation from Slicer.

Figure 15. Completed dentate gyrus (stratum moleculare) segmentation in Cinema 4D, in blue viewed inside a translucent surface model of the anterior specimen. Oblique lateral view. Text within image not intended to be read.

Figure 16. Dentate gyrus (blue) and alveus (white) segmentations in Cinema 4D, within a translucent surface model of the anterior specimen. Oblique posterior view.

Figure 17. Brain (translucent beige), ventricular system (light blue), and hippocampi segmentations (pink) in Cinema 4D. Oblique lateral view.

Creation of 3D models

Hippocampal models

1. Sculpting in ZBrush

The ten surface segmentations provided by CIS as unprocessed STL files were imported into ZBrush for smoothing, sculpting, repairing of holes, and other modifications, to transform the data into idealized models of hippocampal regions for teaching purposes. Each STL represented a region or combined regions of the hippocampal formation. The middle of the inferior area was missing a large section of tissue (Figure 18). When imported, they were appended as subtools within one ZBrush project. The dentate gyrus (stratum moleculare) segmentation created from 3D Slicer was also imported into ZBrush and aligned to the CIS segmentations to serve as an additional reference.

Before making any modifications in ZBrush, each STL was duplicated and renamed as a "Sculpt" version. Each new, duplicated subtool was then Dynameshed to prepare the surface for modification. The "Polish Crisp Edges" effect under the Deformation sub-palette was applied slightly on all of the "Sculpt" subtools to globally smooth out stair-step artifacts from segmentation, sharp corners, and points, without losing details (Figure 19). The main brushes used to sculpt were the Standard, Inflate, Smooth, Move Topological, and Dam Standard brushes. Move Topological was generally used over Move because it does not affect the mesh on the other, unseen side of the model. Regions missing a section of tissue on the inferior edge of the hippocampal formation, such as the entorhinal cortex, CA1, the stratum moleculare, and the stratum granulosum, were repaired to bridge the gap, close holes, and refine the geometry (Figures 20 and 21). Geometry was added when necessary with the Curve Surface brush (Figure 22). Trimming of meshes was performed with the Trim Rectangle or Trim Lasso brushes. Additional small artifacts were removed by using the Select brush, hiding the area to be removed, and selecting "Delete Hidden" followed by "Close Holes".


Figure 18. Original surface segmentations of a left hippocampal formation provided by CIS. The models are shown here after import into ZBrush as a single tool containing ten subtools, without modifications. (A) Oblique superior view. (B) Oblique inferior view.

Figure 19. Segmentations after Dynameshing and applying the “Polish Crisp Edges” effect. This smoothed the surface without losing detail.

Figure 20. Three stages of repairing the missing section of the stratum moleculare (of the dentate gyrus) in ZBrush, with the Move Topological and Inflate brushes and Dynamesh.


Figure 21. Two stages of repairing the stratum granulosum in ZBrush. (A) The model after smoothing and Dynameshing. (B) The model after closing large holes, smoothing artifacts, and sculpting details observed in reference material.


Figure 22. Using the Curve Surface brush to add geometry to the alveus. Inferior view with head of hippocampus towards the top of the screen. Text within image not intended to be read.

2. Modifications made to original CIS segmentations

Primary modifications were made by adding topological details from the new high-resolution segmentations of DTI data from the anterior hippocampal specimen (Figures 23 and 24). Additional repairs and modifications were based on a review of the literature (in particular, Duvernoy et al. 2013, Yasutaka et al. 2013, and Beaujoin et al. 2018), observation and photographs of hippocampus specimens (e.g., Figure 25), and personal communication with the project preceptor.

Figure 23. Aligning the stratum moleculare (of the dentate gyrus) segmentation from Slicer in ZBrush, with the stratum moleculare sculpt in progress. The segmentation from 3D Slicer has the “Ghost” effect applied, so that it appears translucent. This alignment provided a guide for sculpting more detailed folds on the idealized model. Text within image not intended to be read.

Figure 24. The stratum moleculare of the dentate gyrus after sculpting, with the alveus hidden and other structures ghosted.


Figure 25. Sculpting the alveus and fimbria with reference to a specimen photograph and the surface model of the anterior hippocampal specimen. Text within image not intended to be read.

3. Boolean subtraction

To create a cut coronal view of the hippocampus model, Boolean subtractions were performed in Cinema 4D using Boole Objects. For didactic purposes, the location of the coronal section was chosen to pass through the hippocampal body, just posterior to the head.

When the sculpted models were ready to be sent to Cinema 4D, ZRemesh was performed on each of them to produce efficient, re-topologized surface geometry. Models were imported into Cinema 4D via the “GoZ” script in ZBrush. Boole Objects in Cinema 4D (Figures 26.1-26.2) were used to subtract one Cube Object from each hippocampal region in a pairwise fashion, with “High quality” unchecked. These steps were also performed before sculpting was complete to inspect areas of overlap in coronal views of the models (Figure 27). To eliminate areas of mesh overlap directly in Cinema 4D, additional pairwise Boolean subtractions were created between hippocampal regions. For example, the meshes of CA1 and the alveus overlapped in places due to segmentation artifacts, so a Boole Object was created with copies of the models to subtract and remove the unwanted areas of overlap. The result was a coronal view with a clean, sharply defined border between the alveus and CA1 (Figure 28).

The segmentations of the brain, ventricular system, and low-resolution hippocampi were used as guides to find the most accurate orientation for the final sculpted models. The number of subdivisions for each cube used in the Boole Objects was increased to 200 in each direction, and the Boole Objects were made editable to produce the didactic coronal section (Figure 29).
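
The same Boole setup can be reproduced with Cinema 4D's Python scripting; the sketch below is a hypothetical example (the object name “CA1” is an assumption, and the SDK constants should be checked against the installed release).

    import c4d
    from c4d import documents

    doc = documents.GetActiveDocument()

    region = doc.SearchObject("CA1")          # assumed name of one sculpted region
    cube = c4d.BaseObject(c4d.Ocube)          # cutting cube for the coronal section
    cube[c4d.PRIM_CUBE_SUBX] = 200            # 200 subdivisions in each direction
    cube[c4d.PRIM_CUBE_SUBY] = 200
    cube[c4d.PRIM_CUBE_SUBZ] = 200

    # Boole Object set to "A subtract B" with "High quality" unchecked.
    boole = c4d.BaseObject(c4d.Oboole)
    boole[c4d.BOOLEOBJECT_TYPE] = c4d.BOOLEOBJECT_TYPE_SUBTRACT
    boole[c4d.BOOLEOBJECT_HIGHQUALITY] = False

    doc.InsertObject(boole)
    region.Remove()                           # re-parent the region under the Boole (operand A)
    region.InsertUnder(boole)
    cube.InsertUnderLast(boole)               # cube becomes operand B
    c4d.EventAdd()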

Additional anatomical models

A simplified brain model and ventricular system were sculpted in ZBrush starting from the segmentations of anonymized DICOM data from 3D Slicer. The brain and ventricular system were optimized by removing artifacts, improving symmetry, and re-meshing. The brain appeared atrophic in areas and was sculpted in ZBrush to give it a normal appearance (Figure 30). The brainstem was also lengthened. In Cinema 4D, the relative sizes and locations of all models were matched closely using the orientation markers segmented in 3D Slicer. As a result, one 3D scene was created containing the brain, ventricular system, and hippocampal models together.

Figure 26.1. Object manager in Cinema 4D showing organization of Boolean subtractions. Continued in Figure 26.2.

Figure 26.2. Object manager in Cinema 4D showing organization of Boolean subtractions (continued).

Figure 27. Using Booles in Cinema 4D to inspect coronal views of the hippocampal model during model optimization. The alveus has been hidden.


Figure 28. (A) Before and (B) after Boolean subtraction to remove mesh overlap and clarify borders between regions.

Figure 29. The viewport appearance of the coronal view models with lines turned on.

Figure 30. Modifying the brain segmentation with the “Inflate” effect in ZBrush, under the Deformation sub-palette.

Simplified representations of axons (the mossy fibers of dentate gyrus granule cells) and dendrites (of CA3 pyramidal neurons) were sculpted de novo in ZBrush with reference to images and descriptions by Lim et al. (1997), Wilke et al. (2014), and Martin et al. (2017). The brushes used in ZBrush were mainly Curve Tube, Inflate, and Move Topological (Figure 31). A focal mossy fiber and a branched dendritic spine were sculpted to be the centerpiece of the scene (Figure 32). Two additional mossy fibers and one additional dendrite were created and later duplicated in Cinema 4D to fill the scene in the foreground and background. Due to the focus on the mossy fiber-CA3 dendrite synapse, other components such as interneurons were not included in the scene. The mossy fiber giant boutons were simplified by omitting filopodia, which would otherwise be in contact with surrounding interneurons.

Figure 31. Sculpting mossy fibers using the Curve Tube and Inflate brushes.

Figure 32. Sculpting a multiheaded dendritic spine (synapsing with a mossy fber giant bouton) using the Curve Tube brush. Text within image not intended to be read.

Pre-rendered introductory animation

3D assets were brought together in Cinema 4D to build and animate scenes. Materials were applied, lighting was set up, and movements of objects or cameras were animated. In the scene depicting mossy fibers and CA3 dendrites, the background was created using multiple Plane objects with artwork from Adobe Photoshop projected onto them, with the Alpha channel enabled. Subsurface scattering and layered effects were applied to most materials in the Luminance channel. Ambient occlusion was turned on and its color adjusted in the Render Settings. Lighting consisted of a Physical Sky with modified settings, as well as multiple Point lights. The Compositing tag was applied to objects to hide them from the camera without losing their effects on other objects; for example, the Physical Sky was set to “Hidden from camera” to create a transparent background while keeping its lighting effects active. An image created in Photoshop (later used as a background layer in After Effects) was applied to a Background Object for color coordination and scene testing within Cinema 4D. For compositing purposes, the “Mat object” option in the Compositing tag, set to the color black, was used to render only the visible portions of an object partially hidden behind other objects.

Rendered sequences were imported into Adobe After Effects for compositing. After rendering different parts of the models separately, translucency effects and transitions were created by animating the opacities of different layers. Leader lines and most labels were created directly in After Effects. Other 2D assets were created in Illustrator or Photoshop and imported into After Effects as individual assets or compositions.

Interactive 3D web-based application

Summary of interactivity

The web-based interactive resource was designed to have two main sections: a section containing a pre-rendered, introductory animation, and a section containing the interactive 3D model with additional educational material (Figure 33).

[Figure 33 content: From the home screen, dashboard buttons in the upper right lead to Home and About (project description, credits, and full references list). The “Intro Animation” section contains an embedded, pre-rendered 3D animation with playback controls, introducing the brain, the structure of the hippocampus, and the basics of memory generation. The “3D Adult Anatomy” section loads a manipulable model of an idealized human hippocampus together with simplified brain/ventricle models for orientation; the user can rotate the model (click + drag), snap to anterior, lateral, posterior, and medial axis views, turn individual regions and their labels on or off, and open popups containing brief descriptions of each region, 2D diagrams/illustrations, and selected portions of histological images.]

Figure 33. Information architecture diagram created to plan overall interactivity.

User interface design

Grayscale wireframes were developed in Adobe Illustrator during early planning of interactivity and content organization in the interactive (Appendix B). These preliminary wireframes helped identify areas where the overall layout, design, navigation, and content could be improved. Detailed, full-color storyboards for selected parts of the interactive were also created to serve as guides when building the user interface and appearance with Verge3D (Figure 34). Standard symbols for various buttons were designed, such as a house-shaped icon for the Home button and a simple silhouette of a hippocampus for the 3D Adult Anatomy button. In the 3D Adult Anatomy section, all buttons for manipulation of the hippocampal model, together with the orientation indicator, were placed in an organized fashion on the left side of the screen. Space was thereby created on the right side of the screen for additional written and visual content. Buttons to go back to the Home screen, view information about the project, and switch between the two main sections of the interactive were all placed at the top of the screen.

© 2019 Alisa Brandt

Figure 34. Sample full-color storyboard created to plan the interactive (“3D Adult Anatomy” section). Created in Adobe Illustrator. The image of the hippocampal model is a rendered image from Cinema 4D. Text within image not intended to be read.

Optimization of models for the web

The open-source software MeshLab (Cignoni et al. 2008) was used to optimize 3D model meshes for use in the web-based interactive. This avoided hardware performance issues and ensured that the web-based interactive would be accessible online to a wider audience.

Final models were exported from Cinema 4D as STL files. After importing the mesh to be optimized into MeshLab as an STL file, the mesh was optimized by selecting the Filters menu > Remeshing, Simplification and Reconstruction > Simplification: Quadric Edge Collapse Decimation (QECD). In the popup window, a target number of faces was set, the percentage reduction was left at the default value of 0, the quality threshold was set to 1, and the boundary preserving weight was set to 10. The options “Preserve Normal”, “Preserve Topology”, “Optimal position of simplified vertices”, “Planar simplification”, and “Post-simplification cleaning” were checked (Figure 35).

All of the full hippocampal model meshes, as well as the simplified brain and ventricular system meshes, were optimized with a target number of 20,000 faces. For use in the interactive, an additional, low-resolution merged version of the full idealized hippocampal model was created by using “Connect Objects and Delete” in Cinema 4D and smoothing in ZBrush. This small model was optimized with a target number of 1,500 faces. Optimized models were exported from MeshLab as STL files with new names.
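
The same decimation settings can be applied in batch through pymeshlab, MeshLab's Python module; the sketch below is a minimal example with placeholder file names, and the filter and parameter names follow recent pymeshlab releases (they differ slightly in older versions).

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh("CA1_export.stl")        # placeholder input file

    # Quadric Edge Collapse Decimation with the settings used in the GUI:
    # target 20,000 faces, quality threshold 1, boundary preserving weight 10,
    # preserved normals/topology, optimal vertex placement, planar
    # simplification, and post-simplification cleaning.
    ms.meshing_decimation_quadric_edge_collapse(
        targetfacenum=20000,
        qualitythr=1.0,
        boundaryweight=10.0,
        preservenormal=True,
        preservetopology=True,
        optimalplacement=True,
        planarquadric=True,
        autoclean=True,
    )

    ms.save_current_mesh("CA1_optimized.stl") # placeholder output file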


Figure 35. Optimizing 3D model meshes in MeshLab using the Quadric Edge Collapse Decimation (QECD) filter. Shown here as an example is the model of CA1. (A) Before QECD, the mesh had 268,244 faces and was too complex for web use. (B) After QECD, the mesh had 20,000 faces (the target number specified in (C)).

Creating a new Verge3D project

Blender is a free, open-source software package used for many steps of the 3D creation process, including modeling, rigging, animating, sculpting, compositing, simulation, and game creation. Verge3D is a toolkit that integrates with Blender for the development of interactive, WebGL-based 3D content. A new Verge3D project was created using the “Create New App” panel on the right side of the App Manager page (Figure 36). After choosing a name for the new project and clicking “Create App”, a new folder with the chosen name was created locally in the folder “verge3d/applications”; it automatically contained various files, such as an HTML file, a CSS file, JavaScript files, a glTF file (Graphics Library Transmission Format, a file format for 3D models and scenes), and a default Blender file with a cube.

This Blender file was modified to create the 3D scene with the hippocampal models. All optimized STL files were imported into the Blender scene at a scale of 0.01, set under the “Import STL” settings on the left side of the screen. The brain, ventricular system, and small simplified hippocampi were initially imported at a scale of 0.01, then rescaled in Blender to 50% of their imported size. With every change, the Blender file was saved and the glTF file was overwritten via the Export menu (Figure 37) so that the complete application would update during development.
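
Because Verge3D 2.10 integrates with Blender 2.79, the import step can also be scripted from Blender's Python console; the following is a minimal sketch with placeholder file paths and object names.

    import bpy

    stl_files = [
        "/path/to/optimized/CA1.stl",          # placeholder paths to the
        "/path/to/optimized/Alveus.stl",       # optimized STL exports
        # ... remaining regions
    ]

    # Import each optimized STL at a scale of 0.01, matching the GUI setting.
    for path in stl_files:
        bpy.ops.import_mesh.stl(filepath=path, global_scale=0.01)

    # Rescale the brain, ventricular system, and small hippocampi to 50%
    # of their imported size (object names are assumptions).
    for name in ("Brain", "Ventricles", "Hippocampus_L_small", "Hippocampus_R_small"):
        obj = bpy.data.objects.get(name)
        if obj is not None:
            obj.scale = (0.5, 0.5, 0.5)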

Camera setup

The main, default camera in the Blender scene was modified to turn off panning and zooming and to set a custom distance from the center of the scene. After selecting the Camera object in the Outliner Editor, the camera icon was selected in the Properties Editor to begin editing its Verge3D Settings (Figure 38). “Allow Panning” was unchecked, and zooming was inactivated by setting the minimum distance and maximum distance to the same value. The view through the main camera was activated by pressing Numpad-0 or by selecting the camera in the View menu of the 3D View Editor (Figure 39). To create different positions of the camera around the model that provide the user with anterior, lateral, posterior, and medial views, four Empty Axis objects were created via the Shift+A menu. Each Empty Axis was positioned at a point in 3D space whose distance to the origin equaled the distance specified in the camera settings. These Empty Axis objects were later used as new positions for the camera when working in Verge3D's visual coding interface.
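
A minimal Blender Python sketch of this placement is shown below; the radius value is a placeholder that would be set to the fixed camera distance, and the axis assignments are assumptions for illustration.

    import bpy

    radius = 12.0  # placeholder: the fixed camera distance from the origin

    # One Empty Axis per view; which axis corresponds to "lateral" or "medial"
    # depends on the side of the hippocampus being shown.
    view_positions = {
        "AnteriorView":  (0.0, -radius, 0.0),
        "LateralView":   (radius, 0.0, 0.0),
        "PosteriorView": (0.0, radius, 0.0),
        "MedialView":    (-radius, 0.0, 0.0),
    }

    for name, location in view_positions.items():
        bpy.ops.object.empty_add(type='PLAIN_AXES', location=location)
        bpy.context.object.name = name  # rename the newly created empty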

Figure 36. The Verge3D App Manager, displaying the contents of the local “verge3d” folder. New applications are created using the “Create New App” panel. Listed on the left are numerous example projects provided by the software developers, as well as user-created applications such as this thesis project, “Morphology of Memory”. In the row of buttons for “Morphology of Memory”, clicking the green symbol button leads to viewing the 3D scene (the glTF file) in the browser without any HTML/CSS content. Clicking the central blue symbol button runs the original HTML file generated when the new project was created. Clicking the left-most blue button runs the complete Verge3D web application integrated into the index.html file. The green “Puzzles” button launches Verge3D's visual coding interface for that project. Not all text within image is intended to be read.

Figure 37. Exporting the glTF file to overwrite the existing one in the Verge3D application folder.

Figure 38. Modifying the settings of the 3D scene's main camera in Blender.

Figure 39. View through the 3D scene's main camera in Blender. Simple materials and lights have already been added.

Creating an orientation indicator

An orientation indicator was created to help the user understand the current view of the hippocampal model and to show the location of both hippocampi relative to the brain and ventricular system. Such an indicator must update continuously to match the current view as the user manipulates it.

First, the ventricular system and the small, simplified hippocampi models were parented to the brain model so that they inherit the movements of the brain and stay together as a group of objects. Parenting was done by dragging the icon of the will-be-child object (small left hippocampus, small right hippocampus, or ventricular system) onto the will-be-parent object (brain). An alternative method was to hold down Shift, select the one or more will-be-child objects, select the will-be-parent object last, and press Ctrl+P while the mouse is over the 3D View Editor. Selecting a child object and pressing Alt+P brought up options to clear the parent-child relationship.

Next, the brain object (with its three child objects) was parented to the camera, becoming a child of the camera. Regardless of how the user looked around the scene with the camera, the child objects remained static relative to the camera and were unaffected by rotation, zooming, or panning. Looking through the main camera view at its default position allowed the brain to be positioned in a suitable location in the upper left corner (Figure 39).

To keep the location of the brain static in the upper left corner while allowing it to rotate in a way that matches the current view of the main hippocampal model, an Empty Axis object was added to the scene using Shift+A. This Empty Axis was placed at the origin (0, 0, 0), where the main model was centered. A “Copy Rotation” type of Object Constraint was added to the brain object in its Object Constraints properties, with the target set to the Empty Axis object at the origin (Figure 40).
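
The orientation-indicator setup described above can be expressed as a short Blender Python sketch; all object names here are assumptions for illustration.

    import bpy

    objs = bpy.data.objects
    brain = objs["Brain"]                 # assumed object names
    camera = objs["Camera"]
    origin_empty = objs["OriginEmpty"]    # Empty Axis placed at (0, 0, 0)

    # The ventricles and small hippocampi follow the brain as a group.
    for name in ("Ventricles", "Hippocampus_L_small", "Hippocampus_R_small"):
        objs[name].parent = brain

    # The brain group stays fixed in the camera view...
    brain.parent = camera

    # ...but copies the rotation of the Empty at the origin, so it turns with
    # the main hippocampal model as the user rotates the view.
    constraint = brain.constraints.new(type='COPY_ROTATION')
    constraint.target = origin_empty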

Figure 40. Adding a Copy Rotation type of Object Constraint to the brain object in Blender. The target is set to an Empty Axis object placed at the center of the scene.

To separate the orientation indicator brain from the rest of the scene and emphasize its role to the user as a smaller-scale point of reference, a static rectangular design element was added. This shape was created in the 3D scene in Blender so that it could easily be placed behind the brain object and its position controlled. A Plane object was added to the scene via the Shift+A menu. Its corners were rounded by selecting the object, choosing “Edit mode” below the 3D View Editor, holding down Ctrl+Shift+B, clicking again on the plane object (while holding the three keys), and then scrolling with the middle mouse button (or with two fingers on a trackpad) and moving the mouse closer and farther away to adjust the amount of rounding. To make the plane face the camera, it was given the exact rotation values of the camera and then parented to the camera, keeping its position in the view static and its orientation unaffected by camera movements.

Creating materials and lights

Basic materials for the models were created in Blender via the Node Editor. The Render Engine in Blender was set to “Cycles Render”, and glTF-compliant PBR (Physically Based Rendering) materials were appended into the Blender file as node groups, where additional inputs can be connected between different nodes (to specify textures, for example). Verge3D provides a basic material node for use in its applications, called “Verge3D PBR”. By accessing the File menu > Append, this default PBR material node was added to the scene from verge3d/applications/materials/pbr_material.blend/Material. An object was selected in the Outliner Editor, and in its Material tab, the newly appended Verge3D PBR material was selected and renamed. By default, the Verge3D PBR node was connected to other nodes for the creation of textures (Figure 41). For this project, solid colors were used instead of textures; all of the additional nodes were deleted, leaving only the Verge3D PBR node (Figure 42). Within this node, the “BaseColorFactor” was changed to the desired color for the model, and the MetallicFactor was changed to 0.5. The EmissiveFactor was also changed to a darker shade of the BaseColorFactor color instead of black.
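
A hypothetical Blender Python sketch of these material tweaks is shown below; it assumes the appended group node is named “Verge3D PBR” and exposes BaseColorFactor, MetallicFactor, and EmissiveFactor as input sockets, which should be confirmed in the Node Editor.

    import bpy

    mat = bpy.data.objects["CA1"].active_material   # assumed object/material
    pbr = mat.node_tree.nodes.get("Verge3D PBR")    # assumed node name

    if pbr is not None:
        # Solid color instead of a texture; values are placeholders.
        pbr.inputs["BaseColorFactor"].default_value = (0.85, 0.45, 0.45, 1.0)
        pbr.inputs["MetallicFactor"].default_value = 0.5
        # EmissiveFactor would be set similarly, to a darker shade of the
        # base color; its socket type determines the number of components.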

Figure 41. Default appearance of the Verge3D PBR material node in the Node Editor, after it has been appended and selected in an object's Material tab. The four nodes attached to the Verge3D PBR node specify textures. Text within image not intended to be read.

Lights were added via the Shift+A menu. Most lights were Point lights (light emitted in all directions from a single point) positioned around the scene with various strengths and colors. One Sun light was also added, in which light rays are infinite, parallel, and emitted in a specific direction to simulate daylight. “Ray Shadow” was turned on and the Map Size was set to 4096 in the Sun light's Verge3D Settings.

Visual coding with Puzzles

The visual coding interface provided by Verge3D, called Puzzles, allows for manipulation of object properties and creation of 3D interactivity without direct coding in a language such as JavaScript. Colorful blocks representing different commands, behaviors, logic checks, and identifiers were fitted together directly in the browser via the App Manager (Figure 43). Variables were created and stored with Puzzles to record the state of an object or other condition (Figure 44).

Figure 42. Modifying a Verge3D PBR material node in Blender, for the CA1 model. Because textures were not used, the other nodes were deleted and the BaseColorFactor was modified to the preferred color for the model.

Figure 43. Using Verge3D's visual coding interface, Puzzles, in a web browser to create interactivity. The “Objects” tab has been expanded to reveal the different code blocks available in that category. Text within image not intended to be read.

Figure 44. The Variables tab in the Puzzles interface of Verge3D. A variable can be a number, symbol, or string of characters, specified by the user to represent a state or condition and keep track of it.

The “tween camera” Puzzle was used to move the active camera to a new position defined by an Empty Axis object in the 3D scene and keep the camera view pointed towards the scene's origin (Figure 45). The “add annotation” Puzzle was used to add translucent labels directly on each hippocampal region. Each annotation was attached to an Empty Axis object marking a custom location for the label instead of the axis center of each model (Figure 46).

Figure 45. Using the “tween camera” Puzzle to modify camera position in response to a button click. The camera is animated to the position of “PosteriorView”, an Empty Axis object, and looks towards another Empty Axis object at the center of the scene.

Figure 46. Using the “add annotation” Puzzle to add a label to a specified object in response to a button click. In this case, the object is an Empty Axis object placed in the preferred location for labeling the hippocampal region of interest. The blue “if” Puzzle specifies a condition under which the annotation will be added; the CA3 model must be visible for the CA3 label to appear.

Toggle buttons were created to turn regions on and off. They represented large stacks of frequently used, customized Puzzles, so their basic structure and styling were saved in the Puzzles Library for efficient reuse (Figure 47).

Figure 47. Accessing the Puzzles Library, where customized Puzzles or combined stacks of Puzzles can be saved.

Integrating HTML and CSS

The HTML and CSS files automatically contained within the app folder were not modified. Using a text editor, a new HTML file called “index.html” was created in the app folder, within which the Verge3D content was embedded using an <iframe> element.

Most HTML/CSS coding was performed using the HTML Puzzles available in Verge3D. These Puzzles allowed for the creation, manipulation, and CSS styling of every major aspect of various HTML elements. The “add HTML elem” Puzzle was used to add new buttons with individual identification names to the interface (Figure 48). These identifiers were then used in other HTML Puzzles, such as the “set style” Puzzle, to modify the appearance of each button and its content (CSS properties such as font size, font family, font color, button position, background color or image, border qualities, opacity, and so on). Identification names and values for CSS properties were typed or pasted into Text Puzzles, which were then dragged into the spaces of other Puzzles. The “on event of” Puzzle was used to create functionality (e.g., hiding and showing a model) when buttons are interacted with (Figure 49).

Figure 48. Creating a new HTML button element with Puzzles. The green Text Puzzle inserted into the “add HTML elem” Puzzle defines the identification name of the new button, which must be reused in the following Puzzles to modify the button's attributes and styles. The button's inner HTML attribute (in this case, the text displayed on the button) is defined with the “set attr” Puzzle. “Set style top” and “set style left” define the position of the button relative to the top and left side of the web application, respectively. The button's background color is styled by entering red, green, blue, and alpha values.

Figure 49. Using the “on event of” Puzzle to trigger a change in model visibility in response to clicking on a button element. The blue Logic Puzzle provides the button with toggle on-off functionality, so that the model is hidden only if it is currently visible, and is turned on only if it is currently hidden.

The CSS overflow property allowed text scrolling within a box or container (a <div> element) (Figure 50). This was used for the “About” popup containing a long list of relevant references and the “More info” popup containing descriptive text and images.

Figure 50. Using the “set style: overflow” Puzzle to create scrolling functionality.

Some specific aspects of functionality required manual coding, either within the text fields of Puzzles or in the project's index.html file. Blocks of text in the “About” popup and the “More info” popup were formatted in a plain text editor to insert tags such as <br>, <p>, and <b>, which code for line breaks, new paragraphs, and bold text, respectively. After formatting the text, it was copied and pasted into a Text Puzzle placed in the “Set innerHTML to…” Puzzle.

To embed images into the “More info” popup, the <img> element was used with its src attribute (the image source) and alt attribute (the image's text alternative). For example, after creating a new element with the “add HTML elem” Puzzle, the following code could be pasted into the “Set innerHTML to…” Puzzle:

    <img src="…" alt="Brief …">

The pre-rendered animation was embedded into the interactive with Puzzles by adding a new

References

Adachi, M., Kawakatsu, S., Hosoya, T., Otani, K., Honma, T., Shibata, A., & Sugai, Y. (2003). Morphology of the inner structure of the hippocampal formation in Alzheimer disease. American Journal of Neuroradiology, 24(8), 1575-1581.

Adler, D. H., Wisse, L. E. M., Ittyerah, R., Pluta, J. B., Ding, S., Xie, L., . . . Yushkevich, P. A. (2018). Characterizing the human hippocampus in aging and Alzheimer’s disease using a computational atlas derived from ex vivo MRI and histology. Proceedings of the National Academy of Sciences of the United States of America, 115(16), 4252-4257. www.nitrc.org/projects/pennhippoatlas/

Arnold, S. E. & Trojanowski, J. Q. (1996). Human fetal hippocampal development: I. Cytoarchitecture, myeloarchitecture, and neuronal morphologic features. Journal of Comparative Neurology, 367(2), 274-292.

Augustinack, J. C., Helmer, K., Huber, K. E., Kakunoori, S., Zöllei, L., & Fischl, B. (2010). Direct visualization of the perforant pathway in the human brain with ex vivo diffusion tensor imaging. Frontiers in Human Neuroscience, 4(42), 1-13.

Bajic, D., Canto Moreira, N., Wikström, J., & Raininko, R. (2012). Asymmetric development of the hippocampal region is common: a fetal MR imaging study. American Journal of Neuroradiology, 33, 513-518.

Bakker, R., Tiesinga, P., & Kötter, R. (2015). The Scalable Brain Atlas: instant web-based access to public brain atlases and related content. Neuroinformatics, 13(3), 353-366. scalablebrainatlas.incf.org/index.php

Beaujoin, J., Palomero-Gallagher, N., Boumezbeur, F., Axer, M., Bernard, J., Poupon, F., . . . Poupon, C. (2018). Post-mortem inference of the human hippocampal connectivity and microstructure using ultra-high field diffusion MRI at 11.7 T. Brain Structure and Function, 223(5), 2157-2179.

Blender (Version 2.79b) [Computer software]. Retrieved from www.blender.org

Blümcke, I., Thom, M., Aronica, E., Armstrong, D. D., Bartolomei, F., Bernasconi, A., . . . Spreafico, R. (2013). International consensus classification of hippocampal sclerosis in temporal lobe epilepsy: a task force report from the ILAE Commission on Diagnostic Methods. Epilepsia, 54(7), 1315-1329.

Boutet, C., Chupin, M., Lehéricy, S., Marrakchi-Kacem, L., Epelbaum, S., Poupon, C., . . . Colliot, O. (2014). Detection of volume loss in hippocampal layers in Alzheimer’s disease using 7 T MRI: A feasibility study. Neuroimage: Clinical, 5, 341-348.

Cappaert, N. L. M., & van Strien, N. M. (2016). Temporal-lobe.com. Accessed February 2019 from www.temporal-lobe.com

Cignoni, P., Callieri, M., Corsini, M., Dellepiane, M., Ganovelli, F., & Ranzuglia, G. (2008). MeshLab: an open-source mesh processing tool. Sixth Eurographics Italian Chapter Conference, 129-136. www.meshlab.net

Clark, K. R. (2018). Learning theories: Constructivism. Radiologic Technology, 90(2), 180-182.

Comper, S. M., Jardim, A. P., Corso, J. T., Gaça, L. B., Noffs, M. H. S., Lancellotti, C. L. P., . . . Yacubian, E. M. T. (2017). Impact of hippocampal subfield histopathology in episodic memory impairment in mesial temporal lobe epilepsy and hippocampal sclerosis. Epilepsy & Behavior, 75, 183-189.

Dekeyzer, S., De Kock, I., Nikoubashman, O., Vanden Bossche, S., Van Eetvelde, R., De Groote, J., . . . Achten, E. (2017). “Unforgettable” - a pictorial essay on anatomy and pathology of the hippocampus. Insights into Imaging, 8(2), 199-212.

Duvernoy, H., Cattin, F., & Risold, P.-Y. (2013). The Human Hippocampus: Functional Anatomy, Vascularization and Serial Sections with MRI. Springer, Heidelberg.

Fedorov, A., Beichel, R., Kalpathy-Cramer, J., Finet, J., Fillion-Robin, J.-C., Pujol, S., … Kikinis, R. (2012). 3D Slicer as an image computing platform for the quantitative imaging network. Magnetic Resonance Imaging, 30(9), 1323-1341. www.slicer.org

Frotscher, M., Jonas, P., & Sloviter, R. S. (2006). Synapses formed by normal and abnormal hippocampal mossy fibers. Cell and Tissue Research, 326(2), 361-367.

Ge, X., Shi, Y., Li, J., Zhang, Z., Lin, X., Zhan, J., . . . Liu, S. (2015). Development of the human fetal hippocampal formation during early second trimester. NeuroImage, 119, 33-43.

Gogtay, N., Nugent, T. F., III, Herman, D. H., Ordonez, A., Greenstein, D., Hayashi, K. M., . . . Thompson, P. M. (2006). Dynamic mapping of normal human hippocampal development. Hippocampus, 16(8), 664-672.

Heckers, S., & Konradi, C. (2015). GABAergic mechanisms of hippocampal hyperactivity in schizophrenia. Schizophrenia Research, 167(1-3), 4-11.

Humphrey, T. (1967). The development of the human hippocampal fissure. Journal of Anatomy, 101(4), 655-676.

Insausti, R., Marcos, M. P., Mohedano-Moriano, A., Arroyo-Jiménez, M. M., Córcoles-Parada, M., Artacho-Pérula, E., . . . Muñoz-López, M. (2017). The Nonhuman Primate Hippocampus: Neuroanatomy and Patterns of Cortical Connectivity. In D. E. Hannula & M. C. Duff (2017), The Hippocampus from Cells to Systems: Structure, Connectivity, and Functional Contributions to Memory and Flexible Cognition. Cham: Springer International Publishing.

Kier, E. L., Kim, J. H., Fulbright, R. K., & Bronen, R. A. (1997). Embryology of the human fetal hippocampus: MR imaging, anatomy, and histology. American Journal of Neuroradiology, 18(3), 525-532.

Lavenex, P., & Lavenex, P. B. (2013). Building hippocampal circuits to learn and remember: Insights into the development of human memory. Behavioural Brain Research, 254, 8-21.

Lim, C., Blume, H. W., Madsen, J. R., & Saper, C. B. (1997). Connections of the hippocampal formation in humans: I. The mossy fiber pathway. Journal of Comparative Neurology, 385(3), 325-351.

Marks, M., Alexander, A., Matsumoto, J., Matsumoto, J., Morris, J., Petersen, R., . . . Jones, D. (2017). Creating three dimensional models of Alzheimer’s disease. 3D Printing in Medicine, 3(13), 1-11.

Martin, E. A., Woodruff, D., Rawson, R. L., & Williams, M. E. (2017). Examining hippocampal mossy fiber synapses by 3D electron microscopy in wildtype and Kirrel3 knockout mice. Eneuro, 4(3), 1-13.

Milesi, G., Garbelli, R., Zucca, I., Aronica, E., Spreafico, R., & Frassoni, C. (2014). Assessment of human hippocampal developmental neuroanatomy by means of ex-vivo 7 T magnetic resonance imaging. International Journal of Developmental Neuroscience, 34, 33-41.

Milner, B., & Klein, D. (2016). Loss of recent memory after bilateral hippocampal lesions: Memory and memories—Looking back and looking forward. Journal of Neurology, Neurosurgery & Psychiatry, 87(3), 230.

Mori, S., & Zhang, J. (2006). Principles of diffusion tensor imaging and its applications to basic neuroscience research. Neuron, 51(5), 527-539.

Neural pathways. (n.d.). The University of Bristol. Accessed February 2019 from www.bristol.ac.uk/synaptic/pathways/

Norton, I., Essayed, W. I., Zhang, F., Pujol, S., Yarmarkovich, A., Golby, A. J., . . . O’Donnell, L. J. (2017). SlicerDMRI: Open source diffusion MRI software for brain cancer research. Cancer Research, 77(21), e101-e103. dmri.slicer.org

Parekh, M. B., Rutt, B. K., Purcell, R., Chen, Y., & Zeineh, M. M. (2015). Ultra-high resolution in-vivo 7.0T structural imaging of the human hippocampus reveals the endfolial pathway. NeuroImage, 112, 1-6.

Pedersen, K., Wilson, T. D., & De Ribaupierre, S. (2013). An interactive program to conceptualize the anatomy of the internal brainstem in 3D. Studies in Health Technology and Informatics, 184, 319-323.

Prinz, A., Bolz, M., & Findl, O. (2005). Advantage of three dimensional animated teaching over traditional surgical videos for teaching ophthalmic surgery: A randomised study. British Journal of Ophthalmology, 89(11), 1495-1499.

Pujol, S. (n.d.). Slicer DTI Tutorial: Diffusion MRI Analysis. Accessed December 2018 from http://dmri.slicer.org/docs/

Radonjic, V., Malobabic, S., Radonjic, V., Puškaš, L., Stijak, L., Aksic, M., & Filipovic, B. (2014). Hippocampus – why is it studied so frequently? Vojnosanitetski Pregled: Military Medical & Pharmaceutical Journal of Serbia, 71(2), 195-201.

Raffan, H., Guevar, J., Poyade, M., & Rea, P. M. (2017). Canine neuroanatomy: Development of a 3D reconstruction and interactive application for undergraduate veterinary education. PLoS ONE, 12(2), e0168911.

Reigeluth, C. M., Merrill, M. D., Wilson, B. G., & Spiller, R. T. (1980). The elaboration theory of instruction: A model for sequencing and synthesizing instruction. Instructional Science, 9(3), 195-219.

Robinson, J. L., Molina-Porcel, L., Corrada, M. M., Raible, K., Lee, E. B., Lee, V. M., . . . Trojanowski, J. Q. (2014). Perforant path synaptic loss correlates with cognitive impairment and Alzheimer’s disease in the oldest-old. Brain: A Journal of Neurology, 137, 2578-2587.

Scoville, W. B. & Milner, B. (1957). Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery & Psychiatry, 20, 11-21.

Seladi-Schulman, J., & Han, S. (2018). Brain overview. Accessed October 2018 from https://www.healthline.com/human-body-maps/brain

Shepherd, T. M., Ozarslan, E., Yachnis, A. T., King, M. A., & Blackband, S. J. (2007). Diffusion tensor microscopy indicates the cytoarchitectural basis for diffusion anisotropy in the human hippocampus. American Journal of Neuroradiology, 28(5), 958-964.

Soares, J. M., Marques, P., Alves, V., & Sousa, N. (2013). A hitchhiker’s guide to diffusion tensor imaging. Frontiers in Neuroscience, 7(31), 1-14.

Soler, L., Nicolau, S., Pessaux, P., Mutter, D., & Marescaux, J. (2014). Real-time 3D image reconstruction guidance in liver resection surgery. Hepatobiliary Surgery and Nutrition, 3(2), 73-81.

Stapleton, L., & Stefaniak, J. (2019). Cognitive constructivism: Revisiting Jerome Bruner's influence on instructional design practices. TechTrends: Linking Research & Practice to Improve Learning, 63(1), 4-5.

Strange, B. A., Witter, M. P., Lein, E. S., & Moser, E. I. (2014). Functional organization of the hippocampal longitudinal axis. Nature Reviews: Neuroscience, 15(10), 655-669.

Stull, A. T., Hegarty, M., & Mayer, R. E. (2009). Getting a handle on learning anatomy with interactive three-dimensional graphics. Journal of Educational Psychology, 101(4), 803-816.

Swanson, L. (2014). Neuroanatomical Terminology: A Lexicon of Classical Origins and Historical Foundations. Oxford University Press.

Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive Load Theory. New York, NY: Springer New York.

3D Brain. (2017). Society for Neuroscience. Accessed October 2018 from http://www.brainfacts.org/3d-brain#intro=true

WebGL overview. (2019). The Khronos Group Inc. Accessed February 2019 from https://www.khronos.org/webgl/

van de Kamp, T., dos Santos Rolo, T., Vagovič, P., Baumbach, T., & Riedel, A. (2014). Three-dimensional reconstructions come to life--interactive 3D PDF animations in functional morphology. PLoS ONE, 9(7), e102355.

Vanderah, T. W., & Gould, D. J. (2016). Nolte’s the human brain: An introduction to its functional anatomy (7th ed.). Philadelphia, PA: Elsevier.

Verge3D (Version 2.10) [Computer software]. Retrieved from www.soft8soft.com/verge3d

Wilke, S. A., Raam, T., Antonios, J. K., Bushong, E. A., Koo, E. H., Ellisman, M. H., & Ghosh, A. (2014). Specific disruption of hippocampal mossy fiber synapses in a mouse model of familial Alzheimer's disease. PLoS ONE, 9(1), e84349.

Yang, P., Zhang, J., Shi, H., Zhang, J., Xu, X., Xiao, X., & Liu, Y. (2014). Developmental profile of neurogenesis in prenatal human hippocampus: An immunohistochemical study. International Journal of Developmental Neuroscience, 38, 1-9.

Yassa, M. A., & Stark, C. E. L. (2011). Pattern separation in the hippocampus. Trends in Neurosciences, 34(10), 515-525.

Yasutaka, S., Shinohara, H., & Kominami, R. (2013). Gross anatomical tractography (GAT) proposed a change from the ‘two laminae concept’ to the ‘neuronal unit concept’ on the structure of the human hippocampus. Okajimas Folia Anatomica Japonica, 89(4), 147-156.

Zeineh, M. M., Palomero-Gallagher, N., Axer, M., Gräßel, D., Goubran, M., Wree, A., . . . Zilles, K. (2017). Direct visualization and mapping of the spatial course of fiber tracts at microscopic resolution in the human hippocampus. Cerebral Cortex, 27(3), 1779-1794.

Vita

Alisa Brandt was born in Vancouver on the West Coast of British Columbia, Canada. She has enjoyed storytelling through art since childhood and is constantly inspired by nature and science. She attended a secondary school emphasizing both the fine arts and academics, and developed her visual art skills while also identifying a particular enthusiasm for biological science. In 2012, she decided to pursue her undergraduate Biology degree at McGill University in Montréal, Canada. During her time there, she continued to draw in her spare time and integrate some art into her studies, but found it difficult to fulfill her passion for both art and science. She also recognized the importance of effective visual communication to improve understanding of complex scientific information. In her third year, she discovered the existence of the distinguished field of medical illustration that combines science, medicine, and the visual arts, and decided to pursue further study. She graduated with distinction in February 2017 with a Bachelor of Science in Biology and a minor in Natural History.

In August 2017, Alisa began the Medical and Biological Illustration graduate program at the Department of Art as Applied to Medicine, Johns Hopkins University School of Medicine. She greatly enjoyed studying among many inspiring individuals, working with researchers and clinicians, and learning crucial techniques in medical illustration and visual communication. While studying there, Alisa received the Frank H. Netter, MD Memorial Scholarship in Medical Art for her academic performance. At the 2018 Annual Association of Medical Illustrators conference, Alisa received an Award of Excellence in the student animation category for her 3D animation, “How a Bruise Forms”. In February 2019, she was honored to be a recipient of a Research Grant from the Vesalius Trust for her thesis project proposal. Moving forward, she hopes to continue conveying complex scientific information with clear, pleasing visuals, and to push the field of biomedical communication to even greater success by developing her experience in interactive and animated media. Alisa is currently a candidate to receive a Master of Arts in Medical and Biological Illustration in May 2019.
