Paper ID #35031

Leveraging Mixed Reality for Augmented Structural Mechanics Education

Dr. Mohamad Alipour, University of Virginia

Mohamad Alipour is a postdoctoral researcher with the Department of Engineering Systems and Environment at the University of Virginia. His research broadly focuses on data-driven structure and infrastructure assessment, and his specific research interests include information extraction, structural health monitoring and inspection, and mixed reality systems for structural analysis, design, and education.

Prof. Devin K. Harris, University of Virginia

Dr. Harris is an Associate Professor of Civil Engineering within the Department of Engineering Systems at the University of Virginia (UVA). He is also the Director of the Center for Transportation Studies and a member of the Link Lab. Dr. Harris also holds an appointment as the Faculty Director of the UVA Clark Scholars Program. He joined UVA as an Assistant Professor in July 2012 and had a prior appointment at Michigan Technological University as the Donald F. and Rose Ann Tomasini Assistant Professor in structural engineering. His research interests focus on large-scale civil infrastructure systems with an emphasis on smart cities. Dr. Harris often uses both numerical and experimental techniques for evaluating the performance of civil infrastructure systems, both in the laboratory and in the field. His work has included studies on image-based measurement techniques, crowd-sourcing, data analytics, condition assessment and structural health monitoring, and the application of innovative materials in civil infrastructure.

Dr. Mehrdad Shafiei Dizaji, University of Massachusetts Lowell

I am a postdoctoral researcher at the University of Massachusetts Lowell in the Structural Dynamics & Acoustic Systems Laboratory, working with Dr. Zhu Mao. My recent research focuses on data-driven structural health monitoring, deep learning, signal processing, time series analysis, and phase-based video magnification. I received my PhD in Civil Engineering from the University of Virginia in 2020 under the supervision of Dr. Devin Harris.

Mr. Zachary Bilmen, University of Virginia

Zachary is a Bachelor of Science student at the University of Virginia. He worked under Professor Devin Harris and Dr. Mohamad Alipour developing software for mixed reality applications in civil engineering education. Zac is interested in the application and development of cutting-edge technologies, especially in the context of cross-disciplinary projects.

Ms. Zijia Zeng, University of Virginia

Zijia graduated from the University of Virginia with a bachelor’s degree in computer science. While there, she joined Professor Devin K. Harris’s research group and contributed to developing applications for non-intrusive infrastructure maintenance and structural visualization. Zijia’s fields of interest include immersive technology and computer vision, and she is currently working as a software engineer.

© American Society for Engineering Education, 2021

Leveraging Mixed Reality for Augmented Structural Mechanics Education

Abstract
The field of structural mechanics deals with the behavior of bodies under loads, and a considerable portion of structural mechanics education involves the introduction of theoretical models to describe the behavior of real-world structural elements. However, the gap between abstract theoretical descriptions of the behavior in the classroom and the experience and perception of the deformation can be an obstacle to structural mechanics education and learning. This paper presents preliminary results of the use of mixed reality technology to bridge this gap by enabling real-time simulation of structural elements and effective, immersive visualization of their structural response. To that end, this paper introduces a server-client architecture, where user-defined loading is applied to a finite element model of the structure on a computational server, and the computed response is superimposed and visualized in the physical environment. The results can be interactively examined from different viewpoints and at the desired level of detail by the engineer in training. The proposed framework was used to create a series of visualization modules for a set of beams and a more complex bridge structure under flexure, torsion, tension, and compression. The system was then deployed in the form of a mobile augmented reality application accessible through smartphones for broad accessibility. Markerless tracking was used to increase the flexibility and ease of use of the application, and color contours and colorbar displays were overlaid to improve students’ understanding of the deformation and strain results. Preliminary results of the implementation showed its promise as a flexible, interactive, and efficient learning tool. Future work should focus on the evaluation of the application to assess its effectiveness in improving structural mechanics education as well as to identify its potential limitations.

Introduction
Effective engineering education relies heavily on the capability of instructors to elicit connections between external representations of physical phenomena and the underlying laws of physics that explain them. For instance, teaching structural mechanics courses, which are foundational components of the civil, mechanical, aerospace, marine, naval, and offshore engineering sub-disciplines, relies on the ability to demonstrate the physical configuration of a structure (e.g., geometry, connections, supports, loads, etc.), as well as its deformation behavior under external loads. These courses are traditionally delivered through a primary lecture component, usually complemented by structural laboratory demonstrations. While the lecture component covers the theoretical concepts and derivations using diagrams and simplified drawings, laboratory demonstrations are known to improve students’ understanding of the concepts through observation and experimentation [1]-[2]. Nevertheless, traditional modes of course delivery leave a gap between classroom depictions of idealized structural diagrams and a first-hand experience and perception of the structural members and their load-deformation behavior. This gap can result in reduced understanding of the physical phenomena and can be an obstacle to structural mechanics education and learning [3]-[6].

An example of classroom drawings of the deformation behavior of a simple cantilever beam is shown in Figure 1, where the beam is subjected to different modes of loading and deformation (bending, torsion, tension, and compression). As can be seen, while these depictions help visualize structural deformation patterns, their static and 2D nature and lack of interactivity limit their intuitiveness and their capability to fully convey the physical deformation phenomenon. For instance, it is tedious to use such 2D visualizations to demonstrate concepts such as plane sections remaining plane in simplified bending, the effects of flexural and shear deformation in thin and thick beams, thinning and bulging of tensile and compressive members due to the Poisson effect, and warping of rectangular sections under torsion. Moreover, visualizing the 3D state of stress throughout the member in each of these cases can be even more difficult.

Figure 1. Sample classroom illustration of deformations under (a) bending, (b) torsion, (c) tension, and (d) compression

Experimental laboratory demonstrations are an effective means of providing students with a physical understanding of engineering theories, but they can be prohibitively expensive and cumbersome [7]. For example, demonstrations involving medium to large structural members in structural mechanics laboratories require sizeable and costly loading machines and reaction frames attached to strong floors, placing smaller engineering programs and their students at a learning disadvantage. Even when such facilities are available, the flexibility and repeatability of such demonstrations are limited, preventing students from creatively examining scenarios beyond the prescribed setup and from experiencing and discovering on their own time and from the comfort of their homes. These limitations were even more pronounced during the COVID-19 pandemic, given the challenges of in-person training, highlighting the importance of complementary teaching aids that are not classroom-specific. Practical limitations aside, laboratory sessions usually focus on demonstrating the physical behavior with the expectation that students establish a connection between the classroom theory and the physical phenomena they are witnessing in the laboratory. In other words, during an experiment, the sensing results can only be shown on a computer screen that is separate from the test specimen under loads. The need to register the sensing results on the screen with their corresponding locations on the specimen makes it difficult for students to make these connections and results in added cognitive load [8]. Alternatively, different forms of multimedia such as video demonstrations and computer simulations do not fully capture the experience of interacting with 3D representations of structures.

This study aims to bridge the gap between abstract theoretical models and physical representations of behavior through the use of the emerging Mixed Reality technology. Mixed Reality (MR) aims to blend virtual and physical environments, creating an immersive experience and enabling interaction with virtual objects in a manner that is highly amenable to the visualization and examination of complex phenomena [9]. Other highly relevant and widely used technologies in this regard include Augmented and Virtual Reality (AR/VR). AR superimposes virtual entities onto the physical world, while VR creates a fully virtual substitute of the real environment. Collectively, these technologies have been shown to provide significant pedagogical benefits in different areas of science and technology by providing immersive 3D experiences through the blending of physical and virtual environments [10]-[12].

Literature Review
The use of MR has been proposed and investigated for a variety of topics and tasks in civil and mechanical engineering, such as building information modeling (BIM) [13]-[14], progress monitoring [15], safety management [16], structural inspections [17], and analysis and design [18]-[20]. More relevant to the field of structural mechanics, a number of these works have involved structural analysis, finite element simulation, or visualization of structural behavior. Fiorentino et al. presented an early interactive “touch and see” application where an FEM simulation of the stresses in a real cantilever beam deformed by hand was overlaid on the structure [18]. The displacement of the beam end under manual loading was tracked using a camera, and the stresses in the experiment were calculated by the finite element method (FEM). Fiducial markers attached to the beam were used for video-based tracking of the end displacements. Huang et al. studied a real-time finite element analysis framework for enhancing structural analysis with augmented reality and demonstrated its feasibility on a stress monitoring case study [19]. They used wireless load sensors at certain points on the structure and used the concept of pre-computing the inverse of the stiffness matrix of the structure to accelerate the computation of stresses and deformations. The resulting visualization was presented on a computer screen, where the stresses were overlaid on the live video feed from a webcam. Li et al. proposed a mobile augmented reality framework for on-site finite element analysis of structures [20]. They used a server-client architecture and pre-computed the inverse of the stiffness matrix to accelerate the computations on the server. They used natural features in a few scenes of interest to replace the conventional high-contrast fiducial markers for positioning and tracking in the outdoor environment.

There have also been a number of studies using virtual and augmented reality for civil engineering education. Diao and Shih presented a review of the literature on using AR for civil and architectural engineering education [11]. Ge and Kuester presented an integrative simulation framework for conceptual structural analysis that converted 2D user sketches into truss or frame models that were solved using a simulation engine and visualized within the same display content as the input sketches [21]. Turkan et al. developed a marker-based mobile AR system to augment the contents of an engineering textbook [7]. A pilot study in a junior-level structural analysis class was also conducted but did not report statistically significant improvements in students’ test scores on understanding of load effects. The pilot study also showed that the marker-based system’s requirement to hold the device in front of the markers was tedious and hindered its ease of use. The small number of experiments was also mentioned as a limiting factor. Chacón et al. presented a case study of a set of AR and VR tools that add structural information and safety guides during laboratory experiments [22]. A series of laboratory tests on steel beams, columns, and frames was conducted; the deflection of one point on the specimen was measured via discrete point sensors and used to reconstruct the deformed shape of the specimen under loads using simple analytical deflection formulas. The majority of these existing examples involve 2D models, and the tracking and localization of the object with respect to the user is done using fiducial markers. The disadvantage of marker-based tracking is that the marker must constantly be kept in the camera’s field of view, which can be tedious, can add cognitive load for the students, and can be considered a limiting factor [7].

This study brings advances in mixed reality technology to structural mechanics education by creating immersive, markerless MR visualizations, including real-time 3D simulations, as an effective tool for illustrating structural behavior. The markerless tracking adopted in this study makes the educational tool easier for students to use by eliminating the need for fiducial markers. Furthermore, the 3D visualization capabilities created herein allow students to closely examine the structure and its deformation behavior from different vantage points. The resulting visualizations are expected to help improve the effectiveness of structural mechanics education by providing the capability for students to visualize structural response in real time and interactively examine the effect of different parameters on the response. The proposed framework was implemented in the form of a mobile mixed reality application, and a case study of using this framework to create educational modules involving a series of simple beams and a more complex bridge structure is also presented.

Proposed Approach
This work builds a mixed reality platform for observing, experimenting with, and discovering the behavior of structures under loads to improve the quality of structural mechanics education. This is achieved by combining real-time FEM simulation with mixed reality visualization of the response of structural members under loads. This framework complements the visualization power of mixed reality with physics-based simulation from an FEM model to enable engineering educators and students to create inexpensive virtual experiments involving deformable bodies in mixed reality environments. Figure 2 depicts a schematic illustration of this framework and the process involved. When a user initiates an experiment, a model of an object (e.g., a beam) is defined with a certain support condition and loading configuration. This model is then transferred to a connected back-end server that runs an FEM simulation of the model and sends back the deformation response to the front-end device for visualization. The resulting model is then visualized in various stages of its deformation as a 3D object that the user can examine and interact with.

Figure 2. Schematic of the proposed concept: the physical environment (loading, geometry, boundary conditions, and materials) defines the model, the FEM simulation server computes the response, and the FEM results are returned for MR visualization

System Architecture
To realize the proposed concept, this work puts forth a system architecture consisting of three primary components, as shown in Figure 3. In this architecture, a computational server running FEM simulations is used to perform the computation-intensive simulations. The results of the simulation are then sent to the client back-end to create the visualization object and display it to the user through the user interface (UI). When the user sets up a new experiment by defining the structural model, loading, and support conditions, the computations are performed on the server and the deformation results are sent back to the client. These results are used to create a mesh and visualize the initial and updated states of the structure. The resulting visualization is shown to the user through the user interface, and further examination of and interaction with the results are managed through this interface. The following sections lay out the details of each of these components.

Figure 3. Proposed system architecture
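As a concrete illustration of the server-client exchange in this architecture, the following sketch shows how a client might encode a loading scenario in GET request headers and parse the CSV response into nodal results, as detailed in the Simulation Server section below. This is a hypothetical minimal example in Python: the endpoint URL, header names, and CSV layout are assumptions, since the paper does not publish its exact schema (the actual server is written in JavaScript).

```python
import csv
import io
import urllib.request

# Hypothetical endpoint; contextual information about the loading scenario
# rides in the request headers, per the architecture described in the paper.
SERVER_URL = "http://localhost:8080/fem"

def request_fem_results(load_newtons, load_node, model="cantilever"):
    """Send a loading scenario via GET headers; parse the CSV reply."""
    req = urllib.request.Request(SERVER_URL)
    req.add_header("X-Model", model)                      # assumed header name
    req.add_header("X-Load-Magnitude", str(load_newtons))  # assumed header name
    req.add_header("X-Load-Node", str(load_node))          # assumed header name
    with urllib.request.urlopen(req) as resp:
        text = resp.read().decode("utf-8")
    # Assumed CSV layout: one row per node -> node_id, ux, uy, uz
    rows = list(csv.reader(io.StringIO(text)))
    return [(int(r[0]), float(r[1]), float(r[2]), float(r[3])) for r in rows]
```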

1. Simulation Server
This component deals with the creation of the computational FEM model that computes the behavior of the structure under external loads. Using a powerful external computational server allows for the real-time calculation of the response of relatively large models and enables the seamless display of the results to the user. The server/client model allows the visualization client to request data from the server upon each new loading scenario created by the user. The communication between the server and client is performed via standard GET requests, where contextual information about the data (such as the application of a force on the beam being visualized) is stored in the header of the request. The server processes the request using the information in the headers and responds primarily with the relevant output data stored in CSV format. With the geometry of the member known and assuming a suitable mesh size, a script first initializes the FEM mesh of the model by computing the 3D location of each node. These nodal locations are used to visualize the initial shape of the member before deformation. Using this mesh and the material and boundary conditions of the model defined by the user, the corresponding stiffness matrix of the system (K) is assembled. Meanwhile, the location and magnitude of the user-defined load are used to determine the nodal load vector (F) of the system. The computation of the deformation field is then equivalent to solving the equation F = KU for the nodal displacement vector (U). The Preconditioned Conjugate Gradient method, known to be one of the most efficient methods for solving systems of linear equations with symmetric positive definite matrices, was used to compute the deformations [23]. Once the computations are finished, the deformation results are sent to the visualization client to create the deformed model.
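To make the server-side solve concrete, the sketch below solves F = KU with a preconditioned conjugate gradient solver. This is an illustrative sketch only: the paper’s server is implemented in MATLAB, while this version uses Python/SciPy, an assumed Jacobi (diagonal) preconditioner, and a toy tridiagonal matrix standing in for an assembled FEM stiffness matrix.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

def solve_displacements(K, F):
    """Solve K @ U = F for nodal displacements U using preconditioned CG.

    K : sparse symmetric positive definite stiffness matrix (n x n)
    F : nodal load vector (n,)
    """
    # Jacobi (diagonal) preconditioner: approximates K^-1 cheaply via 1/diag(K).
    inv_diag = 1.0 / K.diagonal()
    M = LinearOperator(K.shape, matvec=lambda x: inv_diag * x)
    U, info = cg(K, F, M=M)
    if info != 0:
        raise RuntimeError(f"CG did not converge (info={info})")
    return U

# Toy stand-in for an assembled FEM stiffness matrix (tridiagonal SPD system).
n = 1000
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
F = np.zeros(n)
F[n // 2] = 1.0   # unit point load at the mid-node
U = solve_displacements(K, F)
```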

2. Visualization Client
This component retrieves the nodal coordinates of the mesh and the response variables computed by the server, renders them, and displays them in the user’s coordinate system. The component first parses the data returned from the server and extracts the nodal displacement values. These displacements are then added to the initial mesh to calculate the deformed locations of the nodes in the updated mesh of the structure. For realistic values of load and material stiffness, these displacements can be very small compared with the dimensions of the structure, making the deformations difficult for the user to notice. To better demonstrate the deformation patterns and improve student understanding, the x, y, and z components of the displacement are magnified by a scaling coefficient before being visualized. The user can control the level of magnification from the user interface. The response variables (e.g., nodal strains and displacements) returned from the server are also parsed and used to colorize the mesh with color contours, following the practice in standard FEM software packages [24]. To that end, the nodal strain values are quadratically interpolated to produce intermediate values, and corresponding colors are attributed to the range of values to create the color contour plot.
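A minimal sketch of this client-side post-processing, written in Python for readability (the actual client scripts are C# in Unity): displacements are scaled by the user-chosen magnification factor before being added to the undeformed mesh, and nodal strains are normalized and mapped to contour colors. The simple linear blue-to-red ramp is an assumption; the paper specifies quadratic interpolation of strains between nodes but not a particular colormap.

```python
import numpy as np

def deformed_mesh(nodes, displacements, magnification=100.0):
    """Apply magnified nodal displacements to the undeformed mesh.

    nodes, displacements : (n, 3) arrays of xyz coordinates / displacements.
    Realistic displacements are often tiny relative to member dimensions,
    so they are scaled before being visualized.
    """
    return nodes + magnification * displacements

def strain_to_colors(strains):
    """Map nodal strain values to blue->red contour colors (RGB in 0..1)."""
    smin, smax = strains.min(), strains.max()
    t = (strains - smin) / (smax - smin + 1e-12)  # normalize to [0, 1]
    colors = np.zeros((len(strains), 3))
    colors[:, 0] = t          # red channel grows with strain
    colors[:, 2] = 1.0 - t    # blue channel shrinks with strain
    return colors

# Example: a two-node toy "mesh" with a small vertical displacement.
nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
disp = np.array([[0.0, 0.0, 0.0], [0.0, -1e-4, 0.0]])
print(deformed_mesh(nodes, disp))                # visible after x100 scaling
print(strain_to_colors(np.array([0.0, 5e-4])))   # per-node contour colors
```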

Finally, the transition between the initial undeformed and the updated deformed shapes of the structure is created by interpolating sufficiently closely spaced intermediate states so as to generate a morph-blending effect. To that end, the response differentials between the initial and updated states are linearly interpolated over small time increments (Δt). The value of the time increment is set to 0.02 seconds, creating a smooth transition in which the discrete intermediate states are not perceptible to the human eye. It should be noted that if at any point during the visualization the user invokes a new loading scenario, the computations from step 1 are repeated and the object is morphed into the updated shape (and response) via an appropriate transitional animation.
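The morph-blending transition can be sketched as a linear interpolation between the undeformed and deformed nodal coordinates, emitted at Δt = 0.02 s intervals (50 frames per second). The total animation duration below is an assumed parameter for illustration; in the actual application this logic runs inside Unity’s animation loop.

```python
import numpy as np

def morph_frames(mesh_initial, mesh_deformed, duration=1.0, dt=0.02):
    """Yield interpolated meshes for a smooth transition between two states.

    With dt = 0.02 s, intermediate states refresh at 50 Hz, fast enough
    that the individual steps are not perceptible.
    """
    n_steps = int(duration / dt)
    for i in range(n_steps + 1):
        alpha = i / n_steps                       # runs 0 -> 1 over the animation
        yield (1.0 - alpha) * mesh_initial + alpha * mesh_deformed

# Example: animate a single node moving under (magnified) deformation.
start = np.array([[1.0, 0.0, 0.0]])
end = np.array([[1.0, -0.2, 0.0]])
for frame in morph_frames(start, end, duration=0.1):
    pass  # in the app, each frame would update the rendered mesh
```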

3. User Interface
This component handles obtaining user input on the experiment and displaying the results via mixed reality. The user can also interact with the model and closely examine the results both by using touch screen gestures and through an information display overlaid on the scene. Figure 4 depicts the details of the UI design, which is composed of four subcomponents.

Figure 4. User interface design

3.1. Model Setup: This step allows the user to define the model, the loading, and the deformation magnification factor to be used during the visualization, and to control the execution of the experiment via start/stop controller buttons. Furthermore, the user is able to choose where to position the model in the surrounding environment via touchscreen gestures (tapping).

3.2. Model Display and Interaction: This step displays the 3D model object and its elements (e.g., supports, arrows showing the applied loads) projected into the scene based on the visualization data received from the back-end client. The user is also able to reposition and realign the model in the environment via touchscreen gestures. These include “dragging” with one finger to translate the object in the environment, “pinching” with two fingers to resize the model, and “twisting” with two fingers to rotate the model. Tapping at a location on the surface of the model also shows the values of deformation and strains at that location in the “Information Display” pane.
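As an illustration of the geometry behind the two-finger gestures (the application itself relies on the Lean Touch library in Unity rather than hand-rolled code), the sketch below derives a “pinch” scale factor from the change in distance between two touch points and a “twist” rotation angle from the change in their orientation. The function names and the 2D screen-coordinate convention are hypothetical.

```python
import math

def pinch_scale(p1_prev, p2_prev, p1_now, p2_now):
    """Scale factor = ratio of current to previous finger separation."""
    d_prev = math.dist(p1_prev, p2_prev)
    d_now = math.dist(p1_now, p2_now)
    return d_now / d_prev if d_prev > 0 else 1.0

def twist_angle(p1_prev, p2_prev, p1_now, p2_now):
    """Rotation (radians) = change in angle of the line between fingers."""
    a_prev = math.atan2(p2_prev[1] - p1_prev[1], p2_prev[0] - p1_prev[0])
    a_now = math.atan2(p2_now[1] - p1_now[1], p2_now[0] - p1_now[0])
    return a_now - a_prev

# Example: fingers move apart and rotate slightly between two frames.
print(pinch_scale((0, 0), (100, 0), (0, 0), (150, 0)))   # 1.5x zoom
print(twist_angle((0, 0), (100, 0), (0, 0), (100, 20)))  # small CCW twist
```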

3.3. Localization and Tracking: Unlike the majority of the works in the literature, which use fiducial markers for tracking, this work created a markerless mixed reality application. First, a stage on which the virtual structure model will be placed is determined by the user. The Ground Plane tool in the Vuforia Engine was used for this purpose, which detects the presence of a horizontal surface in the camera feed to anchor the virtual object. Second, the Positional Device Tracker utility is used to track the 3D position of the model with respect to the environment after the model is initially positioned by the user. This tracking uses both the internal Inertial Measurement Unit (IMU) of the mobile device and the visual details of the environment to calculate the six-degree-of-freedom pose of the device during the experiment. This is achieved via the Visual-Inertial Simultaneous Localization and Mapping (VISLAM) technique [25], which combines Visual-Inertial Odometry (VIO) and Simultaneous Localization and Mapping (SLAM), the two prominent localization and tracking techniques available in mixed reality. This combination adds robustness when either the inertial sensors or the visual features of the environment are lost or inadequate during the tracking process. This ensures that the model of the structure remains correctly located at its fixed position at all times, including when the user is moving around the object. The details of the tracking are specific to the hardware and platform used in the mobile device (e.g., Android vs. iOS), and the employed localization engine adjusts itself based on the tracking available on each platform.

3.4. Information Display: This step provides further detailed information about the model and the structural response (e.g., deformations and strains) upon the user’s request. The user can add a colorbar to the scene to better interpret the magnitude of the response contours, shown in a way similar to standard FEM software packages. The user is also able to select the response variable (one of the three deformation components or six strain components) for which the colorbar is created. Furthermore, the user can examine the value of each load effect at each node of the model by tapping at the corresponding location on the screen, and the location and value of the response are displayed in the information pane. Finally, the user is also able to change the appearance of the model by adding or removing the wireframe and the color on the surface of the structure.

Implementation
The visualization client was implemented in the Unity game engine, which provides the required visualization and tracking capabilities [26]. The scripts created in Unity were implemented in the C# programming language, while the computational server and the communication between the server and the client were implemented in JavaScript. The FEM script running on the server for the computation of the structural response was implemented in MATLAB. The user interface of the system was created in the Unity game engine, while the localization and tracking functions were implemented via the Vuforia Engine [27]. Vuforia is a software development package for creating augmented reality applications, with libraries for image recognition, tracking, and localization [27]. The touch screen repositioning controls were implemented using the Lean Touch input library in the Unity game engine.

The resulting visualization can be experienced through mobile devices (e.g., smartphones and tablet computers), through head-mounted augmented reality displays such as the Microsoft HoloLens, and through virtual reality headsets, where the experiment occurs in a fully virtual environment. Corresponding adjustments would be needed to implement the user interactions and input in VR in place of the touchscreen gestures created in this study. However, this study focused on the mobile application version to maximize its accessibility to students without the need for specialized headsets.

Results and Discussion
The proposed framework was used to create a set of educational modules including a simple structural member (beam) and a multi-member structural system (bridge), as described in this section. Figure 5 depicts the user interface designed for the mixed reality app created in this study. Figure 5-a shows the model setup menu, which allows the user to select a beam model with cantilever or simply supported boundary conditions and a choice of flexural, torsional, tensile, or compressive loads. The user also inputs the magnitude of the load and a magnification factor to be applied to the deformations. The “detect plane to place model” button then allows the user to point the camera toward a horizontal surface (e.g., ground or desk) and select where to position the beam by tapping at the desired location. Upon these selections, the initial undeformed and updated deformed meshes are visualized in the scene, as shown in Figures 5-b and 5-c, respectively. It should be noted that the colorbar on the right and the information display pane at the top of the scene provide information and controls over the appearance of the model. Specifically, the user can examine the value of nodal variables (e.g., displacements and strains) by tapping at a node of interest and see the magnitude of the response in the information pane, as highlighted with the red arrow and frame in Figure 5-c.


Figure 5. User interface of the application: a) model setup menu, b) initial mesh, c) updated mesh

Figure 6 depicts four different loading scenarios (bending, tension, compression, and torsion), each of which can be used when teaching the corresponding topic in a structural mechanics course. The students can select their desired load effect from the drop-down menu in the information pane and examine the distribution of different strain and displacement components overlaid on the specimen. This helps students gain a better understanding of the 3D state of strain (and stress) caused by each load throughout the member.

Figure 6. Different loading scenarios: bending, tension, compression, and torsion

Figure 7 depicts the wireframe view of the beams that can be selected for viewing from the information pane. This view shows the deformation of the longitudinal and transverse sections of the beam under flexural and torsional loads. These visualizations are similar to the classroom illustrations previously shown in Figure 1, with the added benefit of being able to change the value of the load and its magnification factor and to examine the deformation patterns from the desired angle and viewpoint. Such visualizations can be helpful in gaining a better understanding of the fundamental assumptions and behaviors of these members, such as the “plane sections remain plane” assumption in Euler-Bernoulli bending theory.

Figure 7. Wireframe model showing the deformation of the longitudinal and transverse sections under (a) bending and (b) torsion

Figures 8 and 9 present two other examples of models implemented in this work. Figure 8 shows the bending of a simply supported beam, a basic example repeatedly referred to in introductory structural mechanics courses. Figure 9 depicts a more complex example of a multi-member structure (a multi-girder bridge under vehicular loads). As can be seen in this figure, the students can closely examine the load effects in the slab, railings, and beams from different view angles above and below the bridge, and can even compare the load effects in the beams with those in the individual simply supported beams previously seen in Figure 8. The increased complexity of the configuration and load-carrying behavior of this example highlights the advantages of the proposed mixed reality visualization compared with classroom drawings. The flexible and user-friendly visualization presented by this framework is highly amenable to teaching and learning structural mechanics, especially in more complex cases that can be challenging to cover using traditional means of course delivery and learning.

Figure 8. Simply-supported beam model

Figure 9. Bridge model and examination from different viewpoints

To highlight the advantages and potential of the proposed visualization strategy, Figure 10 compares the mixed reality visualization with the foam specimen concept used in some structural mechanics classrooms. As can be seen, the mixed reality visualization improves the demonstration in several ways, including the possibility of showing the strain and displacement contours overlaid on the specimen, the chance to examine the values of the load effects throughout the model, and the wireframe model showing the internal sections of the member. Furthermore, visualizing more complex structures, such as the bridge shown in Figure 9, is much easier using the proposed visualization framework. As a result, the educational tool developed in this paper can facilitate inductive learning, where learning occurs by observing the physical phenomena superimposed with the underlying analytical models used to explain the observed behavior. This is difficult to achieve in the traditional modes of course delivery in structural mechanics, which begin with the presentation of the fundamental rules governing structural deformation and then apply them to specific sample structures in the form of examples and homework assignments, thus following a deductive learning workflow. The goal of the proposed framework is for the students to be able to see the deformation behavior for themselves and “discover” such abstract concepts as the Euler-Bernoulli beam theory’s “plane sections remain plane” by observing it in real time as they bend a beam in mixed reality. To extend the learning beyond the classroom and into the comfort of the students’ homes, the mixed reality app can readily be used by the students to subject the beam to various imaginative loading scenarios and to observe and examine its behavior in a game-based learning workflow.

Figure 10. Comparing the mixed reality model versus foam specimens used in classrooms

Future Work and Considerations
This paper presented the preliminary results of the development of a mixed-reality-based visualization framework for teaching and learning introductory structural mechanics. While the developed tool has not yet been deployed in a comprehensive classroom-environment investigation, preliminary examination of the developed tool by the research team shows promise for the following potential advantages, the extent and efficacy of which should be evaluated via experimental user studies in future work:
• The resulting visualizations are entertaining and engaging, and they can improve motivation and creativity for learning.
• The experiments are interactive, which allows students to “experience” the structural phenomena and can lead to more effective learning.
• The visualizations facilitate the demonstration of abstract assumptions such as the Euler-Bernoulli beam assumption, thus helping the learning of more complex and abstract concepts.
• Unlike expensive laboratory tests, the visualizations can be used as a means for repeatable experimentation, which should lead to sustained and voluntary use by the students at home.

On the other hand, a number of considerations and potential limitations can also be enumerated for the proposed framework. While the mixed reality application developed herein was designed to be intuitive and easy to use, attention should be paid in future user studies to the possibility of added cognitive load, extra class time, or the instructor familiarity with the technology required for learning and using the mixed reality application itself [12]. Furthermore, it has been previously discussed that replacing real laboratory experiments with virtual simulations can potentially lead to reduced physical skills [28]. As a result, and until this consideration can be objectively assessed, the proposed tool is recommended as a supplementary educational aid that can be used to extend the experiments beyond the laboratory and into the students’ homes. It should also be noted that the purpose of FEM in the proposed tool is solely to calculate the structural response under loads on the back-end server; the user interacts with the resulting visualization without needing any knowledge of the FEM modeling process. As a result, students in introductory courses with minimal knowledge of computational methods should be able to use the tool. Furthermore, the visualization is presented in the form of a mobile application that can be installed on touchscreen devices such as smartphones and tablet computers, which are widely used media for the transfer of information among students and engineers. This makes the proposed teaching aid intuitive, user-friendly, and accessible to engineering students without the need for prior knowledge.

Aside from the structural mechanics application discussed in this paper, the proposed tool is expected to have broader applications in various other areas of engineering education where an analogous gap exists between theories and their real-life manifestations (e.g., mechanical, aerospace, biomedical engineering). Moreover, the immersive and interactive modeling and visualization framework can also be leveraged outside of engineering education and within applications such as collaborative conceptual design and remote and augmented inspections. On a broader scale, the integration of technology into civil engineering education can also help with increased attraction and retention of students in this engineering discipline.

Conclusions
This paper discussed a visualization framework that uses mixed reality technology in conjunction with finite element simulations to improve structural mechanics education. A system architecture with three primary components was designed. First, a computational server running finite element simulations performs real-time computations and provides deformation results for the structural model under user-defined loads. Second, a visualization client communicates with the server to retrieve the computed deformation results, create a 3D visualization of the structural model both with and without the effect of the imposed loads, and visualize the transition between the two states using animation. Finally, a user interface was created to capture user inputs and to display the 3D models in the mixed reality format. A markerless device localization and tracking system was employed to provide greater flexibility than similar marker-based examples in the literature. The preliminary results obtained in this work include the implementation of the proposed system as a mobile application with modules for beam members under simple load cases of bending, torsion, tension, and compression, as well as a more complex structural system (a bridge superstructure) under vehicular loads. Examination of these modules shows that the system can be used to create useful visualizations with much higher flexibility and interactivity than the traditional simplified diagrams used in classrooms. Future work is required to investigate the extent of the advantages and limitations of this tool in engineering education, such as its role in increasing motivation and creativity, improvements in learning outcomes, and the potential for sustained and voluntary use by the students.

References
[1] L.D. Feisel and A.J. Rosa, “The role of the laboratory in undergraduate engineering education,” Journal of Engineering Education, 94(1), 121-130, 2005.
[2] J.F. Davalos, C.J. Moran, and S.S. Kodkani, “Neoclassical active learning approach for structural analysis,” in Proceedings of the 2003 American Society for Engineering Education Annual Conference and Exposition, 2003.
[3] P.S. Streif and L.M. Naples, “Design and evaluation of problem solving courseware modules for mechanics of materials,” Journal of Engineering Education, 92(3), 239-247, 2003.
[4] D. Jensen, “From Tootsie Rolls to Composites: Assessing a Spectrum of Active Learning Activities in Engineering Mechanics,” Air Force Academy Colorado Springs Inst. for Information Technology Applications, 2009.
[5] B. Crawford and T. Jones, “Teaching mechanical engineering to the highly uninspired,” ASEE Annual Conference and Exposition, 2007.
[6] S. Ates and E. Cataloglu, “The effects of students’ cognitive styles on conceptual understandings and problem-solving skills in introductory mechanics,” Research in Science & Technological Education, 25(2), 167-178, 2007.
[7] Y. Turkan, R. Radkowski, A. Karabulut-Ilgu, A.H. Behzadan, and A. Chen, “Mobile augmented reality for teaching structural analysis,” Advanced Engineering Informatics, 34, 90-100, 2017.
[8] K. Altmeyer, S. Kapp, M. Thees, S. Malone, J. Kuhn, and R. Brünken, “The use of augmented reality to foster conceptual knowledge acquisition in STEM laboratory courses—Theoretical background and empirical results,” British Journal of Educational Technology, 51(3), 611-628, 2020.
[9] Y. Ohta and H. Tamura, Mixed Reality: Merging Real and Virtual Worlds. Springer Publishing Company, 2014.
[10] P. Wang, P. Wu, J. Wang, H.L. Chi, and X. Wang, “A critical review of the use of virtual reality in construction engineering education and training,” Environmental Research and Public Health, 15(6), 1204, 2018.
[11] P.H. Diao and N.J. Shih, “Trends and research issues of augmented reality studies in architectural and civil engineering education—A review of academic journal publications,” Applied Sciences, 9(9), 1840, 2019.
[12] M.B. Ibáñez and C. Delgado-Kloos, “Augmented reality for STEM learning: A systematic review,” Computers & Education, 123, 109-123, 2018.
[13] X. Wang, P.E. Love, M.J. Kim, C.S. Park, C.P. Sing, and L. Hou, “A conceptual framework for integrating building information modeling with augmented reality,” Automation in Construction, 34, 37-44, 2013.
[14] A. Karji, A. Woldesenbet, and S. Rokooei, “Integration of augmented reality, building information modeling, and image processing in construction management: a content analysis,” AEI 2017, 983-992, 2017.
[15] S. Zollmann, C. Hoppe, S. Kluckner, C. Poglitsch, H. Bischof, and G. Reitmayr, “Augmented reality for construction site monitoring and documentation,” Proceedings of the IEEE, 102(2), 137-154, 2014.
[16] X. Li, W. Yi, H.L. Chi, X. Wang, and A.P. Chan, “A critical review of virtual and augmented reality (VR/AR) applications in construction safety,” Automation in Construction, 86, 150-162, 2018.
[17] C. Papachristos and K. Alexis, “Augmented reality-enhanced structural inspection using aerial robots,” in 2016 IEEE International Symposium on Intelligent Control (ISIC), pp. 1-6, IEEE, 2016.
[18] M. Fiorentino, G. Monno, and A. Uva, “Interactive ‘touch and see’ FEM simulation using augmented reality,” International Journal of Engineering Education, 25(6), 1124-1128, 2009.
[19] J.M. Huang, S.K. Ong, and A.Y. Nee, “Real-time finite element structural analysis in augmented reality,” Advances in Engineering Software, 87, 43-56, 2015.
[20] W.K. Li, A.Y. Nee, and S.K. Ong, “Mobile augmented reality visualization and collaboration techniques for on-site finite element structural analysis,” International Journal of Modeling, Simulation, and Scientific Computing, 9(03), 1840001, 2018.
[21] L. Ge and F. Kuester, “Integrative simulation environment for conceptual structural analysis,” Journal of Computing in Civil Engineering, 29(4), B4014004, 2015.
[22] R. Chacón, F. Claure, and O. De Coss, “Development of VR/AR applications for experimental tests of beams, columns, and frames,” Journal of Computing in Civil Engineering, 34(5), 05020003, 2020.
[23] R. Barrett, M. Berry, T.F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. Van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods. Society for Industrial and Applied Mathematics, 1994.
[24] A. Khennane, Introduction to Finite Element Analysis Using MATLAB and Abaqus. CRC Press, 2013.
[25] C. Chen, H. Zhu, M. Li, and S. You, “A review of visual-inertial simultaneous localization and mapping from filtering-based and optimization-based perspectives,” Robotics, 7(3), 45, 2018.
[26] J.K. Haas, A History of the Unity Game Engine. Worcester Polytechnic Institute (WPI) Library, 2014.
[27] Vuforia Software Development Kit, “Getting started with Vuforia Engine for Windows 10 development,” 2018. Accessed January 15, 2019. https://library.vuforia.com/articles/Training/Getting-Started-with-Vuforia-for-Windows-10-Development.html
[28] B. Balamuralithara and P.C. Woods, “Virtual laboratories in engineering education: The simulation lab and remote lab,” Computer Applications in Engineering Education, 17(1), 108-118, 2009.