Education Sciences

Article

Distance Learning and Assistance Using Smart Glasses

Michael Spitzer 1,*, Ibrahim Nanic 1 and Martin Ebner 2

1 Virtual Vehicle Research Center, Inffeldgasse 21/A, Graz 8010, Austria; [email protected]
2 Department Educational Technology, Graz University of Technology, Münzgrabenstraße 35a, Graz 8010, Austria; [email protected]
* Correspondence: [email protected]

Received: 30 November 2017; Accepted: 25 January 2018; Published: 27 January 2018

Abstract: As technology advances, new possibilities arise to support activities of everyday life. In education and training, more and more digital learning materials are emerging, but there is still room for improvement. This research study describes the implementation of a smart glasses app and infrastructure to support distance learning with WebRTC. The instructor is connected to the learner via a video streaming session and receives the live video stream from the learner's smart glasses, showing the learner's point of view. Additionally, the instructor can draw on the video to add context-aware information. The drawings are immediately sent to the learner to support him or her in solving a task. The prototype was qualitatively evaluated by a test user who performed a fine-motor-skills task and a maintenance task under the assistance of a remote instructor.

Keywords: distance learning; smart glasses; WebRTC; maintenance

1. Introduction

In recent years, many reasonably priced smart glasses have emerged on the market. The potential of such devices has already been investigated in various domains. For example, smart glasses were used as an assistance system to guide visitors in a museum [1]. Smart glasses are also emerging in the educational domain. Google Glass was used in a context-aware learning use case, a physics experiment: the Google Glass measured the water fill level of a glass (camera) and the frequency of the sound created by hitting the glass (microphone), and displayed a graph relating the fill level to the frequency [2]. In medical education, smart glasses are used to solve communication and surgical education challenges in the operating room [3]. Smart glasses were also used in STEM education to monitor students while they performed a task in cybersecurity and forensic education. The smart glasses recorded the field of view and the spoken words of the students, who were instructed to think out loud. The collected data will be used to analyze the learning success [4].

An investigation of promising pilot projects was already published in a literature review covering smart glasses projects in education in the years 2013-2015. Many of the investigated studies focused on theoretical aspects such as suggestions and recommendations without significant practical discoveries [5]. This issue motivated us to elaborate on the practical aspects of using smart glasses for distance learning scenarios.

Smart glasses provide various input possibilities such as speech, touch and gestures. The main advantage of using smart glasses is that users do not need their hands to hold the device. Therefore, these devices can support participants in learning situations in which they need their hands, such as learning and training fine-motor skills. We defined the following research questions:

• RQ1: Are live annotations, such as arrows, rectangles or other types of annotations, in video streams helpful for performing different remote assistance and learning tasks?
• RQ2: Can the WebRTC infrastructure and software architecture be used to implement remote learning and assistance scenarios of different types?
• RQ3: Is user-participatory research suitable for deriving software requirements for the next iterations of the prototype?

The first research question (RQ1) evolved during our previous studies with smart glasses. We faced the issue that it was very difficult to describe the locations of certain parts, tools or other objects which are important for performing the remote learning scenario; hence, we decided to mark the locations of these objects in the transmitted video stream in real-time to help the learner find them. The second research question (RQ2) was defined to evaluate the capability of already developed real-time technology to support our learning scenarios. RQ3 evolved during other experiments in which the active participation of users was very effective for improving our software artifacts iteratively.

We have already investigated the usage of smart glasses in several learning situations, for example, knitting while using smart glasses [6]. Additionally, we investigated the learning process of assembling a Lego® Technic planetary gear while using smart glasses [7].

This study describes how to use video streaming technology to assist the wearer of smart glasses while learning a fine-motor-skills task. The instructor uses the web-based prototype to observe the subject during the learning situation. The instructor sees the learning and training situation from the learner's point of view because of the video stream coming from the camera of the smart glasses. The stream is also displayed in the display of the smart glasses. The remote user (instructor) can highlight objects by drawing shapes live in the video stream. The audio, video and drawn shapes are transmitted via WebRTC [8].

We tested the prototype in two different settings. The first setting was a game of skill: users had to assemble a wooden toy without any given instructions in advance. The second setting was an industrial use case in which the user had to perform a machine maintenance task. We evaluated this setting in our research lab on our 3D printer, which is a small-scale example of a production machine. The user had to clean the printing head of the 3D printer as a maintenance task without any prior knowledge of the cleaning procedure. We performed a qualitative evaluation of both scenarios to gather qualified information on how to improve the learning scenarios and the software prototype.

To keep this article clear and easy to read, the following personas are defined:

• Instructor: User of the web frontend who guides the smart glasses user remotely;
• Subject: Wearer of the smart glasses who uses the smart glasses app while performing a task.

2. Methodology

This paper describes the work necessary to develop a first prototype for evaluating distance learning use cases with smart glasses. The prototyping approach is appropriate for scenarios in which the actual user requirements are not clear or self-evident; the real user requirements often emerge during experimentation with the prototype. Additionally, if a new approach, feature or technology is introduced which has not been used before, experiments and learning are necessary before designing a full-scale information system [9].

The prototyping approach consists of four steps. First, we identified the basic requirements; in our case it was necessary to communicate in real-time to support the remote learner during the learning scenario. The second step was to develop a working prototype. Then users test the prototype. The last step follows an agile approach: while users were testing the prototype, we identified necessary changes to the system, provided a new version and tested it again in several iterations [10].

The prototyping approach is not only very effective in involving the end user in the development but is also very effective from the developer's point of view. Since the software development process is very specific to the use case, you only know how to build a system once you have already implemented major parts of it, but then it is often too late to fundamentally change the system architecture based on the experience gained during the development phase [11]. The prototyping approach prevents major software architecture and implementation flaws at an early stage.

To test the prototype, two use cases were designed. They will also be used for future evaluations of distance learning scenarios. The first use case reflects a generic fine-motor-skills task, which can be validated with little effort. The second use case bridges the gap between a generic learning situation and a real-world industry use case. The learning transfer from a generic scenario to a real-world situation is a high-priority goal for a learning scenario.

The main target of this study is to provide a suitable environment for performing future studies with smart glasses in the educational domain. In future work we will create guidance material for instructors and perform comparative studies to evaluate the performance of the distance learning approach with smart glasses compared to established learning and teaching approaches. The main purpose of this distance learning setting and the usage of smart glasses is to address issues with distance learning and assistance scenarios, such as the instructor not having face-to-face communication with the subject. The expected impact of such a setting is that distance learning will be more effective than it is now without real-time annotations and video/audio streaming.

3. Technological Concept and Methods

This section elaborates on the details of the materials used. We describe the server, the smart glasses app and the necessary infrastructure. Afterwards, we describe the method used to evaluate the two already introduced use cases: the fine-motor-skills task and the maintenance task.

We implemented an Android-based [12] prototype for the Vuzix M100 smart glasses [13]. Additionally, a web-based prototype was implemented, acting as server. Figure 1 shows the system architecture, infrastructure and data flow. The instructor uses the web client, which shows the live video stream of the smart glasses camera. Additionally, the audio of the instructor as well as the audio of the subject is transmitted. The server is responsible for transmitting the WebRTC data stream to the web client and to the smart glasses Android app. WebRTC is a real-time communications framework for browsers, mobile platforms and IoT devices [8]. With this framework, audio, video and custom data (in our case: drawn shapes) are transmitted in real-time. The framework is used in both the server and the client Android application. All communication is transferred and managed by a NodeJS [14] server. The generic learning scenario workflow consists of several steps:

(1) The instructor logs on to the web frontend on a PC, tablet or any other device;
(2) The instructor starts the video broadcast by clicking on "Broadcast". A separate room is then created, which waits until the wearer of the smart glasses joins the session (see the code sketch below);
(3) The subject starts the smart glasses client app and connects to the already created room;
(4) The video stream starts immediately after joining the room.
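The paper lists the workflow but not the frontend code. As a minimal sketch, assuming the RTCMultiConnection browser library [15] that the prototype builds on is loaded in the page, this open/join flow could look as follows; the room id and element id are made up for illustration:

```javascript
// Minimal sketch of the open/join flow (assumes RTCMultiConnection.js is
// loaded in the page; room and element ids are illustrative only).
var connection = new RTCMultiConnection();
connection.socketURL = 'http://localhost:9001/'; // the NodeJS server
connection.session = { audio: true, video: true, data: true };

// Instructor side: clicking "Broadcast" opens a room and waits.
document.getElementById('broadcast').onclick = function () {
  connection.open('remote-assistance-room');
};

// Smart glasses side: the app would join the already created room instead:
// connection.join('remote-assistance-room');

// The stream starts immediately once the peer joins the room.
connection.onstream = function (event) {
  document.body.appendChild(event.mediaElement);
};
```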

Figure 2 shows the screen flow in detail. The following sections describe the Android app and the NodeJS server implementation in detail. Figure 3 shows a screenshot of the web application and the smart glasses app while performing the 3D printer maintenance task.


Figure 1. System architecture and infrastructure.

Figure 2. Screen flow.

Figure 3. Web frontend and smart glasses app.

3.1. Server and Web App Implementation

We used NodeJS to implement the server. The WebRTC multiconnection implementation is based on the implementation of Muaz Khan; the repository is available on GitHub [15]. The server is started on localhost, port 9001. The entry page of the web application offers two features:

(1) Broadcast from local camera: the internal webcam of the computer is used;
(2) Play video directly from server: a previously recorded video is played.

Figure 4 shows the start screen.
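RTCMultiConnection ships with its own NodeJS signaling server, which the prototype uses but the paper does not list. As a rough, simplified stand-in (not the authors' actual server), a signaling relay on port 9001 built with Express and Socket.IO could look like this:

```javascript
// Rough stand-in for a signaling relay on port 9001. This is NOT the
// authors' actual RTCMultiConnection server; it only illustrates room
// creation and message forwarding between the two peers.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
app.use(express.static('public')); // serve the web frontend

const server = http.createServer(app);
const io = new Server(server);

io.on('connection', (socket) => {
  // The instructor opens a room; the smart glasses app joins it later.
  socket.on('join-room', (roomId) => {
    socket.join(roomId);
    socket.to(roomId).emit('peer-joined', socket.id);
  });

  // Forward WebRTC signaling payloads (SDP offers/answers, ICE candidates)
  // to everyone else in the room.
  socket.on('signal', ({ roomId, payload }) => {
    socket.to(roomId).emit('signal', { from: socket.id, payload });
  });
});

server.listen(9001, () => console.log('Signaling server on http://localhost:9001'));
```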

Figure 4. Start screen of the web application.

When the instructor selects the first option, a video stream of the local camera is shown. The video stream is switched to the camera stream of the smart glasses immediately after the subject connects to the system. With no subject present, the instructor can record a video, for example a training on how to assemble or disassemble an object or any other challenging fine-motor-skills task. The recording is stored on the server; additionally, the instructor can download the video to the hard disk. When a subject connects to the system, the instructor can enable drawing as an additional feature. With this feature enabled, the instructor can augment the video with several objects such as colored strokes, arrows, rectangles, text, image overlays and Bézier curves [16]. The drawn objects and the video stream are transferred in real-time to the subject. The second option plays videos already created and uploaded to the server. Figure 5 shows a video streamed by the smart glasses app. In the displayed web app, the drawing feature is enabled.
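How the drawn shapes travel alongside the audio and video is not spelled out in the paper beyond "custom data over WebRTC". The following sketch uses RTCMultiConnection's data-channel send/onmessage with a hypothetical shape format (normalized coordinates so both ends can scale them); it assumes the `connection` object from the earlier sketch:

```javascript
// Sketch: shipping a drawn shape from the instructor's canvas to the
// smart glasses app over the WebRTC data channel. The shape format is
// hypothetical, not the authors' actual wire format.
function sendShape(shape) {
  // Example: { type: 'rect', x: 0.42, y: 0.31, w: 0.2, h: 0.1, color: 'red' }
  // Coordinates are fractions of the video frame, not absolute pixels.
  connection.send(JSON.stringify(shape));
}

// Smart glasses side: draw incoming shapes on an overlay canvas.
connection.onmessage = function (event) {
  var shape = JSON.parse(event.data);
  var ctx = document.getElementById('overlay').getContext('2d');
  if (shape.type === 'rect') {
    ctx.strokeStyle = shape.color;
    ctx.strokeRect(shape.x * ctx.canvas.width, shape.y * ctx.canvas.height,
                   shape.w * ctx.canvas.width, shape.h * ctx.canvas.height);
  }
};
```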

Figure 5. Broadcast option.

3.2. Smart Glasses App Implementation

The smart glasses we used were the Vuzix M100. This device was chosen because it was the only one which was available for us to buy as we started the project. Table 1 shows the specifications of the Vuzix M100.

Table 1. Vuzix M100 specifications [17].

Feature | Vuzix M100 Specification
Display | Widescreen 16:9 WQVGA
CPU | [email protected]
Memory | 1 GB RAM; 4 GB flash
Sensors | 3 DOF (degree of freedom) gesture engine; ambient light; GPS; proximity
Connectivity | Micro USB; Wi-Fi 802.11 b/g/n; Bluetooth
Controls | 4 control buttons; remote control app paired via Bluetooth (available for iOS and Android); customizable voice control; gestures
Operating system | Android ICS 4.04, API level 15

The conventional way to develop applications for Android-based devices is to write a Java [18] application with Android Studio [19] in combination with the Android SDK [20]. The first approach was to implement such an app. An Android WebView [21] was used to display the WebRTC client web app, served from our server. With an Android WebView, a web page can be shown within an Android app. Additionally, JavaScript execution was enabled to support the WebRTC library. We built the app against API level 15 for the smart glasses. A big issue emerged in this phase: because of the low API level, the WebView was not able to run the used WebRTC Multiconnection library. Therefore, we switched our system architecture from an ordinary Java Android application to a hybrid app. Our new app is based on Cordova [22] and uses a Crosswalk [23] WebView. Apache Cordova is an open source mobile development framework. Figure 6 shows the architecture of such a Cordova app. The smart glasses app is implemented as a web app and uses Cordova to be able to run on an Android device.
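The API-level problem can be made concrete with a simple runtime check; on the stock Android 4.0 WebView the WebRTC objects are simply absent, which is what forced the hybrid build. This illustrative check is not from the paper:

```javascript
// Illustrative runtime check for the WebRTC APIs that the stock
// API-level-15 WebView lacks.
function webrtcSupported() {
  return typeof RTCPeerConnection !== 'undefined' &&
         !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
}

if (!webrtcSupported()) {
  // This branch is taken on the stock Android 4.0 WebView; the Cordova +
  // Crosswalk build avoids it by bundling its own up-to-date web runtime.
  document.body.textContent = 'This WebView cannot run the WebRTC client.';
}
```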

Figure 6. Cordova app architecture [24].


Crosswalk was started in 2013 by Intel [25] and is a runtime for HTML5 applications [23]. These applications can then be transformed to run on several targets such as iOS, Android, Windows desktop and Linux desktop applications. Figure 7 shows the workflow of how to set up a Cordova app with Crosswalk.

Figure 7. Crosswalk and Cordova workflow [26].

The main reason why we had to switch to such a hybrid solution is that WebViews from older Android systems are not updated to be able to run state-of-the-art web code. Crosswalk is now discontinued because the code and features of Google Chrome are now shared with newer Android WebView implementations; hence, the newer WebViews are kept up-to-date and support state-of-the-art web applications. Additionally, progressive web apps can provide native app features [25]. Therefore, for the next prototype, we can use the built-in WebView of newer Android systems, because it is actively maintained by Google and shares current features of the Google Chrome development. Programmers can now rely on active support for new web features of the WebView. A requirement is that the Android version of newer smart glasses must be at least Android Lollipop (Android 5.0) [27]. Since this is not the case for the Vuzix M100 as well as for other Android-based smart glasses, our hybrid solution is still necessary for these devices.

The UI of the Android app is optimized for small displays such as smart glasses' displays. After the start of the app, a menu with three options is displayed (a code sketch of the corresponding handlers follows below):

(1) Broadcast: This starts the WebRTC connection, connects to the server and joins the session of the running web app automatically.
(2) Scan QR code: This feature enables context-aware information access. Users can scan a QR code, e.g., on a machine, to access certain videos related to that machine. This approach was already used in another smart glasses study to specify the context (the machine to maintain), because using QR codes to define the context is more efficient than browsing information and selecting the appropriate machine in the small display (UI) of the smart glasses [28].
(3) Record and upload: Smart glasses users can record their own videos from their point of view. The videos are then uploaded to the server and are accessible to the web frontend users.

Figure 8 shows the start screen of the smart glasses app.
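The paper does not show the menu code; the sketch below illustrates how the three options could be wired up in the Cordova app. The barcode scanner plugin and the startRecording() helper are assumptions for illustration, not libraries the paper names:

```javascript
// Sketch of the three menu handlers in the Cordova app. Element ids,
// the barcode plugin and startRecording() are hypothetical.
document.addEventListener('deviceready', function () {
  // (1) Broadcast: join the instructor's session (see the join sketch above).
  document.getElementById('broadcast').addEventListener('click', function () {
    connection.join('remote-assistance-room');
  });

  // (2) Scan QR code: context-aware access, assuming a common plugin such
  // as phonegap-plugin-barcodescanner (an assumption, not confirmed here).
  document.getElementById('scan-qr').addEventListener('click', function () {
    cordova.plugins.barcodeScanner.scan(
      function (result) {
        if (!result.cancelled) {
          // e.g., the QR code encodes a machine id used to fetch its videos
          window.location.href = '/videos?machine=' + encodeURIComponent(result.text);
        }
      },
      function (error) { console.error('Scan failed:', error); }
    );
  });

  // (3) Record and upload the wearer's point of view.
  document.getElementById('record').addEventListener('click', function () {
    startRecording(); // hypothetical helper wrapping MediaRecorder + upload
  });
});
```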

Figure 8. Start screen of the smart glasses app.

When the user selects the first option, a video stream of his own point of view (the video stream of the camera of the smart glasses) is shown. Additionally, drawn objects of the web application user (instructor) are displayed in real-time. This is very helpful when the subject investigates a complicated UI or machine. The instructor can then guide the subject with drawn arrows or rectangles in the correct direction to perform the maintenance or fine-motor-skills task correctly. Figure 9 shows the 3D printer maintenance UI. Since the subject has never used the UI before, he/she needs guidance on how to use it. The highlighted area (red rectangle) shows the buttons which must be used for the maintenance procedure. Along with audio instructions, the subject in our evaluation managed to perform the maintenance task efficiently. The maintenance task is explained in detail in Section 4.2.

Figure 9. Highlighted parts of the maintenance UI displayed on the smart glasses.

3.3. Evaluation Methods

To evaluate the two use cases, end-user participatory research was performed with one test person of the target group to get qualitative feedback on the used technology as well as feedback on the design of the learning scenario.

In a user participatory research study, the subject is not only a participant, but is also highly involved in the research process [29].

In our case, the subject gave qualified feedback on the used device and on the concept of the learning scenarios, which is now considered for the next iteration of the prototype.

A quantitative research study with a larger group of users of the target group will be performed after the qualitative feedback is considered. This follows the approach we used in our previous studies [6,7].

4. Evaluation Scenarios

We decided to use two different scenarios. The first scenario is a general approach to investigate a more generic use case. The introduced software and infrastructure was used to help subjects to perform general fine-motor-skills tasks. The second use case is a specific maintenance use case which is often seen in the industrial domain. In the project FACTS4WORKERS we investigate a use case of how to clean a lens of a laser cutter [30]. The whole procedure can be split into small fine-motor-skills tasks; hence, our use cases one and two fit well to real industry use cases.

4.1. Fine-Motor-Skills Scenario

Since a lot of tasks can be separated into smaller fine-motor-skills tasks, the first approach was to evaluate our solution with such a task. We chose a wooden toy which reflects an assembly task. The subject gets the wooden parts and must assemble the toy without a manual or pictures of the finished assembly. The subject must assemble the toy using only voice assistance and the augmentation of the video stream with drawings. The toy consists of 30 small wooden sticks, 12 wooden spheres with five holes each and one sphere without any holes. Figure 10 shows the parts of the toy.

Figure 10. Parts of the wooden toy.

The reason why we chose this toy was that it is not likely that the subject is able to assemble the toy without any help; hence, the subject must rely on an assistance system and the help of a remote instructor to assemble the toy. Figure 11 shows the assembled toy.

Figure 11. Assembled toy.

4.2. Maintenance Scenario

The second evaluation use case is a maintenance task. We used our 3D printer as an industry machine. A maintenance task on the 3D printer can be compared to a real-world maintenance use case in the production domain. Since it is very difficult to test such scenarios on a real industry machine, the first approach was to test the scenario with our 3D printer in a safe environment. The task was to clean the printhead of the 3D printer. The following steps must be performed (a scripted version of these steps is sketched below):

(1) Move the printhead to the front and to the center;
(2) Heat the printhead to 200 °C;
(3) Use a thin needle to clean the printhead;
(4) Deactivate printhead heating;
(5) Move the printhead back to the initial position.

The heating and cooling of the printhead can be achieved by using the web-based UI of the 3D printer. Additionally, users can move the printhead by pressing position adjust buttons in the 3D printer web UI. Our 3D printer uses the web UI provided by OctoPrint [31]. Two screens are necessary to perform the maintenance operation. Figure 12 shows the printhead movement UI. The arrows indicate the movement directions. The area of interest is marked with a green frame and a speech balloon.
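In the study the subject clicked through the OctoPrint web UI; for completeness, the same steps can also be driven through OctoPrint's REST API. The following sketch uses placeholder host, API key and jog distances (the actual values depend on the printer):

```javascript
// Sketch of the five maintenance steps via OctoPrint's REST API instead
// of the web UI. Host, API key and jog distances are placeholders.
const OCTOPRINT = 'http://octopi.local/api'; // hypothetical host
const HEADERS = { 'Content-Type': 'application/json', 'X-Api-Key': 'YOUR_API_KEY' };

async function post(path, body) {
  const res = await fetch(OCTOPRINT + path, {
    method: 'POST', headers: HEADERS, body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error('OctoPrint returned ' + res.status);
}

async function cleaningProcedure() {
  // (1) Move the printhead to the front/center (relative jog; printer-specific).
  await post('/printer/printhead', { command: 'jog', x: 0, y: 100 });
  // (2) Heat the printhead (tool0) to 200 °C.
  await post('/printer/tool', { command: 'target', targets: { tool0: 200 } });
  // (3) Manual step: clean the nozzle with a thin needle.
  // (4) Deactivate heating (the study sets the target back to 23 °C).
  await post('/printer/tool', { command: 'target', targets: { tool0: 23 } });
  // (5) Home the printhead again.
  await post('/printer/printhead', { command: 'home', axes: ['x', 'y'] });
}
```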

Figure 12. Printhead movement UI.

This UI is used to move the printhead to the front to access it with the needle. The next step is to heat it to a very high temperature to fluidize the printing material in the printhead and make the cleaning procedure possible. Figure 13 shows the printhead heating UI. Again, the area of interest is marked with a green frame and a speech balloon.


Figure 13. Heat printhead UI.

After the printhead is heated to the appropriate temperature, the cleaning process starts. The subject inserts the needle into the material hole to clean the printhead. After the cleaning process, the subject sets the temperature of the printhead (tool) back to normal temperature (23 °C) and moves the printhead back to the home position. All of these steps are comparable to a real industry use case: at first, the machine must be set into a maintenance mode and then some tools must be used to perform a certain maintenance task.

5. Results

This section elaborates on the details of the qualitative evaluation of the previously described evaluation scenarios. Both scenarios were performed by the same subject without knowing the scenarios a priori. At first, the subject performed the general scenario (fine-motor-skills training with a toy). Figure 14 shows screenshots of the assembly of the toy. The instructor first used the drawing feature of the app to explain the different parts. Then the instructor tried to draw arrows to explain the shape and the assembly procedure of the toy.
It turned out that explaining the assembly process only by drawing arrows or other shapes into the screen was not enough because of the challenging assembly procedure. Therefore, the instructor switched to verbal instructions which worked considerably better. During the learning scenario, the subject ignored the video stream and the drawings and just listened to the voice explanations of the instructor. This behavior justifies our approach, not only to stream video and drawings, but also voice.

Figure 14. Screenshots made by the instructor during the assembly of the toy.


The second scenario was performed by the same subject. We tested our solution while performing a maintenance task. Figure 15 shows our 3D printer research lab. The computer is used to remotely control the 3D printer.

Figure 15. 3D printer research lab @ Virtual Vehicle Research Centre.

The subject was not familiar with the web UI of the 3D printer; he had never used the UI before. In this maintenance phase, the drawing feature was very helpful to guide the subject through the process. Figure 16 shows the red rectangle drawn by the instructor to show the subject which buttons he had to push to move the printhead. Marking the area of interest of the UI with a drawn shape was very effective, because with audio alone it is very difficult to explain which buttons are the correct ones for this task. Imagine an industry machine where pressing the wrong button could cost a lot of money or could be very dangerous. In such a case, it is better to mark the buttons clearly by drawing a shape (arrow, rectangle, etc.). One issue with the drawn shapes came up while testing this use case: the shapes are positioned in the video stream relative to the screen dimensions. This means that if the subject turns his head, the video stream will show a new field of view, but the rectangle will stay at the same position relative to the screen; the rectangle then marks another part. This will be solved in future iterations of the prototype: the shapes will be positioned in space so that, even if the smart glasses user turns his head, the shapes stick to the correct spatial position. In this maintenance use case this was not a big issue, because when the subject is focusing on a certain process of the maintenance task (interaction with the web UI, interaction with the printhead), his/her head position is quite fixed while concentrating on the current step. In other learning scenarios or industry use cases this could be a bigger issue; then the shapes should be placed with spatial awareness.
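True spatial anchoring requires real tracking; as a crude illustration of the underlying problem only, the overlay could at least be shifted against head yaw using device orientation events. This is a speculative sketch with an assumed camera field of view, not the authors' planned solution:

```javascript
// Speculative sketch: shifting the overlay against head yaw via
// deviceorientation events. H_FOV_DEG is an assumed value and the
// wrap-around at 0/360° is ignored for brevity.
var H_FOV_DEG = 50;  // assumed horizontal field of view of the camera
var refAlpha = null; // head yaw when the shape was placed

window.addEventListener('deviceorientation', function (e) {
  if (refAlpha === null) refAlpha = e.alpha;
  var deltaDeg = e.alpha - refAlpha; // degrees turned since the shape was drawn
  var canvas = document.getElementById('overlay');
  // Convert the yaw change into a horizontal pixel shift of the overlay.
  var shiftPx = (deltaDeg / H_FOV_DEG) * canvas.width;
  canvas.style.transform = 'translateX(' + shiftPx + 'px)';
});
```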

Figure 16. Red rectangle visible in the smart glasses UI.

After the subject finished all tasks in the web UI, he started to clean the printhead. This was very challenging for him because he did not even know how the printhead and the drawing pin looked. Figure 17 shows the drawing pin of the printhead marked by the instructor. In this phase the drawing feature of our system was very effective: explaining the exact position of the pin by voice was very difficult, but after the instructor drew the red rectangle in the point-of-view of the subject, he found the drawing pin in a very short time.

Figure 17. Marked drawing pin of the printhead.

The next step was to use the very small needle to clean the drawing pin. During the cleaning procedure the subject had to focus on the process and did not use the information on the smart glasses display. Additionally, it was quite challenging for the subject to alternate his eye focus between the drawing pin and the smart glasses display. This problem could be solved by using see-through devices such as the Microsoft HoloLens [32].

Eventually, the subject focused his eyes entirely on the drawing pin to solve the task. In such a high-concentration phase, the smart glasses display should be turned off so as not to distract the subject. Figure 18 shows the subject while cleaning the printhead.

Figure 18. The subject performed the cleaning procedure.

6. Discussion and Conclusions

A WebRTC server and a smart glasses app were developed to implement remote learning and assistance. WebRTC was used to implement the streaming functionality, which fit both use cases (RQ2). We tested our solution in user-participatory research by performing a qualitative evaluation of two different learning scenarios. The first scenario (toy) was a more generic scenario to evaluate fine-motor-skills tasks; the second scenario was more industry-related, a maintenance use case. We derived requirements for our solution (Table 2) for the next iteration of our prototype (RQ3). The live annotations (drawings) were very helpful, especially during the maintenance task. With drawings, the focus of interest of the subject can be set very effectively (RQ1). After the next prototype is implemented, we will perform a quantitative research study with more users of the target group to validate our system. In future studies, detailed transcriptions and recordings of the whole learning scenario will be analyzed.

Table 2. Derived requirements for the next iteration of the prototype.

Derived Requirements | Description
Spatial awareness for the drawing shapes | The shapes should be drawn with spatial awareness. When the subject turns his head, the drawings should stay on the same object in space.
Voice command to switch off the smart glasses screen | This feature is necessary in situations in which the subject needs to totally focus on the task and not on the display of the smart glasses.
See-through devices | Since smart glasses are now easily available, we will implement the prototype on other devices with see-through displays such as the Microsoft HoloLens.

The whole software artifact and setting can now be used for several educational scenarios. The next step is to create coaching material to support the instructor in getting familiar with this new way of assisting subjects remotely. Additionally, the effectiveness of this system has to be investigated, and statistical data, such as how often the instructor had to repeat the voice commands, have to be gathered. These issues will be addressed in further studies.

Acknowledgments: This study has received funding from FACTS4WORKERS—Worker-centric Workplaces for Smart Factories—the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 636778. The authors acknowledge the financial support of the COMET K2—Competence Centers for Excellent Technologies Programme of the Austrian Federal Ministry for Transport, Innovation and Technology (BMVIT), the Austrian Federal Ministry of Science, Research and Economy (BMWFW), the Austrian Research Promotion Agency (FFG), the Province of Styria and the Styrian Business Promotion Agency (SFG). The project FACTS4WORKERS covers the costs to publish in open access.

Author Contributions: Michael Spitzer did the use-cases development, research design, software architecture, software development support and wrote the paper. Ibrahim Nanic was responsible for the software implementation. Martin Ebner reviewed and supervised the research study.

Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.

References

1. Tomiuc, A. Navigating culture. Enhancing visitor museum experience through mobile technologies. From smartphone to Google Glass. J. Media Res. 2014, 7, 33–47.
2. Kuhn, J.; Lukowicz, P.; Hirth, M.; Poxrucker, A.; Weppner, J.; Younas, J. gPhysics—Using Smart Glasses for Head-Centered, Context-Aware Learning in Physics Experiments. IEEE Trans. Learn. Technol. 2016, 9, 304–317. [CrossRef]
3. Moshtaghi, O.; Kelley, K.S.; Armstrong, W.B.; Ghavami, Y.; Gu, J.; Djalilian, H.R. Using Google Glass to solve communication and surgical education challenges in the operating room. Laryngoscope 2015, 125, 2295–2297. [CrossRef] [PubMed]
4. Kommera, N.; Kaleem, F.; Harooni, S.M.S. Smart augmented reality glasses in cybersecurity and forensic education. In Proceedings of the 2016 IEEE Conference on Intelligence and Security Informatics (ISI), Tucson, AZ, USA, 28–30 September 2016; pp. 279–281. [CrossRef]
5. Sapargaliyev, D. Learning with wearable technologies: A case of Google Glass. In Proceedings of the 14th World Conference on Mobile and Contextual Learning, Venice, Italy, 17–24 October 2015; pp. 343–350. [CrossRef]
6. Spitzer, M.; Ebner, M. Use Cases and Architecture of an Information System to Integrate Smart Glasses in Educational Environments. In Proceedings of the EdMedia—World Conference on Educational Media and Technology, Vancouver, BC, Canada, 28–30 June 2016; pp. 57–64.
7. Spitzer, M.; Ebner, M. Project Based Learning: From the Idea to a Finished LEGO® Technic Artifact, Assembled by Using Smart Glasses. In Proceedings of EdMedia; Johnston, J., Ed.; Association for the Advancement of Computing in Education (AACE): Washington, DC, USA, 2017; pp. 269–282.
8. WebRTC Home | WebRTC. Available online: https://webrtc.org (accessed on 26 November 2017).
9. Alavi, M. An assessment of the prototyping approach to information systems development. Commun. ACM 1984, 27, 556–563. [CrossRef]
10. Larson, O. Information systems prototyping. In Proceedings of the Interez HP 3000 Conference, Madrid, Spain, 10–14 March 1986; pp. 351–364.
11. Floyd, C. A systematic look at prototyping. In Approaches to Prototyping; Budde, R., Kuhlenkamp, K., Mathiassen, L., Züllighoven, H., Eds.; Springer: Berlin/Heidelberg, Germany, 1984; pp. 1–18.
12. Android. Available online: https://www.android.com (accessed on 26 November 2017).
13. Vuzix | View the Future. Available online: https://www.vuzix.com/Products/m100-smart-glasses (accessed on 26 November 2017).
14. Node.js. Available online: https://nodejs.org/en (accessed on 26 November 2017).

15. WebRTC JavaScript Library for Peer-to-Peer Applications. Available online: https://github.com/muaz-khan/RTCMulticonnection (accessed on 26 November 2017).
16. De Casteljau, P. Courbes a Poles; National Institute of Industrial Property (INPI): Lisbon, Portugal, 1959.
17. Vuzix M100 Specifications. Available online: https://www.vuzix.com/Products/m100-smart-glasses#specs (accessed on 26 November 2017).
18. Java. Available online: http://www.oracle.com/technetwork/java/index.html (accessed on 26 November 2017).
19. Android Studio. Available online: https://developer.android.com/studio/index.html (accessed on 26 November 2017).
20. Introduction to Android | Android Developers. Available online: https://developer.android.com/guide/index.html (accessed on 26 November 2017).
21. WebView | Android Developers. Available online: https://developer.android.com/reference/android/webkit/WebView.html (accessed on 26 November 2017).
22. Apache Cordova. Available online: https://cordova.apache.org (accessed on 28 November 2017).
23. Crosswalk. Available online: http://crosswalk-project.org (accessed on 26 November 2017).
24. Architectural Overview of Cordova Platform—Apache Cordova. Available online: https://cordova.apache.org/docs/en/latest/guide/overview/index.html (accessed on 26 November 2017).
25. Crosswalk 23 to be the Last Crosswalk Release. Available online: https://crosswalk-project.org/blog/crosswalk-final-release.html (accessed on 26 November 2017).
26. Crosswalk and the Intel XDK. Available online: https://www.slideshare.net/IntelSoftware/crosswalk-and-the-intel-xdk (accessed on 26 November 2017).
27. WebView for Android. Available online: https://developer.chrome.com/multidevice/webview/overview (accessed on 26 November 2017).
28. Quint, F.; Loch, F. Using smart glasses to document maintenance processes. In Mensch & Computer—Tagungsbände/Proceedings; Oldenbourg Wissenschaftsverlag: Berlin, Germany, 2015; pp. 203–208. ISBN 978-3-11-044390-5.
29. Cornwall, A.; Jewkes, R. What is participatory research? Soc. Sci. Med. 1995, 41, 1667–1676. [CrossRef]
30. Spitzer, M.; Schafler, M.; Milfelner, M. Seamless Learning in the Production. In Proceedings of the Mensch und Computer 2017—Workshopband, Regensburg, Germany, 10–13 September 2017; Burghardt, M., Wimmer, R., Wolff, C., Womser-Hacker, C., Eds.; Gesellschaft für Informatik e.V.: Regensburg, Germany; pp. 211–217. [CrossRef]
31. OctoPrint. Available online: http://octoprint.org (accessed on 27 November 2017).
32. Microsoft HoloLens | The Leader in Mixed Reality Technology. Available online: https://www.microsoft.com/en-us/hololens (accessed on 27 November 2017).

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).