
Article

The Invisible Museum: A User-Centric Platform for Creating Virtual 3D Exhibitions with VR Support

Emmanouil Zidianakis 1,*, Nikolaos Partarakis 1, Stavroula Ntoa 1 , Antonis Dimopoulos 1, Stella Kopidaki 1, Anastasia Ntagianta 1, Emmanouil Ntafotis 1, Aldo Xhako 1, Zacharias Pervolarakis 1, Eirini Kontaki 1, Ioanna Zidianaki 1, Andreas Michelakis 1, Michalis Foukarakis 1 and Constantine Stephanidis 1,2

1 Institute of Computer Science, Foundation for Research and Technology—Hellas (FORTH), 70013 Heraklion, Crete, Greece; [email protected] (N.P.); [email protected] (S.N.); [email protected] (A.D.); [email protected] (S.K.); [email protected] (A.N.); [email protected] (E.N.); [email protected] (A.X.); [email protected] (Z.P.); [email protected] (E.K.); [email protected] (I.Z.); [email protected] (A.M.); [email protected] (M.F.); [email protected] (C.S.)
2 Computer Science Department, School of Sciences & Engineering, University of Crete, 70013 Heraklion, Crete, Greece
* Correspondence: [email protected]

Abstract: With the ever-advancing availability of digitized museum artifacts, the question of how to make the vast collections of exhibits accessible and explorable beyond what museums traditionally offer via their websites and exposed databases has recently gained increased attention. This research work introduces the Invisible Museum: a user-centric platform that allows users to create interactive and immersive virtual 3D/VR exhibitions using a unified collaborative authoring environment. The platform itself was designed following a Human-Centered Design approach, with the active participation of museum curators and end-users. Content representation adheres to domain standards such as the Conceptual Reference Model of the International Committee for Documentation of the International Council of Museums (CIDOC-CRM) and the Europeana Data Model, and exploits state-of-the-art deep learning technologies to assist curators by generating ontology bindings for textual data. The platform enables the formulation and semantic representation of narratives that guide storytelling experiences and bind the presented artifacts with their socio-historic context. The main contributions are pertinent to the fields of (a) user-designed dynamic virtual exhibitions, (b) personalized suggestions and tours, (c) visualization in web-based 3D/VR technologies, and (d) immersive navigation and interaction. The Invisible Museum has been evaluated using a combination of different methodologies, ensuring the delivery of a high-quality user experience and leading to valuable lessons learned, which are discussed in the article.

Keywords: virtual exhibition; web-based 3D/VR exhibitions; natural language processing; photorealistic renderings; semantic content representation

Citation: Zidianakis, E.; Partarakis, N.; Ntoa, S.; Dimopoulos, A.; Kopidaki, S.; Ntagianta, A.; Ntafotis, E.; Xhako, A.; Pervolarakis, Z.; Kontaki, E.; et al. The Invisible Museum: A User-Centric Platform for Creating Virtual 3D Exhibitions with VR Support. Electronics 2021, 10, 363. https://doi.org/10.3390/electronics10030363

Received: 31 December 2020; Accepted: 28 January 2021; Published: 2 February 2021

1. Introduction

Virtual Museums (VMs) aim to provide the means to establish access, context, and outreach by using information technology [1]. In the past decade, VMs have evolved from digital duplicates of “real” museums or online museums into complex communication systems, which are strongly connected with narratives in 3D reconstructed scenarios [2]. This evolution has provided a wide variety of VM instantiations delivered through multiple platforms and technologies, aiming to visually explain history, architecture, and art or other artworks (indoor virtual archaeology, embedded virtual reconstructions, etc.). For example, on-site interactive installations aim at conserving the collective experience of a typical visit to a museum [3]. Web-delivered VMs, on the other hand, provide content through the Web and are facilitated by a wide variety of 3D viewers that have been developed to provide 3D interactive applications “embedded” in browsers.


A common characteristic in contemporary VMs is the use of 3D models reconstructing monuments, sites, landscapes, etc., which can in most cases be explored in real time, either directly or through a guide [4]. The evolving technologies of Virtual Reality (VR), Augmented Reality (AR), and the Web have reached a level of maturity that enables them to contribute significantly to the creation of VMs, usually as an extension of physical museums. However, existing VMs are implemented as ad hoc instantiations of physical museums, thus setting various obstacles toward a unified user-centric approach, such as (a) the lack of tools for writing stories that use and connect knowledge with digital representations of objects, (b) the lack of a unified platform for the presentation of virtual exhibitions on any device, and (c) the lack of mechanisms for personalized interaction with knowledge and digital information.

In this context, this research work presents the Invisible Museum: a user-centric platform that allows users to create interactive and immersive virtual 3D/VR exhibitions using a unified collaborative authoring environment. The platform constitutes a VM repository incorporating several technologies that allow novice and expert users to create interactive and unique virtual exhibitions collaboratively. It builds upon Web-based 3D/VR technologies, aiming to provide access to the available content to anyone around the world. In this way, digital collections of exhibits can be promoted to different target groups to foster knowledge, promote culture, and enhance education, history, science, or art in a user-friendly and interactive way. At the same time, by adopting existing standards for the representation of information and museum knowledge, virtual exhibitions become easily accessible on multiple platforms and presentation devices. Furthermore, the platform supports content personalization facilities that allow adaptation to diverse end-users, thus supporting alternative forms of virtual exhibitions. Lastly, the platform itself was designed following a Human-Centered Design (HCD) approach with the collaboration of end-users as well as museum curators and personnel.

The ambition of this research work is to provide a useful and meaningful medium for creating and experiencing cultural content, setting the foundation for a framework that applies research outcomes in the museum context with direct benefits for museums, Cultural and Creative Industries, as well as society. As such, the main impact of this work stems from its core scientific objective, which is to develop an integrated environment for the representation and presentation of museum collections in a VM context. In this work, several research challenges are addressed, including the semantic representation of narratives, the generation of ontology bindings from text, and web (co-)authoring of virtual exhibitions that can be experienced both in 3D and VR.

The remainder of this article is structured as follows: background and related work in the field of technologies for VMs are discussed in Section 2; the methodology followed for the development of the Invisible Museum is presented in Section 3; the user requirements elicitation process and analysis are presented in Section 4; the iterative design and evaluation results are presented in Section 5; the platform itself is presented in Section 6, along with its architecture, structure, and implementation details; and the results of its evaluation are presented in Section 8. Section 9 includes a discussion on the impact, lessons learned, and limitations of this work. Finally, the concluding Section 10 summarizes the key points of this work and highlights directions for future work.

2. Background and Related Work

2.1. Virtual Museums

One of the earliest definitions [1] of a VM identifies a VM as “a means to establish access, context, and outreach by using information technology. The Internet opens the VM to an interactive dialogue with virtual visitors and invites them to make a VM experience that is related to a real museum experience”. Recently, VMs have been envisioned as virtual places that provide access to structured museum narratives and immersive experiences [2]. This evolution has provided a wide variety of VM instantiations delivered through multiple platforms and technologies.

Mobile VMs or Micro Museums exploit VR/AR mobile applications to visually explain history, architecture, and art or other artworks (indoor virtual archaeology, embedded virtual reconstructions, etc.). On-site interactive installations focus on augmenting existing museum collections with digital technologies [3]. Web-based VMs provide digitized museum content online; examples include Arts and Culture [5], Inventing Europe [6], MUSEON [7], DynaMus [8], Scan4Reco [9], VIRTUE [10], etc. Other VMs involve interactive experiences blending video, audio, and interactive technologies, usually delivered via CD-ROM or DVD, such as, for example, “Medieval Dublin: From Vikings to Tudors” [11]. In parallel with the evolution of virtual museums, Digital Archives are becoming increasingly popular, as the amount of digital information to be indexed increases, together with the public's wish to gain access to it. In the past decade, Digital Archives have evolved and today provide thesauri of terms, digital repositories considering different metadata schemes, intelligent searching/browsing systems, etc.

Existing approaches to the provision of VM experiences can be classified by:
(a) content, referring to the actual theme and exhibits of a VM (history, archaeology, natural history, art, etc.);
(b) interaction technology, related to users' capability of modifying the environment and receiving feedback to their actions (both immersion and interaction concur to realize the belief of actually being in a virtual space [12]);
(c) duration, referring to the timing of a VM and the consequences in terms of technology, content, installation, and sustainability of the projects (periodic, permanent, temporary);
(d) communication, referring to the type of communication style used to create a VM (descriptive, narrative, or dramatization-based VMs);
(e) level of user immersion: immersive and non-immersive [12];
(f) sustainability level, i.e., the extent to which the museum software, digital content, setup, and/or metadata are reusable, portable, maintainable, and exchangeable;
(g) type of distribution, referring to the extent to which the VM can be moved from one location to another; and
(h) scope of the VM, including educational, entertainment, promotional, and research purposes.
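To make the classification above concrete, the eight dimensions can be captured in a small data model. The sketch below (in TypeScript) is purely illustrative: the type name, field names, and value sets paraphrase the dimensions listed in this section and are not taken from any published VM taxonomy or from the Invisible Museum codebase.

```typescript
// Illustrative only: fields paraphrase classification dimensions (a)-(h) above;
// names and value sets are assumptions, not an established schema.
type VirtualMuseumProfile = {
  content: string;                                               // (a) theme and exhibits, e.g., "art"
  interaction: "passive" | "interactive";                        // (b) ability to modify the environment
  duration: "permanent" | "temporary" | "periodic";              // (c) timing of the VM
  communication: "descriptive" | "narrative" | "dramatization";  // (d) communication style
  immersion: "immersive" | "non-immersive";                      // (e) level of user immersion
  sustainability: {                                              // (f) reusability of software/content/metadata
    reusableContent: boolean;
    portableSetup: boolean;
    exchangeableMetadata: boolean;
  };
  distribution: "fixed" | "movable";                             // (g) can the VM move between locations?
  scope: Array<"educational" | "entertainment" | "promotional" | "research">; // (h)
};

// Example: a permanent, narrative-driven, non-immersive web exhibition of art.
const exampleVM: VirtualMuseumProfile = {
  content: "art",
  interaction: "interactive",
  duration: "permanent",
  communication: "narrative",
  immersion: "non-immersive",
  sustainability: { reusableContent: true, portableSetup: true, exchangeableMetadata: true },
  distribution: "movable",
  scope: ["educational", "promotional"],
};
```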

2.2. Technologies for Virtual Museums

Considering the multiple definitions of a VM, a plethora of technologies are employed today to provide access to such applications and services. Without being exhaustive, the background and related work in this article focus on technologies that are relevant to the proposed solution, as presented in the following subsections.

2.2.1. Semantic Web Technologies

In the Cultural Heritage (CH) domain, the use of ontologies for describing, classifying, and contextualizing objects is now a well-established practice. The Conceptual Reference Model (CRM) of the International Committee for Documentation of the International Council of Museums (ICOM-CIDOC) [13] has emerged as a conceptual basis for reconciling different metadata schemas. CRM is an ISO standard (21127:2014 [14]) that has been integrated with the Functional Requirements for Bibliographic Records (FRBR) and the Europeana Data Model (EDM) [15], which are currently used to describe and contextualize the largest European collection of CH resources, featuring more than 30 million objects. The EDM plays the role of an upper ontology for integrating the metadata schemes of libraries, archives, and museums. Furthermore, linked data endpoints have been published under the Creative Commons license, such as collections from Europeana and from several museums and galleries.

In the same context, the generation of top-level ontologies for narratives on top of the aforementioned semantic models has been proposed to support novel forms of representation of CH resources. Computational narratology [16] studies narratives from a computational perspective. The concept of the event is a core element of narratology theory and of narratives: people conventionally refer to an event as an occurrence taking place at a certain time at a specific location. Various models have been developed for representing events on the Semantic Web, e.g., the Event Ontology [17], Linking Open Descriptions of Events (LODE) [18], and the F-Model [13]. Some more general models for semantic data organization are (a) the CIDOC-CRM [19], (b) the ABC ontology [20], and (c) the EDM [15].

Narratives have recently been proposed to enhance the information contents and functionality of Digital Libraries, with special emphasis on information discovery and exploration. For example, in the Personalising Access to cultural Heritage Spaces (PATHS) project [21], a system that acts as an interactive personalized tour guide through existing digital collections was created. In the Agora project [22], methods and techniques to support the narrative understanding of objects in VM collections were developed. The Storyspace system [23] allows describing stories that span museum objects. Recently, the Mingei Craft Ontology (CrO) [24] and the Mingei Online Platform [25] have been proposed to provide a holistic ecosystem for the representation of the socio-historical context of a museum collection and its presentation through multimodal event-based narratives.
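To give a concrete picture of how an exhibit and an associated event could be expressed against such ontologies, the sketch below serializes a handful of triples with the N3.js library in TypeScript. The exhibit URI, the namespace version, the chosen CIDOC-CRM classes and properties, and the literal time-span are assumptions made for illustration; they do not reproduce the Invisible Museum's actual ontology bindings.

```typescript
// Sketch: a museum exhibit and its production event as CIDOC-CRM-style triples.
// URIs, CRM version, and property choices are illustrative assumptions only.
import { DataFactory, Writer } from "n3";

const { namedNode, literal } = DataFactory;
const CRM = "http://www.cidoc-crm.org/cidoc-crm/";
const EX = "http://example.org/invisible-museum/";
const RDF_TYPE = namedNode("http://www.w3.org/1999/02/22-rdf-syntax-ns#type");

const writer = new Writer({ prefixes: { crm: CRM, ex: EX } });
const exhibit = namedNode(`${EX}exhibit/42`);
const production = namedNode(`${EX}event/42-production`);

// The exhibit itself, with a (hypothetical) title.
writer.addQuad(exhibit, RDF_TYPE, namedNode(`${CRM}E22_Human-Made_Object`));
writer.addQuad(exhibit, namedNode(`${CRM}P102_has_title`), literal("Minoan amphora"));

// Its socio-historic context anchored to a production event: the kind of
// event-centric structure that narrative ontologies build upon.
writer.addQuad(production, RDF_TYPE, namedNode(`${CRM}E12_Production`));
writer.addQuad(production, namedNode(`${CRM}P108_has_produced`), exhibit);
// Simplified: a plain literal instead of a proper E52 Time-Span node.
writer.addQuad(production, namedNode(`${CRM}P4_has_time-span`), literal("c. 1500 BCE"));

writer.end((error, turtle) => {
  if (error) throw error;
  console.log(turtle); // Turtle serialization of the statements above
});
```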

2.2.2. Text Analysis for Semantic Web Representation

The quality of Automatic Speech Recognition (ASR) and surface-oriented language analysis has been improved by deep learning (neural network)-based sequence-to-sequence models (such as [26,27]). Deep language analysis to produce content representations out of the analyzed textual input has recently attracted research attention. To date, semantic parsing techniques produce either “abstract meaning representations”, which are abstractions of syntactic dependency trees, or semantic role structures [28,29]. Relevant to this work is the production of abstract semantic structures that can be mapped onto ontological representations. The extraction of predefined domain-dependent concepts and relation instantiations is currently application-specific, such as, for example, relation extraction in specific domains [30]. Other approaches focus on the identification of generic, schema-level knowledge, again concerning predefined types of information, such as IS-A relations [31] or part–whole relations [32]. Recently, research has shifted to open-domain approaches that capture the entirety of textual inputs, in terms of entities and their interrelations, as structured Resource Description Framework (RDF)/Web Ontology Language (OWL) representations, e.g., Semantic Web Machine Reading with FRED [33] and the JAMR parser [34].
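The step from analyzed text to ontology bindings can be pictured as a mapping from extracted entities and relations to subject–predicate–object statements. The sketch below is a deliberately simplified, hypothetical illustration of that idea: the extractor output format, the predicate table, and the URIs are all invented for the example and do not describe the deep-learning pipeline used by the Invisible Museum.

```typescript
// Hypothetical sketch: convert entity/relation extraction output into RDF-like
// statements. All names, URIs, and the mapping table are illustrative only.
interface ExtractedRelation {
  subject: string;   // e.g., "amphora"
  relation: string;  // e.g., "made_of"
  object: string;    // e.g., "clay"
}

interface Triple {
  subject: string;
  predicate: string;
  object: string;
}

// Maps extractor relation labels to (hypothetical) ontology property IRIs.
const PREDICATE_MAP: Record<string, string> = {
  made_of: "http://example.org/onto/hasMaterial",
  created_by: "http://example.org/onto/wasCreatedBy",
  located_in: "http://example.org/onto/hasLocation",
};

function toTriples(relations: ExtractedRelation[]): Triple[] {
  return relations
    .filter((r) => r.relation in PREDICATE_MAP) // drop relations with no ontology binding
    .map((r) => ({
      subject: `http://example.org/entity/${encodeURIComponent(r.subject)}`,
      predicate: PREDICATE_MAP[r.relation],
      object: `http://example.org/entity/${encodeURIComponent(r.object)}`,
    }));
}

// E.g., for a sentence such as "The amphora was made of clay in Knossos":
console.log(toTriples([
  { subject: "amphora", relation: "made_of", object: "clay" },
  { subject: "amphora", relation: "located_in", object: "Knossos" },
]));
```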

2.2.3. Virtual Reality

VR aims at total sensory immersion, combining immersive displays, tracking, and sensing technologies. Common visualization displays include head-mounted displays and 3D polarizing stereoscopic glasses, while inertial and magnetic trackers are the most popular position and orientation devices. As far as sensing is concerned, 3D mice and gloves can be used to create a feeling of control of an actual space. An example of a high-immersion VR environment is Kivotos, a VR environment that uses the CAVE® system in a room of 3 by 3 meters, where the walls and the floor act as projection screens on which visitors take off on a journey thanks to stereoscopic 3D glasses [35]. As mentioned earlier, virtual exhibitions can be visualized in the web browser in the form of 3D galleries, but they can also be used as a standalone interface. In addition, several commercial VR software tools and libraries exist, such as Cortona [36], which can be used to generate fast and effective VM environments. However, the cost of creating and storing the content (i.e., 3D galleries) is considerably high for the medium- and small-sized museums that represent the majority of Cultural Heritage Institutions (CHIs). More recently, the evolution of headset technology has made VR equipment more widely available and less demanding in terms of equipment costs for museums. This has led to the creation of several VR-based VM experiences (e.g., [37]) facilitating mainstream hardware such as the HTC Vive [38] and the Oculus Quest 2 [39].
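For headset-based experiences delivered through the browser, the WebXR Device API is the standard entry point. The snippet below shows only the generic session-request pattern, assuming WebXR DOM typings (e.g., @types/webxr) are available; it is not a reproduction of the Invisible Museum's VR client.

```typescript
// Generic WebXR pattern (assumes WebXR DOM typings, e.g. @types/webxr):
// check whether immersive VR is available and start a session on user request.
async function enterVR(): Promise<void> {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-vr"))) {
    console.warn("Immersive VR is not available in this browser/device.");
    return;
  }
  // requestSession must be triggered by a user activation (e.g., a button click).
  const session = await navigator.xr.requestSession("immersive-vr", {
    optionalFeatures: ["local-floor"], // room-scale reference space, if supported
  });
  session.addEventListener("end", () => console.log("VR session ended"));
  // A 3D engine's render loop would then be bound to this session.
}

document.getElementById("enter-vr")?.addEventListener("click", () => {
  void enterVR();
});
```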

2.2.4. Visualization Technologies

The most popular technologies for World Wide Web (WWW) visualization include Web3D [40], which offers standards and tools such as VRML and X3D that can be used for the creation of an interactive VM. Many museum applications based on VRML have been developed for the Web [41,42]. However, VRML can be excessively labor-intensive, time-consuming, and expensive. QuickTime VR (QTVR) and panoramas that allow animation and provide dynamic and continuous 360° views might represent an alternative solution for museums, such as in Hughes et al. [43]. As with VRML, the image allows panning and high-quality zooming. Furthermore, hotspots that connect the QTVR and panoramas with other files can be added [44]. In contrast, X3D is an open-standards, XML-enabled 3D file format offering real-time communication of 3D data across all applications and network applications. Although X3D is sometimes considered an Application Programming Interface (API) or a file format for geometry interchange, its main characteristic is that it combines both geometry and runtime behavioral descriptions into a single file. Moreover, X3D is considered to be the next revision of the VRML97 ISO specification, incorporating the latest advances in commercial graphics hardware features, as well as improvements based on years of feedback from the VRML97 development community. One more 3D graphics format is COLLAborative Design Activity (COLLADA) [45], which defines an open-standard XML schema for exchanging digital assets among various graphics software applications that might otherwise store their assets in incompatible formats. One of the main advantages of COLLADA is that it includes more advanced physics functionality, such as collision detection and friction (which is not supported by Web3D). A recently published royalty-free specification for the efficient transmission and loading of 3D scenes and models is glTF (a derivative short form of Graphics Language Transmission Format or GL Transmission Format) [46]. The file format was conceived in 2012 by members of the COLLADA working group. It is intended to be a streamlined, interoperable format for the delivery of 3D assets while minimizing file size and runtime processing by apps; as such, its creators have described it as the "JPEG of 3D". The binary version of the format is called GLB, where all assets are stored in a single file.

Moreover, powerful technologies have been used in museum environments, including OpenSceneGraph (OSG) [47] and a variety of 3D game engines [48,49]. OSG is an open-source, multi-platform, high-performance 3D graphics toolkit used by museums [35,50] to generate powerful VR applications, especially in terms of immersion and interactivity, since it supports combining text, video, audio, and 3D scenes into a single 3D environment. Game engines, on the other hand, are very powerful and provide superior visualization and physics support.

This work builds on the aforementioned advancements in the state of the art and provides the details of its main technical contributions in the fields of (a) user-designed dynamic virtual exhibitions, (b) personalized suggestions and exhibition tours, (c) visualization in Web-based 3D/VR, and (d) immersive navigation and interaction in photorealistic renderings.
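As a small illustration of the Web-based delivery of glTF/GLB assets mentioned above, the snippet below loads a GLB model into a browser scene with the widely used three.js library and its GLTFLoader. The asset path is hypothetical, and the text does not state which 3D engine the Invisible Museum itself builds on, so this is a generic sketch rather than the platform's implementation.

```typescript
// Generic sketch: load a (hypothetical) GLB exhibit into a three.js scene.
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
scene.add(new THREE.AmbientLight(0xffffff, 1.0)); // simple, even lighting

const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 100
);
camera.position.set(0, 1.6, 3); // roughly eye height, a few meters back

new GLTFLoader().load(
  "/models/exhibit-42.glb",          // hypothetical asset path
  (gltf) => scene.add(gltf.scene),   // add the loaded exhibit to the scene
  undefined,                         // progress callback not needed here
  (error) => console.error("Failed to load exhibit model:", error)
);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```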

2.3. Content Provision

Apart from the technologies used to access a VM, recent research has exploited the concept of content personalization. In this context, content personalization based on user profile information has been proposed to support information provision by multi-user interactive museum exhibits [51]. Such systems detect the user currently interacting with the exhibit and adapt the provision of information based on user profile data such as age, expertise, interests, language, etc. In the same context, to provide access to the widest possible population, including people with disabilities, similar approaches have been employed affecting both the personalization of information and the interaction metaphors used for providing access to information [52]. More recently, research has extended the concept of a user profile to include further user-related information such as opinions, interaction behavior, user ratings, and feedback. Using this knowledge, recommendation systems have increased the accuracy of the content provided in social media [53]. Extensions of this approach in music recommendation systems employ techniques for the identification of personality traits, moods, and emotions to improve the quality of recommendations [54]. In this research work, an alternative approach is exploited, focusing on providing alternative forms of curated knowledge at the narrative level, considering the need for the museum to provide curated information to all user groups.
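A minimal way to picture profile-driven provision of curated narratives is a scoring function that ranks available narratives against a visitor profile. The types, field names, and weights below are hypothetical illustrations of the idea; they do not describe the personalization logic actually implemented in the platform.

```typescript
// Hypothetical sketch of profile-based ranking of curated narratives.
// Field names and weights are illustrative assumptions only.
interface VisitorProfile {
  ageGroup: "child" | "teen" | "adult" | "senior";
  language: string;      // e.g., "el", "en"
  interests: string[];   // e.g., ["pottery", "Minoan"]
}

interface CuratedNarrative {
  id: string;
  language: string;
  targetAgeGroups: VisitorProfile["ageGroup"][];
  themes: string[];
}

function rankNarratives(
  profile: VisitorProfile,
  narratives: CuratedNarrative[]
): CuratedNarrative[] {
  const score = (n: CuratedNarrative): number => {
    let s = 0;
    if (n.language === profile.language) s += 2;              // prefer the visitor's language
    if (n.targetAgeGroups.includes(profile.ageGroup)) s += 2; // age-appropriate narratives first
    s += n.themes.filter((t) => profile.interests.includes(t)).length; // thematic overlap
    return s;
  };
  return [...narratives].sort((a, b) => score(b) - score(a));
}
```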

3. Methodology

The methodology followed for the development of the Invisible Museum is rooted in the HCD process [55], actively involving User Experience (UX) experts, domain experts (archaeologists and museum curators), as well as representative end-users. HCD is an approach to the software development process that focuses on the user, rather than the technology. The participation of end-users starts from the first stages of the design and continues until the end of the development. In addition, the method of iterative design and development is followed, which includes multiple evaluations by experts and end-users at various stages of the application design and development, which may lead to refining the prototypes, the identified requirements, or the specifications of the context of use. The process is based on the use of techniques for communication, interaction, empathy, and stimulation of participants, thus gaining an understanding of their needs, desires, and experiences. Thus, HCD focuses the questions, ideas, and activities on the people whom the system addresses, rather than on the designers' creative process or the characteristics of the technology [56]. In particular, four main phases are involved in the process (central circle in Figure 1): (a) understanding and specifying the context of use, (b) specifying user requirements, (c) producing design solutions and prototypes, and (d) evaluating the solutions.

Figure 1. The methodology followed for the design and development of the Invisible Museum platform.

The Invisible Museum applies different methods in the context of each phase, aiming to ensure diversity and elicit the best possible results in terms of quality and validity. In particular:
• For the specification of the context of use and user requirements, semi-structured interviews [57] with museum curators were conducted, as well as co-creation workshops [58] with end-users, archaeologists, and curators. Following the initial requirements, a use case modeling approach [59] was adopted, during which user types and roles (actors) were identified, and use cases were specified and described in detail. The results of the user requirements methods are presented in Section 4.

• The mockup design phase followed, during which initial mockups were created and evaluated through heuristic evaluation [60] by UX experts. Based on the feedback acquired, an extensive set of mockups for the Web and VR environments was designed, which was in turn evaluated by end-users, as well as by domain and UX experts, following a group inspection approach [61]. The evaluation results were addressed by redesigning the mockups, which were used for the implementation of the Invisible Museum. Results from the mockup design and evaluation are reported in Section 5.
• The implementation phase followed, during which the platform was developed, adopting the designed mockups and fully addressing the functional requirements. The platform itself is presented in Section 6, and the entailed challenging technical aspects are discussed in Section 7.
• Finally, the implemented system was evaluated by experts following the cognitive walkthrough approach [62], with the aim to determine whether platform users would be able to identify the correct action to take at each interaction step and appropriately interpret system output. The results of this evaluation are described in detail in Section 7.

It is noted that all the activities involving human subjects, namely interviews, co-creation workshops, and evaluation activities, have received the approval of the Ethics Committee of the Foundation for Research and Technology–Hellas (Approval date: 12 April 2019; Reference number: 40/12-4-2019). Prior to their participation in each research activity, all participants carefully reviewed and signed the user consent forms that they were given. Informed consent forms have been prepared in accordance with the General Data Protection Regulation of the European Union [63] and have been approved by the Data Protection Officer of the Foundation for Research and Technology–Hellas.

4. User Requirements

4.1. Interviews with Curators

To understand the context of use and identify the habits, procedures, and preferences of museum archaeologists and curators in the preparation of museum exhibitions and exhibits, semi-structured interviews with five (5) representative participants were carried out, using a questionnaire that was created for this purpose. The questionnaire was initially designed based on literature research and a review of best practices in the field. Then, a pilot interview was conducted, the conclusions of which provided feedback and shaped the final questionnaire that was used for the interviews. In the context of the interview, experts were invited to envision a system for the creation of VMs and to point out their needs and expectations by responding to various questions. A top–down approach was followed for ordering the questions, moving from more general to more specific issues. The questionnaire involved both closed-ended and open-ended questions, to prioritize preferences, but also to allow interviewees to develop spontaneous responses [64]. The questionnaire was structured in three main sections:
1. Background and demographic information of the participant.
2. Digital museum exhibitions, asking participants to describe how the content of a digital museum exhibition should be organized, how digital exhibition rooms should be set up, and how a digital exhibit should be presented.
3. Digital tours, exploring how tours of the digital museum should be organized and delivered to end-users.

An unstructured discussion followed each interview, allowing participants to further elaborate on topics of their preference (for example, by expressing concerns or additional requirements), as well as on topics that were not fully clear or that raised the interest of the interviewer. The analysis of results, as presented below, identified preferences for the information to be included concerning digital exhibits and digital exhibitions, as well as regarding the creation of digital rooms and pertinent tours.

The information that all participants identified as necessary for the representation of a digital exhibit (Figure 2) was its title, material, dimensions of the respective physical object, chronological period, creator/artist/holder (depending on the exhibit type), multimedia content (e.g., images, videos, audio), as well as a narration. It is worth mentioning that 80% of the participants identified as important information the thematic category of the exhibit, its type, maintenance status, donor/owner, as well as a representative image of the exhibit to be used as a cover of its digital presentation. Other useful information suggested included a description of the exhibit (60%) and the location of the exhibit in the physical museum (if any) (20%).

Figure 2. Digital exhibit-related information.
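The fields that curators marked as necessary, important, or useful can be summarized as a simple record structure. The TypeScript interface below is a hypothetical sketch derived from the interview results reported above; it is not the platform's actual data schema.

```typescript
// Hypothetical record structure mirroring the exhibit-related information
// identified in the curator interviews; not the platform's actual schema.
interface DigitalExhibit {
  // Identified as necessary by all participants
  title: string;
  material: string;
  dimensions: string;                  // of the respective physical object
  chronologicalPeriod: string;
  creatorArtistOrHolder: string;       // depending on the exhibit type
  multimedia: { type: "image" | "video" | "audio"; uri: string }[];
  narration: string;

  // Identified as important by 80% of the participants
  thematicCategory?: string;
  exhibitType?: string;
  maintenanceStatus?: string;
  donorOrOwner?: string;
  coverImageUri?: string;              // representative image used as cover

  // Other suggested information
  description?: string;                // suggested by 60%
  physicalMuseumLocation?: string;     // suggested by 20%, if any
}
```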

As depicted in Figure 3a, the information to include to appropriately represent a digital exhibition was unanimously agreed by all participants (100%) to be its title and an introductory text, while a considerable majority (80%) responded that a digital exhibition is also characterized by one or more thematic categories and time periods. Other suggested information regarding a digital exhibition included curators' names (60%), multimedia (20%), and the image of a representative exhibit (20%). As illustrated in Figure 3b, when it comes to organizing digital exhibitions, participants identified chronological and thematic as the two most appealing options. However, additional options suggested were as follows: according to the user (e.g., age group, interests), according to a narrative that the curator would like to present and which may change from time to time, or according to the exhibits' location in the physical museum.


Figure 3. (a) Basic information for the representation of a digital exhibition; (b) organizing digital exhibitions.

Concerning the creation of digital rooms, all participants identified that the following information should always be included: exhibition title, descriptive text, thematic sections, and a multimedia narrative of the exhibition. Additional information suggested for describing a digital room included the chronological periods of the contained exhibits (80%), a room identifier (60%), as well as a musical background to be reproduced for visitors of the digital room (20%). Furthermore, participants identified as useful functionality the inclusion of ready-to-use templates of digital rooms, as well as the option of reusing features from existing digital rooms, while the option of creating digital rooms from scratch was considered mandatory.

Finally, for digital tours, participants identified that the creation of such tours should support virtual routes per thematic category (100%), per chronological period (100%), selected highlights by the curator(s) of a museum (80%), tours according to the available time of the visitor (80%), tours according to the age of the visitor (80%), free tours of the entire digital space (20%), and tours according to the personal interests, cognitive background, and cultural background of the visitor (20%).

Further open-ended questions explored how curators would prefer to collaborate to create digital content. Participants' responses identified the need for supporting different access levels for the various user types and stages of creating digital content. In particular, it turned out that one person should be identified as the main content creator of a digital exhibit or a digital exhibition, having full ownership; however, they could be supported by co-creators who should only have edit rights. Users of the platform should not have access to the pertinent digital content unless it has been marked by the owner as ready to publish.

4.2. Co-Creation Workshops

Co-creation workshops are a participatory design method that involves end-users of the system in the design process [65]. In such workshops, participants do not have the role of the service user but rather that of its designer. Thus, each participant, as an "expert of their expertise", plays an important role in the development of knowledge, in the production of ideas, and in the creation of concepts [66]. Three co-creation workshops were organized, involving a total of 20 participants, trying to achieve a balanced distribution of ages and genders (see Table 1). All participants were familiar with the internet and smart mobile devices, while several were familiar with virtual reality applications. In addition, six of the participants were domain experts (historians, archaeologists, and curators). The workshops were facilitated by a UX expert experienced in user research methodologies, as well as in participatory design and evaluation with end-users.

Table 1. Demographic information of participants in the co-creation workshops.

Gender: Male (10), Female (10)
Age: 20–30 (8), 30–40 (6), 40–50 (4), 50–60 (2)

Each workshop was structured into three main sections:
1. Introduction, during which the participants were informed about the objectives of the workshop.
2. Goal Setting, in the context of which the main objectives that the Invisible Museum platform should address were identified by the participants.
3. Invisible Museum Services, during which a detailed discussion on the specific services and functionalities that should be supported was conducted.

During the Goal Setting activity, participants were asked to present, through short stories, the main objectives of the Invisible Museum platform, making sure to discuss whom it addresses and what needs it serves. The activity was individual, and each participant was given a printed form to keep notes. At the end of the activity, each participant was asked to describe their scenarios, while the facilitator kept notes in the form of keywords on a board so that they would be visible to the entire group. Overall, throughout all workshops, a total of 52 mission statements were collected, which were then clustered into groups by assigning relevant or similar statements to one single group. After their classification, the resulting objectives of the Invisible Museum are as follows:
1. To convey to users the feeling of visiting a real museum.
2. To provide easy and fast access to exhibitions and exhibits, with concise yet sufficient information.
3. To constitute a tool that can be used to make culture and history universally accessible to a wide audience, in a contemporary and enjoyable approach.
4. To support storytelling through the combination of representations of physical objects enhanced with digital information and assets.
5. To keep users engaged by promoting content personalization according to their interests.
6. To make museums accessible to new audiences (e.g., individuals from other countries, individuals with disabilities, etc.).
7. To facilitate museums in making available to the public items that may be currently unavailable even in the physical museum (e.g., exhibits under maintenance, small items potentially kept in museum warehouses).
8. To support a more permanent presentation of and access to temporary exhibitions hosted by museums from time to time.
9. To promote local culture, as well as the work of less known artists, and help them with further promoting and disseminating it to a wider audience.
10. To support educational efforts toward making children and young people more actively involved in cultural undertakings by making them creators themselves.
11. To constitute a tool for multidisciplinary collaboration between researchers, teachers, curators, and historians.

For the Invisible Museum Services activity, participants were asked to think of and write down specific functionalities that they would want the system to support for: (a) museums, cultural institutions, or individuals who will be users of the system as content providers, that is, by creating digital content, namely exhibits, exhibitions, virtual rooms, and tours; and (b) end-users who will act as content consumers, by viewing the provided content. Participants were asked to record their ideas about functionality on post-it notes. Each participant presented their ideas to the group, while the facilitator collected the post-it notes, attached them to a whiteboard, and clustered them into groups according to how relevant they were to each other. The suggested functionalities from each workshop were recorded, and a final list of functionalities that should be supported by the system was created, making sure to remove duplicates and cluster similar suggestions under one entry. This final list, featuring 37 entries of desired functionality, was used to drive the use case analysis toward recording the functional and non-functional requirements of the system, which is summarized in the following subsection.

4.3. Use Case Analysis: User Groups and Consolidated Requirements

Taking into consideration the motivating scenarios, as well as the outcomes of the aforementioned interviews with curators and co-creation workshops, a use case analysis approach was followed to fully describe the functionality of the platform and the target users. A use case model comprises use cases, which are sequences of actions required of the system; actors, which are users or other systems that exchange information with the system being analyzed; and relationships, which link two elements showing how they interact [59]. The main actors identified for the Invisible Museum are (see also Figure 4):
1. Creators, who create interactive and immersive virtual 3D/VR exhibitions in a collaborative way.
2. Co-creators, who contribute to the creation of digital content.
3. Visitors, who browse the content of the platform and can be divided into two types: (a) Web visitors, who navigate the available digital exhibitions through a Web browser, and (b) VR visitors, who navigate the available virtual museums using a VR headset.

Figure 4. Actors (user roles) of the Invisible Museum platform.

Furthermore, a total of 39 use cases were developed to describe the functionality supported by the system for all the aforementioned user roles. Use cases were clustered into categories, as summarized in Table 2. For each category, a use case model was developed, illustrating how the use cases and actors are related (see Figure 5 for an example). This detailed use case analysis was used to drive the design of mockups as well as the development of the Invisible Museum.

Figure 5. Use case model for digital exhibits.


Table 2. List of use cases and pertinent actors, organized in categories.

User management
  Registration: Non-registered user
  Login: Registered user
  Forgot password: Registered user
  Change password: Registered user
  View user profile: Registered user
  Edit user profile: Registered user (for own profile), System administrator
  Delete user: System administrator

Digital exhibits
  View list of exhibits: User
  View exhibit details: User
  Create new exhibit: Creator
  Edit exhibit details: Creator, Co-Creator
  Delete exhibit: Creator
  Add multimedia content to exhibit: Creator, Co-Creator
  Delete multimedia content from exhibit: Creator

Digital exhibitions
  View exhibition list: User
  View exhibition details: User
  Create a new exhibition: Creator
  Edit exhibition: Creator, Co-Creator
  Delete exhibition: Creator
  Cooperative creation of exhibitions: Creator, Co-Creator

Digital tours
  View list of digital tours: User
  View tour details: User
  Create a tour: Creator
  Edit a tour: Creator, Co-Creator
  Delete a tour: Creator
  Start a tour: User
  Free navigation to an exhibition: User

User group management (for content co-creators)
  View user groups: Group member
  View the details of a user group: Group member
  Create a user group: Group administrator
  Edit a user group: Group administrator
  Delete a user group: Group administrator
  Add a member to a user group: Group administrator
  Remove a member from a user group: Group administrator

Search
  Search for exhibits: User
  Search for exhibitions: User
  Search for users: User

VR navigation
  VR navigation via a guided tour: User
  VR navigation through free exploration: User
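The access rules implied by the use cases (creators own content, co-creators only have edit rights, and content remains private until the owner publishes it) can be sketched as a small permission check. The role and action names below paraphrase Table 2 and the collaboration requirements of Section 4.1; the function itself is a hypothetical illustration, not the platform's authorization code.

```typescript
// Hypothetical permission sketch based on the use cases of Table 2 and the
// collaboration rules reported in Section 4.1; not the platform's actual code.
type Role = "creator" | "co-creator" | "visitor";
type Action = "view" | "edit" | "delete" | "publish";

interface Exhibition {
  ownerId: string;
  coCreatorIds: string[];
  published: boolean;
}

function canPerform(userId: string, role: Role, action: Action, exhibition: Exhibition): boolean {
  const isOwner = role === "creator" && exhibition.ownerId === userId;
  const isCoCreator = role === "co-creator" && exhibition.coCreatorIds.includes(userId);

  switch (action) {
    case "view":
      // Unpublished content is visible only to its owner and co-creators.
      return exhibition.published || isOwner || isCoCreator;
    case "edit":
      return isOwner || isCoCreator;  // co-creators have edit rights only
    case "delete":
    case "publish":
      return isOwner;                 // full ownership stays with the creator
    default:
      return false;
  }
}
```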

4.4. Motivating Scenarios

To better illustrate the overall user experience for each one of the identified user roles, motivating scenarios were developed, employing the results of the use case analysis. Two of these scenarios are provided in the following subsections, highlighting the main functions of the Invisible Museum for visitors, creators, and co-creators.

4.4.1. Exploring a Virtual Exhibition of the Historical Museum of Crete

Ismini is a 43-year-old craftswoman who likes the traditional art of pottery. Very recently, she heard about the Invisible Museum platform, which hosts many virtual exhibitions with ceramics. Therefore, she decides to sign up and discover all exhibitions with ceramics provided by individuals or authenticated providers, such as museums and galleries. She purposely applies some filters to see only exhibitions with ceramics officially provided by museums. She finds some particularly interesting exhibitions of pottery provided by the Historical Museum of Crete. Although she has had the chance to visit the museum multiple times in the past, she notices that one of the available virtual exhibitions consists of artifacts not displayed in the actual museum. She wonders why and instantly thinks that this may be because the museum has limited available space, which cannot host at the same time the numerous collections of artifacts in its possession. Taking that into consideration, she is extremely happy to have the chance to browse the unique collection of ceramics of the Historical Museum of Crete through a virtual tour.

Firstly, Ismini chooses to navigate through the virtual pottery collection using her web browser. Among several available tours, she prefers to freely navigate through the whole collection of exhibits. The platform provides instructions on how to navigate through the virtual exhibition to gain the most out of the provided information. She can focus on the displayed artifacts, view related information and narratives, as well as interact with 3D models, where available. In addition, the Web VR interface provides supplementary information, such as the actual time she has spent during her tour and the number of exhibits visited.

Since Ismini is equipped with a state-of-the-art VR headset, she feels eager to experience an immersive virtual tour through her Oculus Quest VR headset and interact with the virtual exhibits using the accompanying controllers. While being at home, she travels in the virtual world of pottery that enlightens her on the treasures of tangible and intangible cultural heritage.

4.4.2. A Visual Artist Creates a Virtual Exhibition

Manolis is an acknowledged artist who operates a studio aiming to teach art through interactive workshops, allowing his students to freely explore the different perspectives and interpretations of art. However, the restrictive measures against the COVID-19 pandemic led him to spend creative time in his studio on his own, creating a large collection of paintings and sculptures using various methods and materials. Having heard of the Invisible Museum platform, he considers taking the occasion to create a virtual exhibition of his artworks to share with his students and to stimulate discussions as part of a remotely held workshop that he plans to organize. After signing up, he starts adding new exhibits to the platform, accompanied by descriptive information regarding the methods and materials he used, as well as his interpretation of each work. At this point, he finds out that, apart from the ability to upload high-definition images and videos, he also has the chance to upload 3D models of his sculptures. Thus, using a third-party mobile application for 3D scanning, he manages very easily and in a very short time to convert his sculptures into detailed 3D models. Then, he uploads the corresponding files of the 3D models to the platform and previews the generated results. Next, he needs to embed them in a virtual exhibition space. In order to do so, he creates a new virtual exhibition room from scratch and selects the option to render the scene in 3D. Then, he activates the tool enabling him to insert virtual exhibits into the room, as well as adjust the surroundings accordingly to highlight his artworks. To achieve this, he adds the showcases that will enclose the exhibits, the appropriate lighting to emphasize their details, as well as decorative elements to create a more immersive experience reminiscent of his atelier.

Once Manolis has formed his virtual exhibition, he wants to give all his students access to the project. Thus, he enters their email addresses so that the system sends an invitation authorizing them to access the project. Lastly, Manolis, who is eager to exploit the capabilities offered by the platform, asks his students to add, as co-creators, the works that they have created themselves. Manolis, being the owner of the project, will be notified every time a new action takes place and will be able to approve or edit their entries before updating the exhibition on the platform. Eventually, after finalizing the virtual exhibition, they all decide to publish the collectively created virtual exhibition to the general public through the Invisible Museum platform, allowing visitors to view all the artworks and gain valuable insights through a virtual reality tour.

5. Iterative Design and Evaluation
The iterative design of the Invisible Museum involved two main evaluation methods: (a) heuristic evaluation of the designed mockups, applied iteratively from the first set of mockups that were designed until the final extensive set of mockups, and (b) group inspection of the final implemented mockups. This section presents these preliminary evaluation rounds in more detail.

5.1. Heuristic Evaluation
Heuristic evaluation is a usability inspection method, during which evaluators go through a user interface (UI), examining its compliance with usability guidelines, which are known as heuristics [60]. Heuristic evaluation is conducted by usability experts and is very beneficial in the early stages of the design of a UI, leading to interfaces free from major usability problems that can then be tested by users. The combination of heuristic evaluation with user testing has the potential to identify most of the usability problems in an interface [60].
For the evaluation of the mockups developed throughout the design phase of the project, three usability experts carried out a heuristic evaluation, pointing out usability issues that should be addressed. The process was iterative, leading to consecutive refinements of the designed mockups. Overall, the design process resulted in 111 final mockups, after producing a total of 284 mockup alternatives during the various design and evaluation iterations. Figure 6 depicts two versions of the screen for creating a digital exhibit, which went through six design iterations in total. Namely, the first (Figure 6a) and the final mockup (Figure 6b) illustrate how a creator can add multimedia content and create a narration for the exhibit. Overall, throughout the iterative evaluation process, evaluators focused on suggesting improvements to the UIs regarding:
• Addressing the user requirements and goals, as these had been defined in the previous phases of the methodology that was followed
• Achieving an aesthetic and minimalistic design
• Minimizing the potential for user error
• Ensuring that users would be able to easily understand the current status of the system and the displayed information
• Supporting content creators in creating and managing the content of digital exhibits and exhibitions in a flexible manner, adopting practices that they follow in the real world
• Following best practices and guidelines for the design of UIs



Figure 6. Add digital exhibit mockups. (a) The first mockup that was designed; (b) The final mockup.

5.2. Group Inspection
The user-based evaluation was performed following the method of group usability inspection [1]. In particular, three evaluation sessions were conducted, during which mockups of the system were presented to the group of participating users, who evaluated them through group discussions. Aiming to assess whether the requirements defined during previous phases (with regard to the aims and objectives of the system, as well as the functionalities that it should provide) were met by the final design, participants in the group inspection evaluation were the same as the ones in the co-creation workshops. Each evaluation session was facilitated by two usability experts, who were responsible for coordinating discussions and recording, in the form of handwritten notes, the problems identified by users. At a later stage, facilitators went through the entire list of problems identified throughout the three sessions to eliminate duplicates and prioritize findings.
In each session, the evaluation was performed using specific usage scenarios, according to which participants were guided through the functionality of the system via high-fidelity interactive mockups, following a logical flow of steps. The evaluation process was organized into three steps for each scenario:
1. Introduction: The facilitator presented the context and objectives of the scenario. Then, all the mockups/steps of the scenario were presented, asking participants to simply observe, without taking any further action.
2. Commenting: The facilitator went through the mockups one by one, this time asking participants to keep individual notes on any usability problems and possible design improvements.
3. Discussion: A final round of going through the mockups followed, asking each participant to share their observations and comments with the group for each mockup presented. A short discussion was held for each identified problem, exploring potential solutions that would improve the system.
In total, 5 scenarios were examined, involving 14 focal mockups representing the most important system screens and functions. More specifically, the following mockups were evaluated:
• Home page for three different user types: visitor, registered user, and content creator
• Digital exhibit screens for visitors and registered users
• Digital exhibition screens for visitors and registered users
• Screens for creating a digital exhibit addressing content creators
• Screens for creating a digital exhibition addressing content creators
All users were satisfied with regard to the potential of the system to address the aims and objectives that they had identified in the requirements elicitation phase, as well as with the integration of the desired functionality. In total, 62 unique usability problems were identified, pertaining to four main pillars:
1. Participants' preferences concerning the conveyed aesthetics.
2. Disambiguation of the terms that were used in the UI.
3. Suggestions about the content that the system should promote through the home page.
4. Suggestions for functions that would improve user efficiency (e.g., when creating a digital exhibition).

6. The Invisible Museum Platform
The Invisible Museum constitutes an integrated solution providing the potential to extend and enrich the overall museum experience, particularly in the areas of personal engagement, participation, and community involvement, as well as the co-creation of cultural values (Figure 7, online video: https://youtu.be/5nJ5Cewqngc). The presented platform is a complete framework and suite of tools designed to aid museum curators and individual users with no previous expertise in delivering customized museum experiences. More specifically, it provides curators with a set of tools to (a) create collections of digital exhibits and narratives, (b) design custom-made virtual exhibitions to display their collections of artifacts, (c) promote participatory content creation, (d) support personalized experiences for visitors, and (e) visualize digital museums as immersive VR environments, allowing visitors to navigate and interact with the content.

Figure 7. The Invisible Museum platform.

6.1. Creating Exhibits and Narratives
The Invisible Museum aims to attract a wide range of audiences, from museum and gallery curators to any individual inspired to display their collection of artifacts and other objects of artistic, cultural, historical, or scientific importance to the public through immersive VR experiences. In this context, to get full access to the content of the platform, users can register as individuals or professional organizations (Figure 8a). In the latter case, they have to submit appropriate official documents, which are assessed by system administrators before granting professional accounts. As a result, all the exhibits and exhibitions uploaded by professional organizations receive a verification badge to assist the platform visitors in identifying the pertinent content.

Figure 8. (a) The sign-up screen of the Invisible Museum platform; (b) User profile screen.

To create an exhibit, a curator has to upload its digital representation, add useful metadata, and create a narrative about it. More specifically, the required fields include the title, a short and a long description, associated categories, the dimensions of the corresponding physical artifact, and the type of the virtual exhibit (i.e., text, 3D model, image, sound, video, etc.). In addition to the predefined information categories, users can specify custom attributes by defining key-value pairs to describe any information they wish about the exhibit. Users are also able to create a narrative, comprising for example a sequence of historical events, to enrich the original document with contextual information. Curators can also define the visibility of the exhibit as private or public, in which case it becomes readily visible to all the platform users.
In order to facilitate the exhibit creation process, the presented platform offers intelligent mechanisms, such as automated text completion and semantic knowledge-based recommendations about existing entries in the platform's data storage. In addition, the platform incorporates all the necessary mechanisms to facilitate content translation based on popular third-party web services (i.e., Google Cloud Translation Services [67]), while at the same time it allows users to intervene by editing or adding translations manually. Moreover, curators who are native speakers are invited to contribute to content translation, acquiring in return greater visibility and better chances of appearing at the top of search results.
Through the user profile screen (Figure 8b), curators have access to their account settings and profile information, along with the profile analytics dashboard that includes graphical representations referring to the exhibitions' popularity (e.g., number of created exhibitions, views, shares, etc.).
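For illustration, a single exhibit record combining the required fields with user-defined key-value custom attributes might look like the following Python sketch; all field names and values here are illustrative assumptions rather than the platform's actual schema.

# A hypothetical exhibit record: required fields plus free-form custom attributes
# (field names are illustrative, not the platform's actual data model).
exhibit = {
    "title": "Minoan Amphora",
    "short_description": "Clay amphora with marine motifs.",
    "long_description": "A restored amphora decorated with octopus motifs.",
    "categories": ["Culture", "History"],
    "dimensions_cm": {"height": 42.0, "diameter": 28.5},
    "exhibit_type": "3d_model",        # e.g., text, 3d_model, image, sound, video
    "visibility": "public",            # or "private"
    "custom_attributes": {             # user-defined key-value pairs
        "excavation_site": "Knossos",
        "material": "terracotta",
    },
}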

6.2. Designing Dynamic Virtual Exhibitions
In addition to the aforementioned features, the Invisible Museum incorporates a flexible set of tools that allows individuals, from non-professionals to professionals, to unfold their creativity and create tailor-made virtual exhibitions without having programming or 3D design skills. For that purpose, the platform provides the Exhibition Designer, an integrated tool that enables users to create new virtual spaces in an intuitive and user-friendly way.
The Exhibition Designer is a visualization tool for 3D virtual spaces that enables curators to design virtual exhibitions based on existing ready-to-use templates, on exhibition rooms that they have created in the past, or even from scratch. Users first have to design and create the exhibition rooms by adjusting the walls in a two-dimensional floorplan. Then, the system generates a 3D representation of the exhibition, allowing users to add doors, windows, and decorative elements, such as furniture, floor/wall textures, and carpets. To add an exhibit to the 3D virtual space, users select the exhibit of their preference in the given formats (i.e., 2D images, 3D models, and videos) and position it in the exhibition space through the ray casting technique, as depicted in Figure 9. The available exhibits to select from include one's own exhibits or exhibits that are publicly shared by other users. Moreover, users may configure the exhibits' position, rotation, and scaling factors in all three dimensions. It is possible to select among different showcases to enclose the artifacts, such as glass showcases, stands, frames, etc. In addition, users can create the lighting scheme using different types of lights (i.e., ceiling/floor/wall lighting) and customize characteristics such as the color, intensity, distance, and angle of each light. Furthermore, the Exhibition Designer provides free navigation across the exhibition rooms using either the keyboard or the mouse, and it also enables creators to preview the VR environment of the exhibition as a simulation of the actual VR tour that will be generated after finalizing the virtual exhibition.
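The placement step can be approximated by a short ray-plane intersection sketch: a picking ray from the viewpoint is intersected with the floor plane to obtain the world-space position at which the selected exhibit is dropped. The helper below is an assumption-based Python illustration, not the Exhibition Designer's actual implementation.

import numpy as np

def place_exhibit_on_floor(ray_origin, ray_direction, floor_y=0.0):
    """Intersect a picking ray with the horizontal floor plane y = floor_y (illustrative)."""
    o = np.asarray(ray_origin, dtype=float)
    d = np.asarray(ray_direction, dtype=float)
    if abs(d[1]) < 1e-9:          # ray parallel to the floor: no placement point
        return None
    t = (floor_y - o[1]) / d[1]
    if t < 0:                      # intersection lies behind the viewpoint
        return None
    return o + t * d               # world-space position for the exhibit

# Example: a viewpoint at eye height looking slightly downwards.
print(place_exhibit_on_floor([0.0, 1.7, 0.0], [0.0, -0.3, -1.0]))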

Figure 9. Exhibition Designer, a 3D design tool for creating virtual museums.

6.3. Co-Creation of Virtual Exhibitions
The Invisible Museum facilitates the creation of virtual exhibitions in a collaborative fashion. In detail, the curator who creates a new exhibition, namely the owner, invites potential co-creators in the corresponding field of the "Create Exhibition" form, as long as they own an account on the platform. If they do not own an account yet, they receive an invitation link via email to create one.
Co-creators automatically receive a notification, either in the form of a pop-up message in the web browser or an email, to confirm their participation in the development of an exhibition (Figure 10a). Co-creators have editing permissions in exhibitions; however, their actions will not be instantly available to the public unless the owner approves them. The owner is responsible for reviewing all the entries, suggestions, and modifications made and eventually deciding whether to approve, edit, or discard them, as illustrated in Figure 10b. This feature works as a safety net toward creating consistent and qualitative content in the platform. More specifically, a notification mechanism informs the owner when edits or adjustments have been submitted by co-creators. Lastly, the platform allows users to exchange private messages, thus facilitating the collaboration between two or more parties (i.e., museums, organizations, etc.) through integrated communication channels.
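The review workflow described above can be summarized by the following minimal sketch, in which co-creator edits are queued as pending changes until the owner approves or discards them; all class, field, and method names are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PendingChange:
    co_creator: str
    description: str
    status: str = "pending"          # pending -> approved | discarded

@dataclass
class Exhibition:
    owner: str
    pending: List[PendingChange] = field(default_factory=list)
    published: List[PendingChange] = field(default_factory=list)

    def submit(self, change: PendingChange) -> None:
        # Co-creator edits are queued and the owner is notified.
        self.pending.append(change)
        print(f"Notify {self.owner}: new change by {change.co_creator}")

    def review(self, change: PendingChange, approve: bool) -> None:
        # Only approved changes become visible to the public.
        change.status = "approved" if approve else "discarded"
        self.pending.remove(change)
        if approve:
            self.published.append(change)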


Figure 10. (a) A review example within the Exhibition Designer; (b) Overview of modifications made by co-creators pending approval.

6.4. Personalized Suggestions and Exhibition Tours
One of the features of the platform is content personalization, aiming to attract and engage users by delivering targeted information according to their interests. Leveraging various recommendation algorithms based on collaborative filtering models, members of the platform are provided with personalized content to reduce the time and frustration of finding an exhibition of interest to visit.
As depicted in Figure 11a, users receive personalized suggestions based on their activity in the platform and their preferences (i.e., content that they mark as favorite). To avoid the cold-start problem of such algorithms, newly registered members are asked to select the classified thematic areas of their interest (i.e., Arts, Culture, etc.), as depicted in Figure 11b. The content classification emerged after several interview iterations with domain experts, resulting in a small set of dominant domains, namely Arts, Culture, Sports, Science, History, and Technology. These are further subdivided into categories to allow users to better identify their interests; for instance, the domain of Culture includes content related to Costumes, Diet, Ethics & Customs, Law, Writing, Folklore, Religion, Sociology, and Education.
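Purely as a rough illustration of the collaborative filtering idea, and not of the platform's actual recommendation engine, the sketch below scores exhibitions for a user from the favorites of similar users.

import numpy as np

# Rows = users, columns = exhibitions; 1 marks an exhibition a user has favorited.
interactions = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
])

def recommend(user_index, interactions, top_n=2):
    """User-based collaborative filtering with cosine similarity (illustrative only)."""
    norms = np.linalg.norm(interactions, axis=1, keepdims=True)
    normalized = interactions / np.where(norms == 0, 1, norms)
    similarity = normalized @ normalized[user_index]      # similarity to every user
    similarity[user_index] = 0                            # ignore self-similarity
    scores = similarity @ interactions                    # weighted vote per exhibition
    scores[interactions[user_index] == 1] = -np.inf       # hide already-favorited items
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, interactions))                         # indices of suggested exhibitions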


Figure 11. (a) Homepage: preview of recommended exhibitions; (b) Users initiate recommendation algorithms as they select the classified thematic areas of their interest.

Users can select an exhibition not only to view relevant information but also to experience a virtual museum tour (Figure 12). A virtual tour of an exhibition can consist of a creator-defined path displaying a subset of the overall exhibits. Multiple tours can be laid out to highlight and focus on different aspects of a specific exhibition. Indicative examples of virtual tours include (a) a free tour of the entire digital space displaying all the available exhibits, (b) the exhibition's highlights selected by the curator(s) of a museum, (c) tours according to the age of the visitor, and (d) tours per thematic category, per chronological period, and so on. The current version of the platform automatically creates at least one free virtual tour for each exhibition. To create a new virtual tour, the platform provides the option "Create tour", where curators are asked to enter information such as the language, title, short description, starting point of the tour, and sequence of the exhibits, as well as to upload audio files with music that may accompany users during the tour.
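A minimal sketch of how such a default free tour could be derived from an exhibition record is given below; the field names are hypothetical and do not reflect the platform's actual data model.

def create_free_tour(exhibition):
    """Build the automatically generated 'free' tour covering every exhibit (illustrative)."""
    return {
        "language": exhibition.get("language", "en"),
        "title": "Free tour",
        "short_description": "Visit the entire digital space and all available exhibits.",
        "starting_point": exhibition["entrance"],
        "exhibit_sequence": [e["id"] for e in exhibition["exhibits"]],
        "audio": None,   # optional background music uploaded by the curator
    }

tour = create_free_tour({"entrance": "room-1", "exhibits": [{"id": "ex-1"}, {"id": "ex-2"}]})
print(tour)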

Figure 12. Exhibition main screen.

Moreover, the platform allows users to search for a specific entry (i.e., exhibit, exhibition, creator, etc.) through a dedicated search field in the header menu of the web interface. Results are automatically provided through a mini popup window that displays entries containing the given keywords, grouped by exhibits and exhibitions. Users can view all the returned results categorized, filtered, and sorted according to their preferences (e.g., date of creation, views, etc.).

6.5. Visualization in VR, Navigation and Interaction
In the context of VM applications, the contribution of VR technology is essential for the establishment of an immersive virtual environment. Visitors of the Invisible Museum can navigate in 3D virtual exhibitions and interact with the exhibits using different means, such as (a) a web browser and (b) any VR headset that consists of a head-mounted display, stereo sound, and tracking sensors (e.g., Oculus Quest).
Navigation in VR is readily available upon the selection of a virtual tour. By initiating a virtual tour, the Exhibition Viewer, a Web-based 3D/VR application, provides the 3D construction of the exhibition area, giving prominence to the virtual exhibits. As illustrated in Figure 13a, the current exhibit of the virtual tour is indicated by a green-colored label, while the virtual exhibits that belong to the selected tour are highlighted in yellow and numbered according to their order. Exhibits that do not belong to a specific tour are indicated with an informative icon. Visitors can interact with any exhibit that they prefer and retrieve related information through the corresponding exhibit narrative, as provided by the exhibition creator (Figure 13b).


Figure 13. Virtual Reality (VR) tour. (a) VR exhibition tour; (b) Providing information about a selected exhibit.

The Exhibition Viewer recognizes whether a VR headset is connected to the computer and allows users to navigate through the virtual tour by activating their VR headset. In case the VR headset is not connected, the application loads the virtual exhibition in the web browser. In case the VR headset is connected but the visitor wishes to proceed through a web browser, the Exhibition Viewer offers the option to enable the VR tour later on. In addition, the Exhibition Viewer provides a menu with additional information about the virtual tour progress, i.e., the total number of exhibits, the elapsed time of the tour, the option to switch the current tour to the VR headset, settings, and lastly the full-screen view option.
The way that visitors can interact during a virtual tour depends on the selected navigation method (i.e., web browser or VR headset). In the case of web browser navigation, interaction is achieved by clicking on each exhibit. In the case of a VR headset, the interaction remains similar, although in a more immersive way, using the tracking sensors of the VR headset. Tracking a user's hand motion in the physical world results in the conversion of their movements into the virtual world, allowing interaction with the virtual exhibits and navigation in the virtual environment. In some VR headsets, tracking a user's hand motion can be achieved even without the need for a controller. Additionally, the visitor can interact by using an illustrating dot, which operates similarly to the mouse pointer.


7. System Architecture, Structure, and Implementation
7.1. Implementation Overview
The Invisible Museum was implemented following the Representational State Transfer (REST) architectural style, exposing the necessary endpoints that any client application (web or mobile) can consume to exchange information with the system and interface with its resources. The back-end API was developed using the NestJS framework, an advanced JavaScript back-end framework that combines robust design patterns with best practices, offering the fundamental infrastructure for building and deploying highly scalable and performant server-side applications. User authentication and authorization services were built upon the OAuth 2.0 industry-standard authorization protocol with the use of JSON Web Tokens. At the deployment level, the Web Services are packaged as containerized Docker applications.
The Invisible Museum is based on MongoDB [68], aiming to allow developers to work with its data naturally and intuitively, mainly due to its JSON data format. This approach ensures fast application development cycles, lowering the time needed to develop new features or address potential issues, thus making the platform itself very flexible to changes and upgrades. The main resources stored in the core database are the Users and their Exhibitions. Apart from these, resources such as Languages, Locations, and Files are also stored in the database and are referenced by the main models. The back-end services use libraries such as Mongoose, an Object Data Modeling (ODM) library for MongoDB and NestJS, to apply schemas to these entities, providing constrained resource specifications that prevent data inconsistency or, in some cases, even data loss. Resources are modeled after these specifications, and their models hold the attributes that describe them. Some of the primary attributes are shown in Figure 14.
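The platform itself applies these schemas through Mongoose on top of NestJS; purely as an illustration of the same idea in Python, the sketch below attaches a JSON Schema validator to a MongoDB collection so that documents missing required attributes are rejected. The database name, collection name, and fields are assumptions, not the platform's actual configuration.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")    # assumed local MongoDB instance
db = client["invisible_museum_demo"]                  # hypothetical database name

# Reject documents that miss required attributes, mirroring the role of the Mongoose schemas.
db.create_collection("exhibitions", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["title", "owner", "exhibits"],
        "properties": {
            "title": {"bsonType": "string"},
            "owner": {"bsonType": "objectId"},
            "exhibits": {"bsonType": "array"},
        },
    }
})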

Figure 14. Indicative database models.

7.2. Analyzing Narratives Using Natural Language Processing
Analyzing the free-text data for each exhibit and compiling them in a database, along with the information extracted from other entries, contributes to the creation of a comprehensive knowledge repository. In addition, effectively expressing the semantics of information and resources allows data to be shared and reused across application, enterprise, and community boundaries. In this context, the Invisible Museum initially focuses on (a) analyzing the textual input narratives of exhibits and (b) structuring the extracted information and insights to represent it in a specific logical way. To this end, a Natural Language Processor (NLP) was developed partially using spaCy [69], while the information extracted is structured according to the CIDOC-CRM.


7.2.1. Architecture and Process
The NLP consists primarily of two parts: (a) the statistical model and (b) the rule-based algorithms. One of spaCy's main features is the inclusion of statistical models used for text analysis. These statistical models identify named entities in the text, i.e., groups of words that have a specific meaning as a collective, for example, "World War II". Since the default models of the library proved to be insufficient for this project, lacking both accuracy and terms that could be recognized, a new model had to be developed. Using the training techniques provided by spaCy, a series of new models are being developed to identify key phrases, terms, and other entities that fit more accurately the content of the platform. The set of sample texts on which the models are being trained comprises pieces and excerpts written both by individuals and professionals, to cover as much of the expected user spectrum as possible.
The second part of the NLP is a ruleset on which the extrapolation of the data is based. The statistical model by itself requires a large sample to be trained in recognizing all the required entities in the text accurately, and it does not have the infrastructure required to draw the relationships between entities. Thus, the rules serve to complement the model and connect the separate pieces of information annotated by it. The NLP is designed according to patterns commonly observed in similar written pieces and takes into account multiple factors, such as the syntactic position of a word/entity, its lemmatized form, and the part of speech attributed to it. After a piece of information has been successfully codified, it is added to the platform's database, from where it can be retrieved.
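In outline, the two-part processing chain can be pictured as in the following sketch; the pipeline and rule function are placeholders rather than the project's actual artifacts (a blank English pipeline is used so the sketch stays self-contained, whereas the platform would load its custom-trained model).

import spacy

# In the platform, the custom-trained statistical model would be loaded here;
# a blank English pipeline keeps this sketch self-contained (so no entities are found).
nlp = spacy.blank("en")

def apply_rules(doc):
    # Rule-based step: connect the entities annotated by the statistical model.
    # The real ruleset also inspects syntactic position, lemmas, and part-of-speech tags.
    return [(ent.text, ent.label_) for ent in doc.ents]

def analyze_narrative(text):
    doc = nlp(text)            # statistical step: tokenization and named entity recognition
    return apply_rules(doc)    # rule-based step: extrapolate relationships between entities

print(analyze_narrative("El Greco was born in 1541."))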

7.2.2. Training a Statistical Model
First, a statistical model needs to be trained to identify all the different entities in a text, such as (a) PERSON: a person, real or fictional (e.g., Winston Churchill); (b) MONUMENT: a physical monument, either constructed or natural (e.g., the Parthenon); (c) DATE: a date or time period of any format (e.g., 1 January 2000, 4th century B.C.); (d) NORP: a nationality, a religious, or a political group (e.g., Japanese); (e) EVENT: an "instantaneous" change of state (e.g., the birth of Cleopatra, the Battle of Stalingrad, World War II); (f) SOE: "Start of existence", meaning a verb or phrase that signifies the birth, construction, or production of a person or object (e.g., the phrase "was born" in the phrase "Alexander was born in 1986."); and (g) EOE: "End of existence", meaning a verb or phrase that signifies the death, destruction, or dismantling of a person or object (e.g., the verb "died" in "Jonathan died during the Second World War").
Having properly established the list of entity types in the Invisible Museum, the training text sample was prepared to train the statistical model. To that end, the text was annotated manually and was then inserted into the training program. A sample-annotated text is presented in Figure 15.

Figure 15. Sample-annotated text.
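For illustration, a manually annotated sentence of this kind could be prepared for spaCy training roughly as follows, with character offsets marking each entity span; the exact annotation tooling and corpus format used for the platform may differ from this sketch.

import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
text = "Domenikos Theotokopoulos was born in 1541 in Heraklion."
spans = [(0, 24, "PERSON"), (25, 33, "SOE"), (37, 41, "DATE")]

doc = nlp.make_doc(text)
doc.ents = [doc.char_span(start, end, label=label) for start, end, label in spans]

# Serialized training corpus that can later be consumed by spaCy's training step.
DocBin(docs=[doc]).to_disk("train.spacy")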

Training a model requires as many texts with different expressions and content as possible, to cover the diverse and extreme cases that may be encountered in the platform. Afterwards, the model can be exported, and it is ready for use. The ruleset part of the NLP remains constant regardless of the statistical model. For instance, when a curator adds a narrative to an exhibit (Figure 16), the textual input from all steps is automatically combined and passed to the NLP for analysis.


Figure 16. Natural Language Processor (NLP) of exhibit's textual narrative.

First, the statistical model reviews the input and identifies the various entities contained therein. For this particular case, the results are presented in Figure 17.

Figure 17. NLP identified entities.

In this particular instance, the model performed perfectly, identifying all the entities in the text. It is worth mentioning that the phrase "The Greek" normally signifies nationality and would be identified as NORP, but in this particular context, it refers to a nickname given to Domenikos Theotokopoulos and is therefore correctly identified as PERSON. After the entities have been recognized and highlighted in the text, the set of rules draws the relationships between them, producing the information in the form in which it will be stored. In this example, the rules cover the birth and death of Domenikos Theotokopoulos. Specifically, the relevant rule in this case is the one covering text of the format PERSON (SOE DATE–EOE DATE). In this case, the NLP is instructed that the most probable date of birth is the first DATE entity, and the most probable date of death is the second one. It is worth mentioning that the SOE and EOE entities are optional, since the format PERSON (DATE–DATE) is also common among relevant texts and is thus covered by the rules.
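A simplified rendering of this rule, operating on the ordered entity sequence produced by the statistical model, is sketched below; it illustrates the idea rather than the platform's actual ruleset.

def extract_life_dates(entities):
    """Apply the PERSON (SOE DATE–EOE DATE) pattern to an ordered list of (text, label)
    entities; since the SOE/EOE markers are optional, the first DATE after the PERSON
    is read as the birth and the second as the death (illustrative only)."""
    person, dates = None, []
    for text, label in entities:
        if label == "PERSON" and person is None:
            person = text
        elif label == "DATE" and person is not None:
            dates.append(text)
    birth = dates[0] if dates else None
    death = dates[1] if len(dates) > 1 else None
    return {"person": person, "birth_date": birth, "death_date": death}

entities = [("Domenikos Theotokopoulos", "PERSON"), ("was born", "SOE"),
            ("1541", "DATE"), ("died", "EOE"), ("1614", "DATE")]
print(extract_life_dates(entities))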
by the rules.With With that that information information extracted,extracted, the next next step step is isto tostructure structure it according it according to the to subset the subsetof theWith of CIDOC the that CIDOC information model, model, which extracted, which is presented is the presented next in Figurestep in is Figure to18 structure. In 18detail,. In it detail, foraccording every for pie everyto cethe of piecesubset infor- of information, a new instance of the E1 class is created, since every other entity type is ofmation, the CIDOC a new model, instance which of the is E1 presented class is created, in Figure since 18 .every In detail, other for entity every type pie isce a of subclass infor- a subclass of E1. Afterwards, new inner classes are instantiated following the hierarchy mation,of E1. Afterwards, a new instance new of inner the E1 classes class areis created, instantiated since followingevery other the entity hierarchy type isof a Figure subclass 19 , of Figure 19, until the point is reached when all relevant data of the piece of information ofuntil E1. theAfterwards, point is reached new inner when classes all relevant are instantiated data of the following piece of informationthe hierarchy can of beFigure codified 19, can be codified in the properties of the branch. Going back to the example at hand, the untilin the the properties point is reached of the branch. when all Going relevant back data to the of example the piece at of hand, information the birth can of beDomenikos codified birthin the of properties Domenikos of the Theotokopoulos branch. Going wouldback to bethe an example entity at of hand, type “E67the birth Birth”, of Domenikos with the Theotokopoulos would be an entity of type “E67 Birth”, with the property “P98 brought property “P98 brought into life” having the value “Domenikos Theotokopoulos”. Finally, Theotokopoulosinto life” having would the value be an“Domenikos entity of type Theo “E67tokopoulos”. Birth”, with Finally, the property the date is“P98 stored brought in the the date is stored in the property “P117 occurs during”, which is a property inherited by intoproperty life” having “P117 theoccurs value during”, “Domenikos which Theois a propertytokopoulos”. inherited Finally, by thethe superclassdate is stored “E2 in Tem- the the superclass “E2 Temporal Entity”. The complete structure is sent in JSON format to the propertyporal Entity”. “P117 The occurs complete during”, structure which isis asent property in JSON inherited format byto the superclassback-end, where“E2 Tem- it is back-end, where it is processed and stored in the semantic graph database (see Section 7.3). poralprocessed Entity”. and The stored complete in the structure semantic is graph sent in database JSON format (see 6 to.3 theSemantic back- endGraph, where Database it is). The JSON formatting of the aforementioned example is presented in Figure 19. processedThe JSON and formatting stored inof the aforementionedsemantic graph exampledatabase is(see presented 6.3 Semantic in Figure Graph 19 .Database ). The JSON formatting of the aforementioned example is presented in Figure 19.


Figure 18. Narratives structured according to a subset of CIDOC-CRM (Conceptual Reference Model).

Figure 19. Structuring extracted information using the CIDOC (International Committee for Documentation of the International Council of Museums) model.

As the NLP is still being developed in the context of the Invisible Museum platform, the scope of the information that can be efficiently extracted and structured is limited and has not yet achieved the desired success rate. The models are being improved with new example texts, and the ruleset is being refined to include more types of information and to portray the ones already included more accurately. When completed, the NLP's scope is expected to cover the model presented in Figure 19. Users of the platform will be able to enhance exhibits' narratives by receiving automatic suggestions to include more information relevant to their exhibits, based on the contributions of other users.


7.3. Semantic Graph Database
The Invisible Museum platform supports a wide range of user-submitted exhibits. These exhibits can be supplemented with important metadata that enrich the overall information about them. One of the aims of the platform is to support and promote the use of Linked Open Data (LOD). To address that requirement, data about the exhibits are stored following the latest cultural object archival standards, so that they appear in a meaningful way in a cross-cultural, multilingual context. As illustrated in Figure 20, each exhibit is based on the EDM [15]. The accompanying narratives are modeled after the CIDOC-CRM (see Section 7.2).

Figure 20. Exhibits are modeled according to the Europeana Data Model (EDM).

Data modeled after the EDM empower the Semantic Web initiative and provide links to Europe's cultural heritage data with references to persons, places, subjects, etc. The EDM can also show multiple views on an object, including information about the actual physical object and its digital representations. As already mentioned, the exhibits' narrative is modeled based on the CIDOC-CRM, which is a theoretical model for information integration in the field of cultural heritage, providing definitions and a formal structure for describing concepts and their relationships. More precisely, narratives abide by the reduced CRM-compatible form. Both models inherently rely on graph representations of objects and concepts. This led to the decision of using the most appropriate tool to handle this kind of information, a graph database, and in particular, the Neo4j graph database [70]. Leveraging a graph database provides an efficient way of representing semantic data at an implementation level and contributes to LOD initiatives. Neo4j is an open-source NoSQL native graph database that is ACID-compliant, making it suitable for production environments requiring high availability. It implements constant-time graph traversals regardless of data size, is highly performant and scalable, and includes a rich and expressive query language (Cypher). Neo4j was specifically designed to store and manage interconnected data, making it a perfect fit for the interconnected nature of the Invisible Museum platform's exhibit metadata. Exhibits in the database are represented as nodes with attributes based on their metadata. Each exhibit has a digital representation implemented as a separate node that has a relationship with the root exhibit. Other children of an exhibit's root node can be its translations and narrations.
Access to the graph database is performed in real-time with fast response times to adequately support the platform's NLP services, a feat made possible by Neo4j's high-performance architecture.
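As an illustration of this structure, the following sketch uses the official Neo4j Python driver and Cypher. The node labels, relationship types, property names, and connection details are assumptions made for the example, not the platform's actual schema.

    from neo4j import GraphDatabase

    # Connection details, labels, relationship types, and properties are
    # illustrative only; the real schema follows the EDM/CIDOC-CRM mappings.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    CREATE_EXHIBIT = """
    MERGE (e:Exhibit {id: $id})
      SET e.title = $title
    MERGE (d:DigitalRepresentation {url: $url})
    MERGE (e)-[:HAS_REPRESENTATION]->(d)
    MERGE (t:Translation {language: $lang, title: $translatedTitle})
    MERGE (e)-[:HAS_TRANSLATION]->(t)
    """

    with driver.session() as session:
        session.run(CREATE_EXHIBIT, id="exhibit-042", title="Cretan loom",
                    url="exhibit-media/exhibit-042/model.glb",
                    lang="el", translatedTitle="Κρητικός αργαλειός")

        # Traverse from the root exhibit node to its children (representations,
        # translations, narrations)
        for record in session.run(
                "MATCH (e:Exhibit {id: $id})-[r]->(c) RETURN type(r) AS rel, c",
                id="exhibit-042"):
            print(record["rel"], dict(record["c"]))

    driver.close()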

7.4. High-Performance File Storage
Apart from the exhibits' metadata, there are also digital representations of the actual physical exhibits. These representations can be videos (mp4), photos (jpg, png), text, sounds (mp3, wav), or 3D files (fbx, glb) that need to be stored effectively, as they constitute the core content of the platform.


To accommodate the multimedia file storage needs, the MinIO object storage suite (MinIO, Inc., Palo Alto, CA, USA) was chosen as the preferred solution: a high-performance, software-defined, open-source distributed object storage system [71]. MinIO protects data from silent data corruption, a problem faced by disk drives, using the HighwayHash algorithm, preventing corrupted data reads and writes. This approach ensures the integrity of the data uploaded to the platform's repositories. Apart from data integrity, MinIO also uses sophisticated server-side encryption schemes to protect data, assuring confidentiality and authenticity. MinIO can run in the form of distributed containers, providing an effective solution to scale the system depending on the demand for its data resources. Each file uploaded to MinIO is assigned an identifier and can be referenced through it from the main database's models, eliminating the need to rely on fragile file system structures that cannot be easily adapted to future changes and are not developer-friendly. Every file is also accessible by a system-generated URL that is stored in the corresponding database model.
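A minimal sketch of this flow with the MinIO Python SDK is shown below; the endpoint, credentials, bucket, and object names are placeholders rather than the platform's actual configuration.

    from minio import Minio

    # Endpoint, credentials, bucket, and object names are placeholders.
    client = Minio("minio.example.org:9000", access_key="ACCESS_KEY",
                   secret_key="SECRET_KEY", secure=True)

    bucket = "exhibit-media"
    if not client.bucket_exists(bucket):
        client.make_bucket(bucket)

    # Upload a digital representation; the object name acts as the identifier
    # referenced by the main database's models.
    object_name = "exhibit-042/model.glb"
    client.fput_object(bucket, object_name, "local/model.glb",
                       content_type="model/gltf-binary")

    # Obtain a URL through which the file can be retrieved (a presigned,
    # time-limited URL is used for this sketch).
    print(client.presigned_get_object(bucket, object_name))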

7.5. Delivering Web-Based 3D/VR Experiences
The Web portal of the Invisible Museum is its main access point, where users can view or create content through various means. The Web application was built using the Angular front-end framework, which supports building features quickly with simple, declarative templates [72]. It can achieve maximum speed and performance, as well as meet huge data requirements, offering a scalable web app solution. The front-end architecture implementation takes advantage of the Angular module system to create a maintainable code-base. The main modules are derived from the major resources of the platform, namely its Users, Exhibits, and Exhibitions, while separate modules are used to handle Authentication and User Profiles. Modules implement their business logic in the form of Angular components and their services. These components are reusable by any service via dependency injection, minimizing duplicate code. Some of the functions that these services are responsible for are user registration and authentication, exhibit and exhibition creation, and user profile customization (profile picture, name, email, password, etc.). The Web applications delivered by the Invisible Museum support responsive design principles by incorporating the Bootstrap front-end framework and use Sassy CSS (SCSS), an extension to basic CSS, to style and customize UI components. Thus, the user experience on both desktop and mobile devices is equally pleasing and intuitive. Taking into consideration that visiting a virtual exhibition is an experience best fitted to a six-DOF (degrees of freedom) headset, where the user can navigate the environment as freely as possible while maintaining most of their field of vision, it was crucial to support both desktop and VR headset implementations of the Invisible Museum. To deliver Web-based 3D/VR experiences, the Invisible Museum is based on A-Frame, a Web framework for building 3D/AR/VR applications [73], both for the Exhibition Designer and the Exhibition Viewer. To this end, the Invisible Museum supports all WebXR-enabled browsers available for both desktops and standalone VR headsets (e.g., Oculus Quest, HTC Vive) [74].

7.6. Enhancing Photorealistic Renderings
To deliver reasonably immersive virtual tours, high-fidelity visuals were of the utmost importance. The latter was even more difficult taking into account the Web-based 3D/VR nature of the platform, which on the one hand enables multi-platform access with minimal requirements, while on the other it is not as powerful as native 3D applications built with popular game engines. A-Frame [73], the Web framework for building 3D/AR/VR applications based on HTML/JavaScript, is useful for multi-platform applications, but it comes at a cost. Performance tests revealed that there is an upper limit on the number of light sources a scene can have before affecting the performance of the application and aggravating the user experience in VR (e.g., frame drops and laggy hands). Making matters worse, its rendering engine calculates only direct lighting.

However, not having any kind of indirect lighting can disengage users from the immersive experience that the presented platform aims to achieve, simply because a scene that tends to be flat-shaded falls into the uncanny valley of computer graphics. An indicative example is illustrated in Figure 21a.

Figure 21. (a) High ambient lighting and a directional light used as a sun; (b) Low ambient lighting with a point light centered on the user.

Nevertheless, pushing the framework to its limits to fully explore its weaknesses seemed very challenging, so different approaches to lighting the scene were followed. After some time of experimenting, some basic guidelines emerged, aiming to make the visualization of an exhibition room more attractive while at the same time saving performance. These guidelines include the following principles:
• The less ambient lighting, the more realistic the scene looks,
• Windowless interior spaces are preferable, since windows increase the cognitive expectations of lighting with no payoff,
• Use point lights or spotlights in combination with ambient lighting,
• Having more than 5–6 lights in the case of standalone VR headsets should be avoided, since it will significantly reduce performance.
After following the aforementioned guidelines, the scene looked more alluring, as depicted in Figure 21b. In this setup, only one point light was used, constantly following the user, combined with the use of ambient lighting. In another experiment, one light source and three to four point lights were used to further enhance the scene (Figure 22).

Figure 22. Low ambient lighting, one point light per two exhibits, and one centered on the user.

The performance of the latter setup was the ideal outcome given the available resources of the real-time lighting pipeline of A-Frame. However, it might entail some drawbacks, since it adds limitations and increases the complexity of the final VR application. For example, it would not be feasible for large exhibitions with numerous point lights for each exhibit. In addition, this approach does not scale appropriately for 3D objects (e.g., statues, pottery) that need to receive and cast shadows (Figure 23a).


Figure 23. (a) Real-time rendering of A-Frame; (b) A-Frame running with the baked textures, with 0 actual lights in the scene.

To boost graphics performance, it was important to figure out how to enable baked lighting on dynamic A-Frame scenes, since it is common practice to integrate the static lighting of the scene into the textures of static objects and then remove it from the light calculations (Figure 23b). Since, in most museum exhibitions, the objects are static, the lighting and shading in the scene should be baked. In this way, the only performance cost would be the rendering, which would result in drastically increasing the frame rate. Given that A-Frame does not support baking functionality, a new pipeline, namely Blender's Bakery pipeline, was integrated with the provided system. Blender is a free, open-source 3D computer graphics software suite [76]. Its main use in the industry is 3D modeling and animation, but it can also do baking and scripting, making it a perfect match for the specific case.
The pipeline is activated automatically at the time a curator finishes the design of a virtual exhibition space using the Exhibition Designer. The latter serves real-time, low-fidelity graphics rendered by A-Frame and saves user changes continuously to the database. On completion, Blender's Bakery pipeline is triggered. The baking process takes place in Blender, using its ray-tracing rendering engine, called "Cycles", and produces as output a .glb file that is stored back to the database. As a result, when visitors start an exhibition tour, a photorealistic virtual space will be activated. The latter is characterized by great performance, since there is no need to simulate illumination in the scene, making the number of polygons in the scene the only performance limitation.
As illustrated in Figure 24, the Exhibition Designer stores to the database a 2D tilemap, the position, rotation, and scale of each object (i.e., exhibit), and information about the light sources. The tilemap is a two-dimensional array containing integers, and it is used to represent the building of the museum.

Figure 24. Blender's Bakery pipeline.

For example, the walls are represented by the value "1", and the floor and ceiling are represented by the value "0". In the next steps of the presented pipeline, the Blender's Bakery pipeline service receives the raw data and recreates the virtual space, as depicted in Figure 25a.
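To make the stored structure concrete, the following sketch shows the kind of payload the Exhibition Designer could persist for a small room. All field names and values are illustrative assumptions, not the platform's exact schema.

    # Field names and values are illustrative assumptions, not the exact schema.
    scene = {
        # 2D tilemap of the museum building: 1 = wall, 0 = floor/ceiling
        "tilemap": [
            [1, 1, 1, 1, 1],
            [1, 0, 0, 0, 1],
            [1, 0, 0, 0, 1],
            [1, 1, 1, 1, 1],
        ],
        # Placement of each exhibit: position, rotation (degrees), and scale
        "objects": [
            {"exhibitId": "exhibit-042",
             "position": [2.0, 1.5, 1.0],
             "rotation": [0.0, 90.0, 0.0],
             "scale": [1.0, 1.0, 1.0]},
        ],
        # Light sources placed by the curator
        "lights": [
            {"type": "ambient", "intensity": 0.2},
            {"type": "point", "position": [2.5, 2.5, 2.0], "intensity": 0.8},
        ],
    }

    def wall_cells(tilemap):
        """Yield (row, column) cells that the scene recreator turns into wall objects."""
        for r, row in enumerate(tilemap):
            for c, value in enumerate(row):
                if value == 1:
                    yield r, c

    print(list(wall_cells(scene["tilemap"])))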



Figure 25. (a) The result of the "Blender scene recreator": every wall, floor, and ceiling is a unique object generated by the tilemap; (b) The result of the "SceneMerger": all the objects that fit the same category are merged into one.

Major factors affecting baking process time should be considered, such as (a) the sample count and (b) the number of objects in the scene. The sample count is the number of iterations of light calculations that Blender should execute. The higher this number is, the more detail and the less noise is observed on the bakes. The aforementioned number of iterations needs to be repeated for every object in the scene, yielding a time complexity of O(sampleCount × objectsCount). That means that the sample count and object count should be reduced as much as possible without losing detail or introducing too much noise. Taking the above into consideration, to reduce the total object count, all the objects that fall in the same category should be merged into one object. As illustrated in Figure 25, objects that are grouped in the same category include the floor tiles, wall tiles, etc. In general, fewer objects mean fewer textures and thus a smaller file size. However, results show that not all walls should be merged into one object: the more objects are merged into one, the less space each face has in the UV map of the texture, and hence the less detail. It has been estimated that a balance of 30–40 objects per merge in the same category provides satisfactory results.
The baking process was developed in Python [75] using the Blender 2.91 API [76]. The process starts by creating a new UV map ("bakedUV") for each object and uses Blender's Smart UV unwrapping with an island margin of 0.1 to automatically create its UV map. Smart UV unwrapping is not required for exhibits, since exhibits should already be UV unwrapped if they have a material applied to them. To identify whether an object is an exhibit, it is properly named in step 1 of the Blender's Bakery pipeline. Afterwards, the material of the object is created, in case it does not yet exist. In the material node network, for each object, an image texture node is added, and a texture image is created, which is properly named according to the name of the object. For example, if the object is called "Museum_Wall", an image called "Museum_Wall.png" will be created, where the baked texture will take place. It has been estimated that textures of 2048 × 2048 pixels bring a good balance in file size, detail, and render time. Next in the pipeline, Blender is instructed to bake the diffuse lighting (direct, indirect, and color) to the texture. When the for-loop is over, the pipeline ends up with one image for each object that includes the appropriate lighting baked.

Depending on the sample count, the final texture may have little to plenty of salt-and-pepper noise. As a countermeasure, after the bake operation is complete, the images are denoised using a medianBlur filter with a kernel size of 3. Median filters are considered ideal for denoising that kind of noise, and OpenCV makes them easy and simple to apply. Then, the textures pass through a second filter, called a Box Filter, for finer removal of the noise artifacts, at the cost of blurring the texture to a small degree.
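A minimal sketch of this denoising step with OpenCV might look as follows; the file name is a placeholder.

    import cv2

    def denoise_baked_texture(path):
        """Clean salt-and-pepper noise left in a baked texture by low sample counts."""
        image = cv2.imread(path)
        image = cv2.medianBlur(image, 3)          # 3x3 median: removes salt-and-pepper
        image = cv2.boxFilter(image, -1, (3, 3))  # finer smoothing, slight blur cost
        cv2.imwrite(path, image)

    denoise_baked_texture("Museum_Wall.png")  # placeholder file name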



After denoising, the scene is exported as a single GLB file. Note that when exporting the model from Blender, all the UV maps should be deleted except for the one created, on which the bake was based. The exported file is stored in the database, notifying the curator that the process has been completed. The results, as rendered in A-Frame, are shown in Figure 26a. To demonstrate how promising this technique is, a comparison of a ray-traced render with the baked version is provided in Figure 26b.
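The export step can be sketched with the glTF exporter bundled with Blender; the output path and the UV map name are placeholders consistent with the sketch above.

    import bpy

    # Keep only the UV map the bake was based on, then export the scene as one GLB
    for obj in [o for o in bpy.data.objects if o.type == 'MESH']:
        for uv in [layer for layer in obj.data.uv_layers if layer.name != "bakedUV"]:
            obj.data.uv_layers.remove(uv)

    bpy.ops.export_scene.gltf(filepath="/tmp/exhibition.glb", export_format='GLB')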


Figure 26. (a) Image rendered using Cycles, the ray-tracing engine of Blender; (b) Blender's Bakery pipeline with 512 samples.

8. Evaluation Results
A cognitive walkthrough for the evaluation of the Invisible Museum was conducted with the participation of three (3) UX experts and one (1) domain expert. During a cognitive walkthrough evaluation, the evaluator uses the interface to perform tasks that a typical interface user will need to accomplish, aiming to assess system actions and responses considering the user's goals and knowledge [62]. The evaluation is driven by a set of questions to which the evaluator aims to respond in each interaction step. In particular, for the evaluation of the Invisible Museum, the Enhanced Cognitive Walkthrough method was followed, according to which the evaluator has to respond to the following questions [77]:
1. Will the user know that the evaluated function is available?
2. Will the user try to achieve the right effect?
3. Will the user interface give clues that show that the function is available?
4. Will the user be able to notice that the correct action is available?
5. Will the user associate the right clue with the desired function?
6. Will the user associate the correct action with the desired effect?
7. Will the user get sufficient feedback to understand that the desired function has been chosen?
8. If the correct action is performed, will the user see that progress is being made toward the solution of the task?
9. Will the user get sufficient feedback to understand that the desired function has been performed?
Each question is answered with a grade from 1 to 5, where 1 corresponds to a very small chance of success and 5 corresponds to a very good chance of success. If a problem is identified with seriousness from 1 to 4, then the problem needs to be classified into one of the following types:
• User: the problem is due to the user's experience and knowledge.
• Hidden: the UI gives no indications of a function or how it should be used.
• Text/Icon: text or icons can be easily misinterpreted or not understood.
• Sequence: an unnatural sequence of functions and operations is provided.
• Feedback: insufficient feedback is provided by the system.


Before each evaluation session, the evaluator was introduced by the facilitator to the aims and objectives of the Invisible Museum, as well as to who its users are. To assist evaluators, a set of predefined tasks were given to them to execute. In particular, the tasks were as follows:
• Task 1: Log in to the system and view the home page (user type: visitor).
• Task 2: Search for an exhibition (user type: visitor).
• Task 3: View an exhibition (user type: visitor).
• Task 4: Take a tour of an exhibition (user type: visitor).
• Task 5: Add a digital exhibit (user type: curator).
• Task 6: Create a digital exhibition (user type: curator).
• Task 7: Create a tour for an exhibition (user type: curator).
Evaluators carried out the cognitive walkthrough in separate one-hour sessions. A facilitator observed the evaluators while they were interacting with and commenting on the system and kept notes regarding whether they tried and achieved the desired outcome. In addition, the facilitator recorded the steps that seemed to trouble the evaluators throughout the experiment, based on their comments and their actions. In total, thirty-five (35) problems were identified for all the tasks that evaluators carried out. The results of the evaluation per task are presented in the tables that follow. Table 3 presents the number of problems identified per task according to their seriousness, while Table 4 presents the number of problems identified per problem type.

Table 3. Problem seriousness per task.

Task       Problem Seriousness
           1    2    3    4    5
Task 1     1    2    1    0    0
Task 2     0    2    0    0    0
Task 3     1    2    1    1    0
Task 4     1    4    2    3    1
Task 5     1    1    1    0    0
Task 6     1    2    2    0    0
Task 7     2    2    1    0    0

Table 4. Problem type per task.

Task       Problem Type
           User   Hidden   Text/Icon   Sequence   Feedback
Task 1     1      0        3           0          0
Task 2     0      0        2           0          0
Task 3     1      2        1           1          0
Task 4     2      3        4           1          1
Task 5     2      1        0           0          0
Task 6     1      2        1           1          0
Task 7     2      1        2           0          0

In brief, the most severe problems identified referred to the lack of adequate information about the exhibits included in an exhibition and its tours, which was assessed by evaluators as troublesome for users. The lack of more detailed guidance during a tour was also assessed as an issue that may confuse users, particularly those who are novices in such VR environments. For curators, evaluators identified that the functionality for creating and editing a 3D virtual space can present difficulties, especially since some functionality appears to be hidden (e.g., a collection of tools to pan, grab, or measure the virtual space is only available by hovering over the selected one).
Overall, evaluators praised the clear design, the UI consistency, and the provision of feedback at all times, as well as the fact that the functions seemed to be well integrated into a logical flow.

They pointed out that some improvements could be made concerning the terminology used, which might be particularly beneficial for users who are not domain experts (e.g., non-professional content creators). Finally, they noted that despite the errors that were identified, they expect that users will be able to easily overcome them once they become familiar with the system.

9. Discussion
Overall, the design, implementation, and validation of this work have provided enough evidence to validate the appropriateness of the conducted research. The Invisible Museum provides solutions to several existing limitations of state-of-the-art solutions and is based on open standards and technologies popular in the digital CH sector to ensure its applicability and further exploitation. Of course, several issues are still considered open, as addressing them would require research efforts that exceed the scope of this research work and the acquired funding. This section discusses the outcomes of this work and attempts to provide reusable knowledge regarding three directions: (a) potential guidelines that come out of the lessons learned and are generalizable for the domain, (b) limitations that should be noted for driving further research endeavors, and (c) technical limitations and the need for new technologies to fully implement the vision of this research work.

9.1. Lessons Learned
Through the iterative evaluation involving both experts and end-users, several conclusions were drawn concerning the design of virtual museum environments, addressing not only cultural institutions but non-professional content creators as well. These conclusions, which are based on observations and suggestions that were made both by experts and users, highlight the following lessons learned:
• User experience in such environments is heavily content-driven: no matter how good a UI is, it is the content itself that is crucial for delivering a high-quality user experience.
• Minimalistic UI is of utmost importance: given that the content to be delivered includes compelling graphics exhibiting a wide variety, the UI needs to be minimal to support the content in the most unobtrusive way.
• Content personalization is a must-have: for virtual spaces that accumulate content of such a wide variety, it is necessary to support personalization, providing to each user content that is relevant to their interests.
• Flexibility in data structures is the way to go: given the diversity of content, it is impossible to provide a fixed classification suitable for every digital exhibit to be added. Supporting content description by user-defined key-value pairs is a good solution to achieve universality.
• Presentation through narratives is a win–win solution: given that narratives guide storytelling experiences and bind the presented artefacts with their socio-historic context, they are highly beneficial for end-users; at the same time, analyzing free-text narrative data for each exhibit can contribute to the creation of a comprehensive knowledge repository, which is compliant with standards for content classification and therefore valuable for information discovery and exploration.
• Step-by-step functions can be a life-saver for complicated procedures: creating a digital exhibition featuring virtual tours is a quite complicated procedure, which was the one that required most of the design iterations. Breaking the process into concrete steps turned out to be the most appealing solution, approved by all evaluators.
• 3D environments disrupt the sequential nature of tours: this can be highly beneficial, since it allows users to experience tours as in physical spaces (by deviating from predetermined routes); however, it needs to be designed with caution, since the user may get lost in such virtual 3D environments. To address this challenge, shortcuts for returning to the tour, as well as information regarding one's whereabouts in the virtual environment, are useful features.

Process-wise, the iterative approach followed throughout the process, as well as the emphasis on involving domain experts from the first stages of the project was crucial for achieving a prompt delivery of results, as well as for minimizing the need for major changes at later stages of the platform. Therefore, although a considerable effort was required for the iterative heuristic evaluation of the designed mockups, this resulted in rather small numbers of problems identified at later stages, which were easier to address and did not require a major redesign in terms of UI, system architecture, and software algorithms.

9.2. Limitations
With regard to the requirements elicitation, a limitation is that the participating curators came from one single museum, a historical museum. As such, different museums might require additional functions from such a platform. To address this limitation, the authors have carried out, besides the interviews and workshops, a detailed analysis of related efforts in the field. As a result, the platform that was developed satisfies the requirements identified by users but also provides the necessary infrastructure to dynamically support any type of content, facilitating its proper classification and supporting a personalized user experience for all the Invisible Museum visitors.

9.3. Further Technical Considerations
Considering technical limitations, the scope of the information that the NLP can efficiently distinguish and codify is limited, and it has not yet achieved the desired success rate. The statistical models and rulesets have to be trained with new example texts in order to include more types of information and portray the ones already included more accurately. In the end, users can include more information relevant to their exhibits in their narratives, based on the contributions of other users on similar subjects.
The adopted Web framework for building 3D/AR/VR applications based on HTML/JavaScript is useful for multi-platform applications but sets a limitation regarding the number of light sources a scene can have before affecting the performance of the application and aggravating the user experience in VR. To address this limitation, a baking service was developed, as presented in Section 7.6. As a result, reasonably immersive virtual tours with high-fidelity visuals can be delivered to the museum visitors.

10. Conclusions and Future Work
This work presented the Invisible Museum, a user-centric platform that allows users to create interactive and immersive virtual 3D/VR exhibitions using a unified collaborative authoring environment. In summary, the Invisible Museum offers (a) user-designed dynamic virtual exhibitions, (b) personalized suggestions and exhibition tours, (c) visualization in Web-based 3D/VR technologies, and (d) immersive navigation and interaction in photorealistic renderings.
The platform differentiates itself from all previous similar works in the sense that it is a generic technological framework not paired to a specific real-world museum. Thus, its main ambition is to act as a generic platform to support the representation and presentation of virtual exhibitions. The representation adheres to domain standards such as CIDOC-CRM and EDM and exploits state-of-the-art deep learning technologies to assist the curators by generating ontology bindings for textual data. At the same time, the virtual museum authoring environment was co-designed with museum experts and provides the entire toolchain for moving from the knowledge and exhibit representation to the authoring of collections and virtual museum exhibitions. An important aspect of the authoring part is the semantic representation of narratives that guide storytelling experiences and bind the presented artifacts with their socio-historic context. The platform transforms this rich representation of knowledge into multimodal representations by currently supporting Web-based 3D/VR immersive visiting experiences.

The platform itself was designed following a Human-Centered Design approach with the collaboration of museum curators and personnel of the Historical Museum of Crete, as well as a number of end-users. Different evaluation methodologies were applied throughout the development cycle of the Invisible Museum, thus ensuring that the platform will serve user needs in the best possible way and provide an engaging experience both for content creators and content consumers. In particular, the following evaluation iterations were carried out: (a) heuristic evaluation of the designed mockups, applied iteratively from the first set of mockups that were designed until the final extensive set of mockups, (b) group inspection of the final implemented mockups, ensuring that they are usable for the target users, and (c) cognitive walkthrough carried out on the implemented system, to assess whether users will know what to do at each step of the interaction. Several conclusions were drawn concerning the design of virtual museum environments, addressing not only cultural institutions but individual content creators as well. Future evaluation efforts will target larger numbers of end-users, including professional and non-professional content creators, as well as museum visitors.
With regard to future improvements, further emphasis will be placed on the experiential part of the visit, focusing both on AR augmentation of physical exhibitions and more immersive representations empowered by multichannel audio and interactive narrations. Emphasis will also be given to the design and development of hybrid exhibition tours that combine physical exhibitions with AR augmentation enhanced with digital artifacts from virtual ones.
The outcomes of this research work will be exploited in the context of the reformulation of the virtual exhibitions provided by the Historical Museum of Crete, such as the Ethnographic Collection, with representative items mainly from the 19th and the 20th century, collected from villages across the island, particularly East and Central Crete.

Author Contributions: Conceptualization, N.P., E.Z. and C.S.; methodology, S.N.; software, A.D., S.K., E.N., A.X., Z.P., A.M. and M.F.; visualization, validation, S.N. and A.N.; investigation, N.P.; writing—original draft preparation, E.Z., E.K., I.Z., Z.P., A.D. and E.N.; writing—review and editing, N.P., S.N. and E.Z.; supervision, E.Z.; project administration, E.Z. All authors have read and agreed to the published version of the manuscript.
Funding: This work has been conducted in the context of the Unveiling the Invisible Museum research project (http://invisible-museum.gr), and has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH–CREATE–INNOVATE (project code: T1EDK-02725).
Institutional Review Board Statement: The study was approved by the Ethics Committee of the Foundation for Research and Technology–Hellas (Approval date: 12 April 2019/Reference number: 40/12-4-2019).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Data is contained within the article.
Acknowledgments: The authors would like to thank: (a) Eleftherios Fthenos, Undergraduate Student | University of Crete, for his contribution in the design process of the Exhibition Designer and the Exhibition Viewer, (b) Michalis Sifakis, Content Editor | FORTH, for his contribution in thematic areas classification, (c) Argyrw Petraki, Philologist | FORTH, for her contribution in content translation, (d) Manolis Apostolakis, Visual Artist | Alfa3 art workshop, for his kind offer of 3D reconstructed artworks, and, (e) Aggeliki Mpaltatzi, Curator, Ethnographic Collections | Head of Communications in Historical Museum of Crete, for her contribution in the realization of the virtual exhibition «Traditional Cretan House», part of the platform's demo video presentation https://youtu.be/5nJ5Cewqngc. The authors also would like to thank all the employees of the Historical Museum of Crete, as well as all end-users who participated in the co-creation and evaluation of the Invisible Museum platform, providing valuable feedback and insights.
Conflicts of Interest: The authors declare no conflict of interest.

References 1. Schweibenz, W. The “Virtual Museum”: New Perspectives for Museums to Present Objects and Information Using the Internet as a Knowledge Base and Communication System. In Proceedings of the Knowledge Management und Kommunikationssysteme, Workflow Management, Multimedia, Knowledge Transfer, Prague, Czech Republic, 3–7 November 1998; pp. 185–200. 2. Ferdani, D.; Pagano, A.; Farouk, M. Terminology, Definitions and Types for Virtual Museums. V-Must.net del. Collections. 2014. Available online: https://www.academia.edu/6090456/Terminology_definitions_and_types_of_Virtual_Museums (accessed on 28 October 2018). 3. Partarakis, N.; Grammenos, D.; Margetis, G.; Zidianakis, E.; Drossis, G.; Leonidis, A.; Metaxakis, G.; Antona, M.; Stephanidis, C. Digital Cultural Heritage Experience in Ambient Intelligence. In Mixed Reality and Gamification for Cultural Heritage; Ioannides, M., Magnenat-Thalmann, N., Papagiannakis, G., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 473–505. 4. Forte, M.; Siliotti, A. Virtual Archaeology: Great Discoveries Brought to Life through Virtual Reality; Thames and Hudson: , UK, 1997. 5. Google Arts & Culture. Available online: https://artsandculture.google.com/ (accessed on 21 December 2020). 6. Inventing Europe: European Digital Science & Technology Museum. Available online: http://www.inventingeurope.eu/ (accessed on 21 December 2020). 7. Ontdek de wereld in het Museon. Available online: https://www.museon.nl/nl (accessed on 21 December 2020). 8. Kiourt, C.; Koutsoudis, A.; Pavlidis, G. DynaMus: A fully dynamic 3D virtual museum framework. J. Cult. Herit. 2016, 22, 984–991. [CrossRef] 9. Tsita, C.; Drosou, A.; Karageorgopoulou, A.; Tzovaras, D. The Scan4Reco Virtual Museum. In Proceedings of the 14th annual EuroVR conference, Laval, France, 12–14 December 2017. 10. Giangreco, I.; Sauter, L.; Parian, M.A.; Gasser, R.; Heller, S.; Rossetto, L.; Schuldt, H. VIRTUE: A virtual reality museum Experience. In Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion (IUI 19), Marina del Ray, CA, USA, 17–20 March 2019; pp. 119–120. 11. O’hOisin, N.; O’Malley, B. The Medieval Dublin Project: A Case Study. Virtual Archaeol. Rev. 2010, 1, 45–49. [CrossRef] 12. Carrozzino, M.; Bergamasco, M. Beyond virtual museums: Experiencing immersive virtual reality in real museums. J. Cult. Herit. 2010, 11, 452–458. [CrossRef] 13. Scherp, A.; Franz, T.; Saathoff, C.; Staab, S. F—A model of events based on the foundational ontology dolce+DnS ultralight. In Proceedings of the fifth international conference on Knowledge capture (K-CAP 09), Redondo Beach, CA, USA, 1–4 September 2009; pp. 137–144. 14. ISO 21127:2014: Information and documentation—A reference ontology for the interchange of cultural heritage information. Available online: https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/05/78/57832.html (accessed on 21 December 2020). 15. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; van de Sompel, H. The Europeana Data Model (EDM). In Proceedings of the In World Library and Information Congress: 76th IFLA general conference and assembly, Gothenburg, Sweden, 10–15 August 2010; pp. 10–15. 16. Mani, I. Computational Modeling of Narrative. Synth. Lect. Hum. Lang. Technol. 2012, 5, 1–142. [CrossRef] 17. Raimond, Y.; Abdallah, S. The Event Ontology. Available online: http://motools.sourceforge.net/event/event.html (accessed on 24 December 2020). 18. Shaw, R.; Troncy, R.; Hardman, L. 
LODE: Linking Open Descriptions of Events. In Proceedings of the Fourth Asian Semantic Web Conference (ASWC 2009), Shanghai, China, 6–9 December 2009; pp. 153–167. 19. Doerr, M. The CIDOC Conceptual Reference Module: An Ontological Approach to Semantic Interoperability of Metadata. AIMag 2003, 24, 75. [CrossRef] 20. Lagoze, C.; Hunter, J. The ABC Ontology and Model. In Proceedings of the International Conference on Dublin Core and Metadata Applications, Tokyo, Japan, 24–26 October 2001; pp. 160–176. 21. Fernie, K.; Griffiths, J.; Stevenson, M.; Clough, P.; Goodale, P.; Hall, M.; Archer, P.; Chandrinos, K.; Agirre, E.; de Lacalle, O.L.; et al. PATHS: Personalising access to cultural heritage spaces. In Proceedings of the 18th International Conference on Virtual Systems and Multimedia, Milan, Italy, 2–5 September 2012; pp. 469–474. 22. van den Akker, C.M.; van Erp, M.G.J.; Aroyo, L.M.; Segers, R.; van der Meij, L.; Schreiber, G.; Legêne, S. Understanding Objects in Online Museum Collections by Means of Narratives. In Proceedings of the Third Workshop on Computational Models of Narrative, Istanbul, Turkey, 26–27 May 2012. 23. Wolff, A.; Mulholland, P.; Collins, T. Storyspace: A story-driven approach for creating museum narratives. In Proceedings of the 23rd ACM conference on Hypertext and social media (HT 12), Milwaukee, WI, USA, 25–28 June 2012; pp. 89–98. 24. Meghini, C.; Bartalesi, V.; Metilli, D.; Partarakis, N.; Zabulis, X. Mingei Ontology. Available online: https://zenodo.org/record/3742829#.YBK2ExZS9PY (accessed on 2 February 2021). 25. Zabulis, X.; Meghini, C.; Partarakis, N.; Beisswenger, C.; Dubois, A.; Fasoula, M.; Nitti, V.; Ntoa, S.; Adami, I.; Chatziantoniou, A.; et al. Representation and Preservation of Heritage Crafts. Sustainability 2020, 12, 1461. [CrossRef] 26. Chiu, C.; Sainath, T.N.; Wu, Y.; Prabhavalkar, R.; Nguyen, P.; Chen, Z.; Kannan, A.; Weiss, R.J.; Rao, K.; Gonina, E.; et al. State-of-the-Art Speech Recognition with Sequence-to-Sequence Models. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 4774–4778.

27. Jaf, S.; Calder, C. Deep Learning for Natural Language Parsing. IEEE Access 2019, 7, 131363–131373. [CrossRef]
28. Lyu, C.; Titov, I. AMR Parsing as Graph Prediction with Latent Alignment. arXiv 2018, arXiv:1805.05286. Available online: https://arxiv.org/abs/1805.05286 (accessed on 2 February 2021).
29. Gildea, D.; Jurafsky, D. Automatic Labeling of Semantic Roles. Comput. Linguist. 2002, 28, 245–288. [CrossRef]
30. Exner, P.; Nugues, P. Using Semantic Role Labeling to Extract Events from Wikipedia. In Proceedings of the Detection, Representation, and Exploitation of Events in the Semantic Web (DeRiVE 2011), Bonn, Germany, 23 October 2011; pp. 38–47.
31. Choi, D.; Kim, E.-K.; Shim, S.-A.; Choi, K.-S. Intrinsic Property-based Taxonomic Relation Extraction from Category Structure. In Proceedings of the 6th Workshop on Ontologies and Lexical Resources, Beijing, China, 23–27 August 2010; pp. 48–57.
32. Girju, R.; Badulescu, A.; Moldovan, D. Learning Semantic Constraints for the Automatic Discovery of Part-Whole Relations. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 03), Edmonton, AB, Canada, 27 May–1 June 2003; pp. 80–87.
33. Gangemi, A.; Presutti, V.; Reforgiato Recupero, D.; Nuzzolese, A.G.; Draicchio, F.; Mongiovì, M. Semantic Web Machine Reading with FRED. Semant. Web 2017, 8, 873–893. [CrossRef]
34. Flanigan, J.; Dyer, C.; Smith, N.A.; Carbonell, J. CMU at SemEval-2016 Task 8: Graph-based AMR Parsing with Infinite Ramp Loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), San Diego, CA, USA, 16–17 June 2016; pp. 1202–1206.
35. Foundation of the Hellenic World. Available online: http://www.ime.gr/ (accessed on 21 December 2020).
36. Cortona3D Viewers. Available online: http://www.cortona3d.com/en/products/authoring-publishing-solutions/cortona3d-viewers (accessed on 21 December 2020).
37. Kersten, T.P.; Tschirschwitz, F.; Deggim, S. Development of a virtual museum including a 4D presentation of building history in virtual reality. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W3, 361–367. [CrossRef]
38. VIVE United States: Discover Virtual Reality Beyond Imagination. Available online: https://www.vive.com/us/ (accessed on 21 December 2020).
39. Oculus Quest 2. Available online: https://www.oculus.com/quest-2/ (accessed on 21 December 2020).
40. Web3D Consortium: Open Standards for Real-Time 3D Communication. Available online: https://www.web3d.org/ (accessed on 21 December 2020).
41. Sinclair, P.A.S.; Martinez, K.; Millard, D.E.; Weal, M.J. Augmented reality as an interface to adaptive hypermedia systems. New Rev. Hypermedia Multimed. 2003, 9, 117–136. [CrossRef]
42. Goodall, S.; Lewis, P.; Martinez, K.; Sinclair, P.; Addis, M.; Lahanier, C.; Stevenson, J. Knowledge-based exploration of multimedia museum collections. In Proceedings of the European Workshop on the Integration of Knowledge, Semantics and Digital Media Technology (EWIMT), London, UK, 25–26 November 2004.
43. Hughes, C.E.; Stapleton, C.B.; Hughes, D.E.; Smith, E.M. Mixed reality in education, entertainment, and training. IEEE Comput. Graph. Appl. 2005, 25, 24–30. [CrossRef] [PubMed]
44. Museum het Rembrandthuis. Available online: https://www.rembrandthuis.nl/ (accessed on 21 December 2020).
45. Barnes, M.; Levy Finch, E. COLLADA–Digital Asset Schema Release 1.5.0. Available online: https://www.khronos.org/files/collada_spec_1_5.pdf (accessed on 27 December 2020).
46. glTF Overview: The Khronos Group Inc. Available online: https://www.khronos.org/gltf/ (accessed on 27 December 2020).
47. OpenSceneGraph-3.6.5 Released. Available online: http://www.openscenegraph.org/index.php/8-news/238-openscenegraph-3-6-5-released (accessed on 21 December 2020).
48. Unity Technologies. Unity Real-Time Development Platform: 3D, 2D VR & AR Engine. Available online: https://unity.com/ (accessed on 21 December 2020).
49. Second Life: Virtual Worlds, Virtual Reality, VR, Avatars, Free 3D Chat. Available online: https://www.secondlife.com/ (accessed on 21 December 2020).
50. Looser, J.; Grasset, R.; Seichter, H.; Billinghurst, M. OSGART–A Pragmatic Approach to MR. In Proceedings of the 5th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 06), Santa Barbara, CA, USA, 22–25 October 2006.
51. Partarakis, N.; Antona, M.; Zidianakis, E.; Stephanidis, C. Adaptation and Content Personalization in the Context of Multi User Museum Exhibits. In Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI 2016), Bari, Italy, 7–10 June 2016.
52. Partarakis, N.; Klironomos, I.; Antona, M.; Margetis, G.; Grammenos, D.; Stephanidis, C. Accessibility of Cultural Heritage Exhibits. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction (UAHCI 2016), Toronto, ON, Canada, 17–22 July 2016; pp. 444–455.
53. Amato, F.; Moscato, V.; Picariello, A.; Sperlì, G. Recommendation in Social Media Networks. In Proceedings of the 2017 IEEE Third International Conference on Multimedia Big Data (BigMM), Laguna Hills, CA, USA, 19–21 April 2017. [CrossRef]
54. Moscato, V.; Picariello, A.; Sperli, G. An emotional recommender system for music. IEEE Intell. Syst. 2020, 1. [CrossRef]
55. ISO 9241-210:2019: Ergonomics of Human-System Interaction—Part 210: Human-Centred Design for Interactive Systems. Available online: https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/07/75/77520.html (accessed on 18 December 2020).
56. Giacomin, J. What Is Human Centred Design? Des. J. 2014, 17, 606–623. [CrossRef]
57. Schmidt, C. The analysis of semi-structured interviews. In A Companion to Qualitative Research; Flick, U., von Kardoff, E., Steinke, I., Eds.; SAGE Publications Ltd.: Newcastle upon Tyne, UK, 2004.
58. Prahalad, C.K.; Ramaswamy, V. Co-creation experiences: The next practice in value creation. J. Interact. Mark. 2004, 18, 5–14. [CrossRef]
59. Armour, F.; Miller, G. Advanced Use Case Modeling: Software Systems; Pearson Education: Upper Saddle River, NJ, USA, 2000.
60. Nielsen, J.; Mack, R.L. Heuristic Evaluation. In Usability Inspection Methods; John Wiley & Sons: Hoboken, NJ, USA, 1994.
61. Følstad, A. The effect of group discussions in usability inspection: A pilot study. In Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges (NordiCHI '08), Lund, Sweden, 20–22 October 2008; pp. 467–470.
62. Mahatody, T.; Sagar, M.; Kolski, C. State of the Art on the Cognitive Walkthrough Method, Its Variants and Evolutions. Int. J. Hum. Comput. Interact. 2010, 26, 741–785. [CrossRef]
63. General Data Protection Regulation–EU 2016/679. Available online: https://gdpr-info.eu/ (accessed on 18 December 2020).
64. Pohl, K. Requirements Engineering: Fundamentals, Principles, and Techniques, 1st ed.; Springer Publishing Company: New York, NY, USA, 2010.
65. Spikol, D.; Milrad, M.; Maldonado, H.; Pea, R. Integrating Co-design Practices into the Development of Mobile Science Collaboratories. In Proceedings of the 2009 Ninth IEEE International Conference on Advanced Learning Technologies, Riga, Latvia, 15–17 July 2009; pp. 393–397.
66. Sanders, E.B.-N.; Stappers, P.J. Co-creation and the new landscapes of design. CoDesign 2008, 4, 5–18. [CrossRef]
67. Cloud Translation. Available online: https://cloud.google.com/translate?hl=el (accessed on 28 December 2020).
68. The Most Popular Database for Modern Apps. Available online: https://www.mongodb.com (accessed on 28 December 2020).
69. spaCy: Industrial-strength Natural Language Processing in Python. Available online: https://spacy.io/ (accessed on 28 December 2020).
70. Neo4j Graph Platform–The Leader in Graph Databases. Available online: https://neo4j.com/ (accessed on 28 December 2020).
71. MinIO: Kubernetes Native, High Performance Object Storage. Available online: https://min.io/ (accessed on 28 December 2020).
72. Angular. Available online: https://angular.io/ (accessed on 30 December 2020).
73. A-Frame–Make WebVR. Available online: https://aframe.io (accessed on 28 December 2020).
74. Immersive Web Developer Home. Available online: https://immersiveweb.dev/ (accessed on 28 December 2020).
75. Welcome to Python.org. Available online: https://www.python.org/ (accessed on 1 February 2021).
76. Blender Foundation. blender.org–Home of the Blender project–Free and Open 3D Creation Software. Available online: https://www.blender.org/ (accessed on 1 February 2021).
77. Bligård, L.-O.; Osvalder, A.-L. An Analytical Approach for Predicting and Identifying Use Error and Usability Problem. In Proceedings of the Symposium of the Austrian HCI and Usability Engineering Group (USAB 2007), Graz, Austria, 22 November 2007; pp. 427–440.