CARL: A Language for Modelling Contextual Augmented Reality Environments

Dariusz Rumiński, Krzysztof Walczak

Poznań University of Economics
Niepodległości 10, 61-875 Poznań, Poland
{ruminski, walczak}@kti.ue.poznan.pl

Abstract. The paper describes a novel, declarative language that enables modelling ubiquitous, contextual and interactive augmented reality environments. The language, called CARL – Contextual Augmented Reality Language – is highly componentised with regard to both the structure of AR scenes and the presented AR content. This enables dynamic composition of CARL presentations based on various data sources and depending on the context. CARL separates the specification of three categories of entities constituting an AR environment – trackable markers, content objects and interfaces – which makes the language more flexible and particularly well suited to building collective awareness systems based on ubiquitous AR-based information visualisation.

Keywords: Augmented reality, AR, AR services, contextual services, CARL

1 Introduction

Augmented reality (AR) technology enables superimposing rich computer-generated content, such as interactive animated 3D objects and multimedia objects, in real time on a view of a real environment. Augmented reality enables building advanced localised information visualisation systems and therefore forms a solid basis for the development of collective awareness systems. Widespread use of AR technology has been enabled in recent years by remarkable progress in consumer-level hardware performance, in particular in the computational and graphical performance of mobile devices, and by quickly growing mobile network bandwidth. Education, entertainment, e-commerce and tourism are notable examples of application domains in which AR-based systems are becoming increasingly used.

There are a variety of tools available for authoring AR content and applications. These tools range from general-purpose computer vision and graphics libraries, which require advanced programming skills to develop applications, to easy-to-use point-and-click packages for mobile devices, which enable the creation of simple AR presentations. However, the existing tools are designed for manual authoring of AR presentations – either through programming or visual design.
To enable more widespread use of AR technology for visualisation of different kinds of up-to-date data, needed in collective awareness systems, a different paradigm is required. The interactive presentations must be created automatically based on the available data sources, through selection of data and automatic composition of AR scenes. A key element enabling selection of information for visualisation is context, which incorporates aspects such as user location, time, privileges, preferences, device capabilities, environmental conditions and others. To enable meaningful contextual selection and automatic composition of non-trivial interactive AR interfaces, semantic web technologies may be used.

In this paper, we present the foundations of a novel high-level programming language, called CARL – Contextual Augmented Reality Language – which enables building ubiquitous, contextual and interactive AR presentations. In the presented approach, the AR content presented to users is created by selecting and merging content and rules originating from different service providers, without the necessity to explicitly switch from one provider to another. Moreover, in CARL, the rules for tracked markers (where the information will appear), content objects (what will be presented) and interfaces (how the information will be accessible) are separated, enabling flexible composition of presentations that meet real business requirements.
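To make this separation more concrete, a minimal sketch follows, expressed in Python rather than in CARL. It is purely illustrative and does not show CARL syntax, the CARL runtime, or any actual API of the language; all identifiers in it (Context, Trackable, ContentObject, Interface, compose_presentation) are hypothetical names introduced solely for this example. The sketch only captures the idea that trackables (where), content objects (what) and interfaces (how) are described independently and combined into a presentation depending on the current context.

```python
# Purely illustrative sketch, NOT actual CARL syntax or a CARL API.
# It only demonstrates the idea of keeping trackables ("where"),
# content objects ("what") and interfaces ("how") separate, and of
# composing a presentation from them depending on the current context.

from dataclasses import dataclass


@dataclass(frozen=True)
class Context:
    # Hypothetical context descriptor covering aspects named in the text.
    location: str
    time_of_day: str
    privileges: frozenset = frozenset()
    device: str = "mobile"


@dataclass(frozen=True)
class Trackable:            # where the information will appear
    marker_id: str


@dataclass(frozen=True)
class ContentObject:        # what will be presented
    name: str
    required_privilege: str = ""   # empty string means publicly visible


@dataclass(frozen=True)
class Interface:            # how the information will be accessible
    interaction: str        # e.g. "tap-to-open", "voice", "gaze"


def compose_presentation(context, trackables, contents, interfaces):
    """Select the content objects permitted in the given context and bind
    each of them to every trackable, using an interface that matches the
    user's device."""
    visible = [c for c in contents
               if not c.required_privilege
               or c.required_privilege in context.privileges]
    ui = interfaces.get(context.device, Interface("tap-to-open"))
    return [(t, c, ui) for t in trackables for c in visible]


if __name__ == "__main__":
    ctx = Context(location="museum-hall-1", time_of_day="afternoon",
                  privileges=frozenset({"visitor"}))
    scene = compose_presentation(
        ctx,
        trackables=[Trackable("poster-42")],
        contents=[ContentObject("3d-exhibit-model"),
                  ContentObject("curator-notes", required_privilege="staff")],
        interfaces={"mobile": Interface("tap-to-open")},
    )
    for trackable, content, ui in scene:
        print(trackable.marker_id, content.name, ui.interaction)
```

In CARL itself, these three categories of entities are specified declaratively and composed automatically by the presentation environment, as discussed in the following sections.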
The remainder of this paper is organised as follows. Section 2 presents related work in the context of building AR environments, in particular focusing on the use of AR for building collective awareness systems. Later, in Section 3, the concept of building ubiquitous contextual AR presentations is explained. In Section 4, the CARL language is presented. In Section 5, the current implementation of CARL is outlined. Finally, in the last section, conclusions and directions for future research are presented.

2 AR Environments and Collective Awareness Systems

A number of research works have been devoted to the problem of modelling and building AR environments combining the view of real-world objects with multimedia content. One of the well-known toolkits that supports rapid design and implementation of AR applications is DART – the Designer's Augmented Reality Toolkit [1]. Many AR systems are based on the ARToolKit library, which, however, requires advanced programming skills and technical knowledge to create AR applications [2]. GUI-based authoring applications that enable mixing virtual scenes with the real world have been presented in [3-8]. These systems provide user-friendly functionality for placing virtual objects in mixed reality scenes without coding and can be used for building AR environments, but they target only desktop computer environments and do not provide functionality to build open, interoperable and dynamic AR systems.

Since mobile devices have gained popularity, a number of mobile AR toolkits and applications have been developed. For example, ComposAR-Mobile is a cross-platform content authoring tool for mobile phones and PCs based on an XML scene description [9]. Lee and Seo have presented the U-CAFÉ framework, which can be used to build ubiquitous AR environments [10]. It supports contextual presentation and collaboration between users on various devices. However, it does not provide means for combining content from different services and modelling user interactions within the AR presentations.

Stiktu is a mobile AR authoring application that enables attaching (“sticking”) simple content, such as text messages and images, to particular views of real places [11]. It enables users to express and exchange opinions with other users who have created virtual objects. The main limitation of this application is that it does not allow the creation of complex objects, such as interactive 3D models. Aurasma is an application that enables augmenting arbitrary real-world views using a mobile device's camera and then sharing the created AR content with other users [12]. Layar, Junaio and Wikitude are other examples of AR applications that enable users to share AR content [13-15]. These platforms are good examples of collective awareness systems, in which users contribute to “augmenting” the real world with virtual objects. The growing popularity of such applications indicates that there is public interest in new forms of information visualisation, even if the functionality of these systems is still highly limited.

Several studies have been performed in the domain of declarative languages for building virtual environments. InTml is an XML-based language for describing 3D interaction techniques and hardware platforms in VR applications [16]. MPML-VR enables describing multimodal presentations in 3D virtual spaces [17]. VR-BML is a high-level XML-based behaviour programming language developed as part of the Beh-VR approach [18]. These languages, however, have been designed for 3D/VR applications without taking into account AR-specific functionality.

A number of high-level languages have also been designed specifically for building AR applications. APRIL is an XML-based scripting language for authoring AR presentations within the Studierstube framework [19]. SSIML/AR is a language for the structured specification and development of AR applications [20]. AREL is a JavaScript binding of the Metaio/Junaio SDK API allowing development of AR applications based on web technologies, such as HTML5, XML and JavaScript [21]. MR-ISL is a high-level language for defining mixed reality interaction scenarios [22].

A common motivation for developing novel platforms and declarative AR languages is to simplify the process of programming AR applications. However, the existing languages are not intended for building dynamic contextual AR information visualisation systems. In such systems, different types of data must be retrieved in real time from distributed sources and automatically composed into interactive AR scenes.

3 Contextual Augmented Reality Environments

Existing AR platforms