ISMAR 2019 The 18th IEEE International Symposium on Mixed and Augmented Reality

Oct 14-18, 2019

SUMMARY
ISMAR 2019 Sponsors | 01
Welcome to IEEE ISMAR 2019 | 02
Space Layout | 02
Public Transportation | 03
Conference Committee | 04
Program | 06
Keynote Speakers | 08
Paper Session | 11
Poster Teasers Group | 14
Demonstrations | 18
Workshops | 19
Tutorial | 22
Doctoral Consortium | 23
SLAM-for-AR Challenge | 24
Industry Session | 24
Olympic Village Night | 25

ISMAR 2019 Sponsors

THANKS TO OUR SPONSORS

DIAMOND

PLATINUM

GOLD

SILVER

BRONZE

SUPPORT UNITS

Welcome to IEEE ISMAR 2019

It is our great pleasure to welcome you to the 18th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2019). ISMAR 2019 will be held on October 14-18 in Beijing, the capital of China and a city with a great history, outstanding beauty, and delicious cuisine. This is the first time that ISMAR has come to China in its history of more than 20 years. With the increasing interest in, and significance of, research on mixed and augmented reality, there is a growing demand for international exchange and collaboration. Over the years, ISMAR has established itself as the premier academic conference in the field.

Cutting-edge technologies and applications will be discussed at ISMAR 2019, which will include 50 oral presentations, 109 poster presentations, 24 demonstrations, 1 SLAM challenge, 1 doctoral consortium, 6 workshops, 3 tutorials, 9 exhibitors, 12 sponsors, and more than 400 participants. We are very excited to have invited three visionary speakers to give keynote talks: Academician Wen Gao of Peking University, Prof. Xiaoou Tang of the Chinese University of Hong Kong, and Dr. techn. Dieter Schmalstieg of Graz University of Technology.

We are extremely grateful to the numerous volunteers and sponsors, including the organizing and program committee members and reviewers, IEEE, the Beijing Society of Image and Graphics, Beijing Institute of Technology, Beihang University, and many corporate supporters, who have made this conference possible. We would also like to express our sincere gratitude to the National Natural Science Foundation of China and the Beijing Association for Science and Technology for their invaluable support.

We hope that each and every participant will find ISMAR 2019 to be an engaging, inspiring, insightful, informative, and, last but not least, enjoyable conference!

Qinping Zhao, Beihang University
Yongtian Wang, Beijing Institute of Technology
Henry B.L. Duh, La Trobe University

Space Layout

Public Transportation

From Beijing Capital International Airport (35 km)
- By airport bus: Take airport bus line 4 (Gongzhufen line, 06:00-02:00), about 30 minutes / 7 stops. Then walk 720 metres to the Friendship Hotel (about 11 minutes).
- By taxi: About 50 minutes; costs about 120 RMB.
- By subway: Take the airport subway line to Sanyuanqiao, transfer to Metro Line 10 to Haidian Huangzhuang, and then transfer to Metro Line 4 (Tiangongyuan direction). Get off at Renmin University and take southwest exit D. Walk 287 meters to the Friendship Hotel (about 5 minutes). Cost 30 RMB.

From Beijing Daxing International Airport (59 km)
- By airport bus: Take the airport bus line between the airport and Beijing South Railway Station, then follow the directions under Beijing South Railway Station to transfer to Metro Line 4, or take a taxi.
- By taxi: About 80 minutes; costs about 260 RMB.
- By subway: Take the new airport subway line to Caoqiao, transfer to Metro Line 10 to Haidian Huangzhuang, and transfer to Metro Line 4 (Tiangongyuan direction). Get off at Renmin University and take southwest exit D. Walk 287 meters to the Friendship Hotel (about 5 minutes). Cost 35 RMB.

From Beijing West Railway Station (12 km)
- By taxi: About 25 minutes; costs about 50 RMB.
- By subway: Take Metro Line 9 to National Library and transfer to Metro Line 4 (Anheqiao North direction). Get off at Renmin University and take southwest exit D. Walk 287 meters to the Friendship Hotel (about 5 minutes). Cost 4 RMB.

From Beijing South Railway Station (20 km)
- By taxi: About 35 minutes; costs about 75 RMB.
- By subway: Take Metro Line 4 (Anheqiao North direction). Get off at Renmin University and take southwest exit D. Walk 287 meters to the Friendship Hotel (about 5 minutes). Cost 5 RMB.

Conference Committee

General Chairs
Qinping Zhao, Beihang University, China
Yongtian Wang, Beijing Institute of Technology, China
Henry B.L. Duh, La Trobe University, Australia

Science & Technology Chairs
Shimin Hu, Tsinghua University, China
Joseph L. Gabbard, Virginia Tech, USA
Jens Grubert, Coburg University of Applied Sciences and Arts, Germany
Stefanie Zollmann, University of Otago, New Zealand

Exhibition & Demos Chairs
Dongdong Weng, Beijing Institute of Technology, China
Yong Hu, Beihang University, China
Yihua Bao, Beijing Film Academy, China

Finance Chair
Yue Liu, Beijing Institute of Technology, China

Registration Chair
Jinyuan Jia, Tongji University, China

Local Organization Chair
Xun Luo, Tianjin University of Technology, China

Workshops/Tutorials Chairs
Bin Zhou, Beihang University, China
Xubo Yang, Shanghai Jiao Tong University, China
Maki Sugimoto, Keio University, Japan

Doctoral Forum Chairs
Kai Xu, National University of Defense Technology, China
Feng Xu, Tsinghua University, China
Christian Richardt, University of Bath, UK

Posters Chairs
Yanwen Guo, Nanjing University, China
Xueying Qin, Shandong University, China
Andres Navarro, ICESI University, Colombia
Stephanie Maxwell, Rochester Institute of Technology, USA

SLAM-for-AR Challenge Chairs
Guofeng Zhang, Zhejiang University, China
Jing Chen, Beijing Institute of Technology, China
Guoquan Huang, University of Delaware, USA

Publications Chair
Lili Wang, Beihang University, China

TVCG Liaison
Dieter Schmalstieg, Graz University of Technology, Austria

Student Volunteer Chair
Han Jiang, Beihang University, China

Pitch-Your-Lab Session Chair
Weitao Song, Nanyang Technological University, Singapore

Sponsorship Chairs
Yue Liu, Beijing Institute of Technology, China
Christine Perey, PEREY Research & Consulting
Zhi Wang, CVRVT, China
Mark Billinghurst, University of South Australia, Australia

Publicity Chairs
Junxiao Xue, Zhengzhou University, China
Xiaohui Yang, University, China

Student-Fair Session Chair
Mark Billinghurst, University of South Australia, Australia

Olympic Night Chairs
Dewen Cheng, Beijing Institute of Technology, China
Miao Wang, Beihang University, China
Yunji Chen, Institute of Computing Technology, Chinese Academy of Science, China

Video Chairs
Ling Zou, Beijing Film Academy, China
Edwin Lobo, Tianjin University of Technology, China

Web Chair
Xingce Wang, Beijing Normal University, China

Accessibility Chair
Fengjun Zhang, Institute of Software, Chinese Academy of Sciences, China

Steering Committee Members
Ron Azuma, Mark Billinghurst, Wolfgang Broll, Henry Fuchs, Hirokazu Kato, Gudrun Klinker, Anatole Lecuyer, Rob Lindeman, Walterio Mayol-Cuevas, Guillaume Moreau, Hideo Saito, Christian Sandor, Julian Stadon, Dieter Schmalstieg, J. Edward Swan II, Haruo Takemura, Bruce Thomas

Emeritus Members
Reinhold Behringer, Oliver Bimber, Jay Bolter, Henry B.L. Duh, Tom Drummond, Steve Feiner, Raphael Grasset, Tobias Hoellerer, Simon Julier, Kiyoshi Kiyokawa, Heedong Ko, Vincent Lepetit, Mark Livingston, David Mizell, Stefan Mueller, Nassir Navab, Jun Park, Holger Regenbrecht, Gerhard Reitmayr, Jannick Rolland, Didier Stricker, Hideyuki Tamura, Greg Welch, Sean White, Martin Wiedmer, Hiroyuki Yamamoto, Naokazu Yokoya

Science & Technology Committee Members

America
Evan Barba, Georgetown University, USA
Gerd Bruder, University of Central Florida, USA
Adam Jones, University of Mississippi, USA
Ryan McMahan, UT Dallas, USA
Ann McNamara, Texas A&M University, USA
Tabitha Peck, Davidson College, USA
Rafael Radkowski, Iowa State University, USA
Evan Suma Rosenberg, University of Minnesota, USA

Asia-Pacific
Dewen Cheng, Beijing Institute of Technology, China
Kai Kunze, Keio University, Japan
Robert Lindeman, Human Interface Technology Lab, New Zealand
Alexander Plopski, Nara Institute of Science and Technology, Japan
Hideo Saito, Keio University, Japan
Lili Wang, Beihang University, China
Yoshihiro Watanabe, Tokyo Institute of Technology, Japan
Guofeng Zhang, Zhejiang University, China

Europe
Klen Čopič Pucihar, University of Primorska, Slovenia
Ulrich Eck, Technische Universität München, Germany
Michele Fiorentino, Politecnico di Bari, Italy
Per Ola Kristensson, University of Cambridge, UK
Guillaume Moreau, Université de Nantes, France
Alain Pagani, Technical University of Darmstadt, Germany
Marc Stamminger, Friedrich-Alexander-Universität, Germany
Frank Steinicke, Universität Hamburg, Germany
Ian Williams, Birmingham City University, UK

Doctoral Consortium Mentors
Yebin Liu, Tsinghua University, China
Dieter Schmalstieg, Graz University of Technology, Austria
Yuta Itoh, Tokyo Institute of Technology, Japan
Jens Grubert, Coburg University, Germany
Stefanie Zollmann, University of Otago, New Zealand
Xiang Cao, Xiaoxiaoniu Creative Technologies, China
Junjun Pan, Beihang University & Bournemouth University, China

Program

Day 1: Monday, Oct 14th (Friendship Palace, Friendship Hotel)

09:00 - 10:30
- Meeting Room 3: Doctoral Consortium
- Meeting Room 5: W3: Mixed Reality and Accessibility
- Meeting Room 7: W4: Augmenting Cities and Architecture with Immersive Technologies
- Meeting Room 9: W6: AR & MR Technology for Ubiquitous Educational Learning Experience
10:30 - 11:00 Coffee Break
11:00 - 12:30
- Meeting Room 3: Doctoral Consortium
- Meeting Room 5: W3: Mixed Reality and Accessibility
- Meeting Room 7: W4: Augmenting Cities and Architecture with Immersive Technologies
- Meeting Room 9: W6: AR & MR Technology for Ubiquitous Educational Learning Experience
12:30 - 13:30 Lunch Break
13:30 - 15:00
- Meeting Room 3: SLAM-for-AR Challenge
- Meeting Room 5: W3: Mixed Reality and Accessibility
- Meeting Room 7: T1: OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit
- Meeting Room 9: W6: AR & MR Technology for Ubiquitous Educational Learning Experience
15:00 - 15:30 Coffee Break
15:30 - 17:00
- Meeting Room 3: SLAM-for-AR Challenge
- Meeting Room 5: W3: Mixed Reality and Accessibility
- Meeting Room 7: T1: OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit
- Meeting Room 9: W6: AR & MR Technology for Ubiquitous Educational Learning Experience

Day 2: Tuesday, Oct 15th (Friendship Palace, Friendship Hotel), Juying Ballroom

09:00 - 09:30 Opening Session
09:30 - 10:30 Keynote 1: Academician Wen Gao
10:30 - 11:00 Coffee Break
11:00 - 12:15 S1: Tracking and Reconstruction
12:15 - 13:15 Lunch Break
13:15 - 14:45 S2: Modeling and Rendering
14:45 - 15:15 Coffee Break
15:15 - 16:30 S3: Acquisition and Manipulation
16:30 - 17:30 Poster Teasers Group
17:30 - 19:30 Poster Group A
19:30 - 21:00 Welcome Reception
10:00 - 19:30 EXHIBITION (Juxian Ballroom)

Day 3: Wednesday, Oct 16th (Friendship Palace, Friendship Hotel), Juying Ballroom

08:30 - 09:30 S4: Spatial Augmented Reality and Near Eye Displays
09:30 - 10:00 Coffee Break
10:00 - 11:00 Keynote 2: Prof. Xiaoou Tang
11:00 - 12:30 Industry Session
12:30 - 13:30 Lunch Break
13:30 - 14:45 S5: Perception and Presence
14:45 - 15:15 Coffee Break
15:15 - 16:30 S6: Locomotion
16:30 - 17:15 Pitch your Lab
18:30 - 20:30 Olympic Village Night (Bird's Nest)
10:00 - 17:30 Demos (Meeting Room 3)
10:00 - 17:30 EXHIBITION (Juxian Ballroom)

Day 4: Thursday, Oct 17th (Friendship Palace, Friendship Hotel), Juying Ballroom

08:30 - 09:45 S7: Multimodal and Long Term Usage
09:45 - 10:15 Coffee Break / Poster Group B
10:15 - 11:15 Keynote 3: Dr. techn. Dieter Schmalstieg
11:15 - 12:30 S8: Collaboration and Entertainment
12:30 - 13:30 Lunch Break / Poster Group B
13:30 - 14:45 S9: Selection and Text Entry
14:45 - 15:15 Coffee Break / Poster Group B
15:15 - 16:45 S10: Training and Learning
16:45 - 17:30 Closing Session
Evening: Banquet (seafood buffet)
10:00 - 15:30 Demos (Meeting Room 3)
10:00 - 15:30 EXHIBITION (Juxian Ballroom)

Day 5: Friday, Oct 18th (Friendship Palace, Friendship Hotel)

09:00 - 10:30
- Meeting Room 3: W1: Mixed/Augmented Reality and Mental Health
- Meeting Room 5: W2: Extended Reality for Good (XR4Good)
- Meeting Room 7: T3: Bridging the gap between research and practice in AR
10:30 - 11:00 Coffee Break
11:00 - 12:30
- Meeting Room 3: W1: Mixed/Augmented Reality and Mental Health
- Meeting Room 5: W2: Extended Reality for Good (XR4Good)
- Meeting Room 7: T3: Bridging the gap between research and practice in AR
12:30 - 13:30 Lunch Break
13:30 - 15:00
- Meeting Room 3: T2: Interaction Paradigms in MR – Lessons from Art
- Meeting Room 5: W2: Extended Reality for Good (XR4Good)
- Meeting Room 7: W5: XR-aided Design (XRAD): next generation of CAD tools
15:00 - 15:30 Coffee Break
15:30 - 17:00
- Meeting Room 3: T2: Interaction Paradigms in MR – Lessons from Art
- Meeting Room 5: W2: Extended Reality for Good (XR4Good)
- Meeting Room 7: W5: XR-aided Design (XRAD): next generation of CAD tools

(Lunches on Oct. 14-18 will be provided by the organizer on the first floor.)

Keynote 1: AVS3 -- A New Generation of Video Coding Standard for Super High Vision and VR/AR (Juying Ballroom, Oct 15th, 09:30 - 10:30)

Academician Wen Gao
Peking University

Abstract
The 4K video format is becoming the majority in the TV device market, and 8K is on its way, pushed by two major driving forces from industry. The first is the flat-panel industry, which keeps raising panel resolution; the second is the video broadcasting industry, which keeps making content richer, with super high vision, HDR, VR and AR. Whenever content becomes richer, the amount of data increases sharply, so a more efficient video coding standard is the key in this case. Video coding standards have played a key role in the TV industry over the last three decades: MPEG-1/2 created the digital TV industry from 1993, MPEG-4 AVC/H.264 and AVS+ supported the HDTV industry from 2003, and HEVC/H.265 and AVS2 have supported 4K UHD TV since 2016. Now 8K UHD is coming, which is supposed to support VR/AR, with 4-10 times the data size of the last generation. Therefore, a new generation of video coding standard is expected to be created for this new demand.
AVS3 is the third-generation video coding standard developed by the China Audio and Video Coding Standard Workgroup (AVS), which targets the emerging 8K UHD and VR/AR applications. The first phase of AVS3 was released in March 2019. Compared to the previous AVS2 and HEVC/H.265 standards, AVS3 can achieve about 30% bitrate saving. Recently, Hisilicon announced the world's first AVS3 8K video decoder chip at IBC2019, which supports 8K and 120 fps real-time decoding. This indicates the opening of a new era of 8K and immersive video experience. This talk will give a brief introduction to the AVS3 standard, including the development process, key techniques, and applications, and will report recent developments in video coding standards, specifically new algorithms for coding tools, VR/AR, and deep learning.

Biography
Wen Gao is a Boya Chair Professor at Peking University. He has also served as president of the China Computer Federation (CCF) since 2016. He received his Ph.D. degree in electronics engineering from the University of Tokyo in 1991. He was with the Harbin Institute of Technology from 1991 to 1995, and with the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS) from 1996 to 2005. He joined Peking University in 2006.
Prof. Gao works in the areas of multimedia and computer vision, on topics including video coding, video analysis, multimedia retrieval, face recognition, multimodal interfaces, and virtual reality. His most-cited contributions are model-based video coding and face recognition. He has published seven books, over 220 papers in refereed journals, and over 600 papers in selected international conferences. He is a fellow of the IEEE, a fellow of the ACM, and a member of the Chinese Academy of Engineering.

Keynote 2: AI + AR: Magic in the AIR (Juying Ballroom, Oct 16th, 10:00 - 11:00)

Prof. Xiaoou Tang
The Chinese University of Hong Kong

Abstract
With the rapid development of hardware and software technologies, Augmented Reality (AR) is maturing and becoming involved in more and more aspects of people's lives and work. The essential goal of AR is to fuse digital elements into the real world in a seamless way; to make this happen, understanding and then structuring the physical world becomes the prerequisite task. Today's AI technology makes it possible for us to comprehend the real world accurately. With the combination of AI and AR, brand-new, perceptually enriched experiences will be created.
In this talk, we take an ordinary working day of a SenseTime employee as an example to showcase how AI+AR is changing the way we live and work: from daily morning makeup to a human-morphic digital receptionist, and from AR-enabled route navigation to multi-player interactive gaming experiences. AI is enlarging the boundary of AR application scenarios while laying solid cornerstones for the further commercialization of AR.

Biography
Prof. Xiaoou Tang is the founder of SenseTime, a leading artificial intelligence (AI) company focused on computer vision and deep learning. Prof. Tang is also a professor at the Department of Information Engineering at the Chinese University of Hong Kong and the Associate Director of the Shenzhen Institute of Advanced Technology of the Chinese Academy of Science.
Professor Tang is an IEEE fellow, a general chair of ICCV 2019, and the Editor-in-Chief of the International Journal of Computer Vision (IJCV), one of the two leading journals on computer vision. He received the Best Paper Award at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in 2009 and the Outstanding Student Paper Award at AAAI in 2015. From 2005 to 2008, Prof. Tang was Director of Visual Computing at Microsoft Research Asia.
Professor Tang received a Ph.D. degree from the Massachusetts Institute of Technology in 1996. He holds an M.S. degree from the University of Rochester and a B.S. degree from the University of Science and Technology of China.

Keynote 3: A Research Agenda for Situated Visualization (Juying Ballroom, Oct 17th, 10:15 - 11:15)

Dr. techn. Dieter Schmalstieg
Institute of Computer Graphics and Vision, Graz University of Technology

Abstract
What could be the killer application for augmented reality? On the one hand, tech giants such as Microsoft and Apple make huge investments in augmented reality technology. On the other hand, there are very few commercial success stories about augmented reality applications beyond Pokemon Go. What are we missing? This talk will investigate the potential of situated visualization as a new use case for augmented reality technology. Visualization is now an established field, distinct from its enabling technology, computer graphics. It deals with the problem of helping the user understand vast and complex data, building on findings from perception and cognition to achieve its goals. However, visualization has largely been confined to desktop computing and has not really entered the realm of mobile computing. With augmented reality, we can situate visualizations in a real-world context. This talk will describe some initial explorations into situated visualization and attempt to define a research agenda for this important new direction.

Biography
Dieter Schmalstieg is full professor and head of the Institute of Computer Graphics and Vision at Graz University of Technology, Austria. His current research interests are augmented reality, virtual reality, computer graphics, visualization and human-computer interaction. He received Dipl.-Ing. (1993), Dr. techn. (1997) and Habilitation (2001) degrees from Vienna University of Technology. He is author and co-author of over 300 peer-reviewed scientific publications with over 17,000 citations and over twenty best paper awards and nominations. His organizational roles include associate editor in chief of IEEE Transactions on Visualization and Computer Graphics, associate editor of Frontiers in Robotics and AI, member of the steering committee of the IEEE International Symposium on Mixed and Augmented Reality, chair of the EUROGRAPHICS working group on Virtual Environments (1999-2010), key researcher of the K-Plus Competence Center for Virtual Reality and Visualization in Vienna, and key researcher of the Know-Center in Graz. In 2002, he received the START career award presented by the Austrian Science Fund. In 2012, he received the IEEE Virtual Reality technical achievement award for seminal contributions to the field of Augmented Reality. He was elected as a senior member of IEEE, a member of the Austrian Academy of Sciences, and a member of the Academia Europaea. In 2008, he founded the Christian Doppler Laboratory for Handheld Augmented Reality.

Paper Session

S1. Tracking and Reconstruction
(Oct 15th, 11:00 - 12:15, Juying Ballroom)
Session Chair: Rafael Radkowski

11:00 - 11:15 Hierarchical Topic Model Based Object Association for Semantic SLAM. Jianhua Zhang, Mengping Gui, Qichao Wang, Ruyu Liu, Junzhe Xu, and Shengyong Chen. (TVCG)
11:15 - 11:30 Towards SLAM-based Outdoor Localization using Poor GPS and 2.5D Building Models. Ruyu Liu, Jianhua Zhang, Shengyong Chen, and Clemens Arth.
11:30 - 11:45 Camera Relocalization with Ellipsoidal Abstraction of Objects. Vincent Gaudillière, Gilles Simon, and Marie-Odile Berger.
11:45 - 12:00 Efficient 3D Reconstruction and Streaming for Group-Scale Multi-Client Live Telepresence. Patrick Stotko, Stefan Krumpen, Michael Weinmann, and Reinhard Klein.
12:00 - 12:15 Tangible and Visible 3D Object Reconstruction in Augmented Reality. Yi-Chin Wu, Liwei Chan, and Wen-Chieh Lin.

S2. Modeling and Rendering
(Oct 15th, 13:15 - 14:45, Juying Ballroom)
Session Chair: Lili Wang

13:15 - 13:30 Real-Time View Planning for Unstructured Lumigraph Modeling. Okan Erat, Markus Höll, Karl Haubenwallner, Christian Pirchheim, and Dieter Schmalstieg. (TVCG)
13:30 - 13:45 3D Virtual Garment Modeling from RGB Images. Yi Xu, Shanglin Yang, Wei Sun, Li Tan, Kefeng Li, and Hui Zhou.
13:45 - 14:00 Spatially-Varying Diffuse Reflectance Capture Using Irradiance Map Rendering for Image-Based Modeling Applications. Kasper Skou Ladefoged, and Claus B. Madsen.
14:00 - 14:15 Augmented Environment Mapping for Appearance Editing of Glossy Surfaces. Takumi Kaminokado, Daisuke Iwai, and Kosuke Sato.
14:15 - 14:30 Coherent rendering of virtual smile previews with fast neural style transfer. Valentin Vasiliu, and Gábor Sörös.
14:30 - 14:45 Real-Time Mixed Reality Rendering for Underwater 360° Videos. Stephen Thompson, Andrew Chalmers, and Taehyun James Rhee.

S3. Acquisition and Manipulation
(Oct 15th, 15:15 - 16:30, Juying Ballroom)
Session Chair: Feng Xu

15:15 - 15:30 AR HMD Guidance for Controlled Hand-Held 3D Acquisition. Daniel Andersen, Peter Villano, and Voicu Popescu. (TVCG)
15:30 - 15:45 VR Props: An End to End Pipeline for Transporting Real Objects into Virtual and Augmented Environments. Catherine Taylor, Darren Cosker, Robin McNicholas, and Chris Mullany.
15:45 - 16:00 Manipulating 3D Anatomic Models in Augmented Reality: Comparing a Hands-Free Approach and a Manual Approach. Shirin Sadri, Shalva A. Kohen, Carmine Elvezio, Shawn H. Sun, Alon Grinshpoon, Gabrielle Loeb, Naomi Basu, and Steven Feiner.
16:00 - 16:15 DepthMove: Leveraging Head Motions in the Depth Dimension to Interact with Virtual Reality Head-Worn Displays. Difeng Yu, Hai-Ning Liang, Xueshi Lu, Tianyu Zhang, and Wenge Xu.
16:15 - 16:30 VPModel: High-Fidelity Product Simulation in a Virtual-Physical Environment. Xin Min, Wenqiao Zhang, Shouqian Sun, Nan Zhao, Siliang Tang, and Yueting Zhuang. (TVCG)

S4. Spatial Augmented Reality and Near Eye Displays
(Oct 16th, 08:30 - 09:30, Juying Ballroom)
Session Chair: Bruce Thomas

08:30 - 08:45 Animated Stickies: Fast Video Projection Mapping onto a Markerless Plane through a Direct Closed-Loop Alignment. Shingo Kagami, and Koichi Hashimoto. (TVCG)
08:45 - 09:00 Projection Distortion-based Object Tracking in Shader Lamp Scenarios. Niklas Gard, Anna Hilsmann, and Peter Eisert. (TVCG)
09:00 - 09:15 Towards a Switchable AR/VR Near-eye Display with Accommodation-Vergence and Eyeglass Prescription Support. Xinxing Xia, Yunqing Guan, Andrei State, Praneeth Chakravarthula, Kishore Rathinavel, Tat-Jen Cham, and Henry Fuchs. (TVCG)
09:15 - 09:30 Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics. Kishore Rathinavel, Gordon Wetzstein, and Henry Fuchs. (TVCG)

S5. Perception & Presence
(Oct 16th, 13:30 - 14:45, Juying Ballroom)
Session Chair: Steven Feiner

13:30 - 13:45 FVA: Modeling Perceived Friendliness of Virtual Agents Using Movement Characteristics. Tanmay Randhavane, Aniket Bera, Kyra Kapsaskis, Kurt Gray, and Dinesh Manocha. (TVCG)
13:45 - 14:00 Studying Exocentric Distance Perception in Optical See-Through Augmented Reality. Etienne Peillard, Ferran Argelaguet Sanz, Jean-Marie Normand, Anatole Lécuyer, and Guillaume Moreau.
14:00 - 14:15 Influence of Personality Traits and Body Awareness on the Sense of Embodiment in Virtual Reality. Diane Dewez, Rebecca Fribourg, Ferran Argelaguet Sanz, Ludovic Hoyet, Daniel R Mestre, Mel Slater, and Anatole Lécuyer.
14:15 - 14:30 Is Any Room Really OK? The Effect of Room Size and Furniture on Presence, Narrative Engagement, and Usability During a Space-Adaptive Augmented Reality Game. Jae-eun Shin, Hayun Kim, Callum Parker, Hyung-il Kim, Seo Young Oh, and Woontack Woo.
14:30 - 14:45 Effects of "Real-World" Visual Fidelity on AR Interface Assessment: A Case Study Using AR Head-up Display Graphics in Driving. Coleman J Merenda, Joseph L Gabbard, Chihiro Suga, and Teruhisa Misu.

S6. Locomotion
(Oct 16th, 15:15 - 16:30, Juying Ballroom)
Session Chair: Guofeng Zhang

15:15 - 15:30 Sick Moves! Motion Parameters as Indicators of Simulator Sickness. (TVCG)
15:30 - 15:45 Walking Your Virtual Dog: Analysis of Awareness and Proxemics with Simulated Support Animals in Augmented Reality. Nahal Norouzi, Kangsoo Kim, Myungho Lee, Ryan Schubert, Austin Erickson, Jeremy Bailenson, Gerd Bruder, and Greg Welch.
15:45 - 16:00 Prediction of Discomfort due to Egomotion in Immersive Videos for Virtual Reality. Suprith Balasubramanian, and Rajiv Soundararajan.
16:00 - 16:15 Accurate and Fast Classification of Foot Gestures for Virtual Locomotion. Xinyu Shi, Junjun Pan, Zeyong Hu, Juncong Lin, Shihui Guo, Minghong Liao, Ye Pan, and Ligang Liu.
16:15 - 16:30 Estimation of Rotation Gain Thresholds Considering FOV, Gender, and Distractors. Niall L Williams, and Tabitha C. Peck. (TVCG)

S7. Multimodal and Long Term Usage
(Oct 17th, 08:30 - 09:45, Juying Ballroom)
Session Chair: Daisuke Iwai

08:30 - 08:45 Face/On: Multi-Modal Haptic Feedback for Head-Mounted Displays in Virtual Reality. Dennis Wolf, Michael Rietzler, Leo Hnatek, and Enrico Rukzio. (TVCG)
08:45 - 09:00 Non-Visual Cues for View Management in Narrow Field of View Augmented Reality Displays. Alexander Marquardt, Christina Trepkowski, Tom David Eibich, Jens Maiero, and Ernst Kruijff.
09:00 - 09:15 Is It Cold in Here or Is It Just Me? Analysis of Augmented Reality Temperature Visualization for Computer-Mediated Thermoception. Austin Erickson, Ryan Schubert, Kangsoo Kim, Gerd Bruder, and Greg Welch.
09:15 - 09:30 DeepTaste: Augmented Reality Gustatory Manipulation with GAN-based Real-time Food-to-Food Translation. Kizashi Nakano, Daichi Horita, Nobuchika Sakata, Kiyoshi Kiyokawa, Keiji Yanai, and Takuji Narumi.
09:30 - 09:45 Mixed Reality Office System Based on Maslow's Hierarchy of Needs: Towards the Long-Term Immersion in Virtual Environments. Jie Guo, Dongdong Weng, Zhenliang Zhang, Haiyan Jiang, Yue Liu, Been-Lirn Duh, and Yongtian Wang.

S8. Collaboration and Entertainment
(Oct 17th, 11:15 - 12:30, Juying Ballroom)
Session Chair: Kiyoshi Kiyokawa

11:15 - 11:30 Conveying spatial awareness cues in xR collaborations. Andrew Irlitti, Thammathip Piumsomboon, Daniel Jackson, and Bruce H Thomas. (TVCG)
11:30 - 11:45 Improving Information Sharing and Collaborative Analysis for Remote GeoSpatial Visualization Using Mixed Reality. Tahir Mahmood, Willis Fulmer, Neelesh Mungoli, Jian Huang, and Aidong Lu.
11:45 - 12:00 Sharing Manipulated Heart Rate Feedback in Collaborative Virtual Environments. Arindam Dey, Hao Chen, Ashkan F. Hayati, Mark Billinghurst, and Robert W. Lindeman.
12:00 - 12:15 ObserVAR: Visualization System for Observing Virtual Reality Users using Augmented Reality. Santawat Thanyadit, Parinya Punpongsanon, and Ting-Chuen Pong.
12:15 - 12:30 Understanding Users' Preferences for Augmented Reality Television. Irina Popovici, and Radu-Daniel Vatavu.

S9. Selection and Text Entry
(Oct 17th, 13:30 - 14:45, Juying Ballroom)
Session Chair: Ian Williams

13:30 - 13:45 ReconViguRation: Reconfiguring Physical Keyboards in Virtual Reality. Daniel Schneider, Alexander Otte, Travis Gesslein, Philipp Gagel, Bastian Kuth, Mohamad Shahm Damlakhi, Oliver Dietz, Eyal Ofek, Michel Pahud, Per Ola Kristensson, Jörg Müller, and Jens Grubert. (TVCG)
13:45 - 14:00 Pointing and Selection Methods for Text Entry in Augmented Reality Head Mounted Displays. Wenge Xu, Hai-Ning Liang, Anqi He, and Zifan Wang.
14:00 - 14:15 Performance Envelopes of Virtual Keyboard Text Input Strategies in Virtual Reality. John J Dudley, Hrvoje Benko, Daniel Wigdor, and Per Ola Kristensson.
14:15 - 14:30 Enhanced Geometric Techniques for Point Marking in Model-Free Augmented Reality. Wallace S Lages, Yuan Li, Lee Lisle, Tobias Höllerer, and Doug Bowman.
14:30 - 14:45 The Importance of Intersection Disambiguation for Virtual Hand Techniques. Alec G Moore, Marwan Kodeih, Anoushka Singhania, Angelina Wu, Tassneen Bashir, and Ryan P. McMahan.

S10. Training and Learning
(Oct 17th, 15:15 - 16:45, Juying Ballroom)
Session Chair: Michele Fiorentino

15:15 - 15:30 Annotation vs. Virtual Tutor: Comparative Analysis on the Effectiveness of Visual Instructions in Immersive Virtual Reality. Hyeopwoo Lee, Hyejin Kim, Diego Vilela Monteiro, Youngnoh Goh, Daseong Han, Hai-Ning Liang, Hyun Seung Yang, and Jinki Jung.
15:30 - 15:45 Investigating Cyclical Stereoscopy Effects over Visual Discomfort and Fatigue in Virtual Reality while Learning. Alexis D. Souchet, Stéphanie Philippe, Floriane Ober, Aurélien Lévêque, and Laure Leroy.
15:45 - 16:00 A Comparison of Desktop and Augmented Reality Scenario Based Training Authoring Tools. Andrés N Vargas González, Senglee Koh, Katelynn Kapalo, Patrick Garrity, Robert Sottilare, Mark Billinghurst, and Joseph LaViola.
16:00 - 16:15 Measuring Cognitive Load and Insight: A Methodology Exemplified in a Virtual Reality Learning Context. Jonny Collins, Holger Regenbrecht, Tobias Langlotz, Yekta Said Can, Cem Ersoy, and Russell Butson.
16:15 - 16:45 Acceptance and Effectiveness of a Virtual Reality Public Speaking Training. Fabrizio Palmas, Jakub Cichor, David A. Plecher, and Gudrun Klinker.

Poster Teasers Group (Oct 15th, 16:30 - 17:30, Juxian Ballroom)
Poster Group A (Oct 15th, 17:30 - 19:30, Juxian Ballroom)

Board ID: Paper title and authors

A01 Low-Cost Real-Time Mental Load Adaptation for Augmented Reality Instructions - A Feasibility Study. Dennis Wolf, Tobias Wagner, and Enrico Rukzio.
A02 A Scalable and Long-term Wearable Augmented Reality System for Order Picking. Wei Fang, Siyao Zheng, and Zhen Liu.
A03 A Preliminary Exploration of Montage Transitions in Cinematic Virtual Reality. Ruochen Cao, James A. Walsh, Andrew Cunningham, Carolin Reichherzer, Subrata Dey, and Bruce H Thomas.
A04 Exploring the use of Augmented Reality in a Kinesthetic Learning Application Integrated with an Intelligent Virtual Embodied Agent. Muhammad Zahid Iqbal, Eleni Mangina, and Abraham G. Campbell.
A05 Filtering Mechanisms of Shared Social Surrounding Environments in AR. Alaeddin Nassani, Gun Lee, Mark Billinghurst, and Robert W. Lindeman.
A06 Design of an AR based System for Group Piano Learning. Minya Cai, Muhammad Alfian Amrizal, Toru Abe, and Takuo Suganuma.
A07 Merging Live and Static 360 Panoramas inside 3D Scene for Mixed Reality Remote Collaboration. Theophilus Hua Lid Teo, Gun Lee, Mark Billinghurst, and Matt Adcock.
A08 Kuroko Paradigm: Implications Augmenting Physical Interaction with AR Avatars. Tianyang Gao, and Yuta Itoh.
A09 SceneCam: Improving Multi-Camera Remote Collaboration Using Augmented Reality. Troels Ammitsbøl Rasmussen, and Weidong Huang.
A10 AR Tips: Augmented First-Person View Task Instruction Videos. Gun Lee, Seungjun Ahn, William Hoff, and Mark Billinghurst.
A11 A High-Precision Localization Device for Outdoor Augmented Reality. Marco Stranner, Philipp Fleck, Dieter Schmalstieg, and Clemens Arth.
A12 Smart Haproxy: A Novel Vibrotactile Feedback Prototype Combining Passive and Active Haptic in AR Interaction. Mengmeng Sun, Weiping He, Li Zhang, and Peng Wang.
A13 A User Experience Study of Locomotion Design in Virtual Reality Between Adult and Minor Users. Zhijiong Huang, Yu Zhang, Kathryn C. Quigley, Ramya Sankar, and Allen Y Yang.
A14 A Deformation Method in a Wrapping Manner for Virtual Gingiva Based on Mass-Spring Model. Tian Ma, Yun Li, Jiaojiao Li, and Yuancheng Li.
A15 New System to Measure Motion-to-Photon Latency of Virtual Reality Head Mounted Display. Hang Xun, Yongtian Wang, and Dongdong Weng.
A16 Hololens AR - Using Vuforia-based Marker Tracking together with Text Recognition in an Assembly Scenario. Sebastian Knopp, Philipp Klimant, Robert Schaffrath, Eric Voigt, Rayk Fritzsche, and Christoph Allmacher.
A17 Augmented Reality-based Peephole Interaction Using Real Space Information. Masashi Miyazaki, and Takashi Komuro.
A18 WARP: Contributional Tracking Architecture towards a World Wide Augmented Reality Platform. Alexander Michael Sosin, and Yuta Itoh.
A19 Consolidating the Research Agenda of Augmented Reality Television with Insights from Potential End-Users. Irina Popovici, and Radu-Daniel Vatavu.
A20 A Low-Cost Drift-Free Optical-Inertial Hybrid Motion Capture System for High-Precision Human Pose Detection. Yue Li, Dongdong Weng, Dong Li, and Yihan Wang.
A21 SafeAR: AR Alert System Assisting Obstacle Avoidance for Pedestrians. HyeongYeop Kang, Geonsun Lee, and JungHyun Han.
A22 Easy Extrinsic Calibration of VR System and Multi-Camera based Marker-less Motion Capture System. Kosuke Takahashi, Dan Mikami, Mariko Isogawa, Sun Siqi, and Yoshinori Kusachi.
A23 Automatic Viewpoint Switching for Multi-view Surgical Videos. Tomohiro Shimizu, Kei Oishi, Hideo Saito, Hiroki Kajita, and Yoshifumi Takatsume.
A24 An MR Remote Collaborative Platform based on 3D CAD Models for Training in Industry. Peng Wang, Xiaoliang Bai, Mark Billinghurst, Shusheng Zhang, Dechuan Han, Hao Lv, Weiping He, Yuxiang Yan, Xiangyu Zhang, and Haitao Min.
A25 Location-based Augmented Reality In-situ Visualisation Applied for Agricultural Fieldwork Navigation. Mengya Zheng, and Abraham G. Campbell.
A26 Food Talks: Evaluating Visual and Interaction Principles for Representing Environmental and Nutritional Food Information in Augmented Reality. Emily Groves, Andreas Sonderegger, Delphine Ribes, and Nicolas Henchoz.

ISMAR 2019 14 Poster Teasers Group (Oct 15th, 16:30 - 17:30, Juxian Ballroom) Poster Group A (Oct 15th, 17:30 - 19:30, Juxian Ballroom)

Integrating AR and VR for Mobile Remote Collaboration. InvisibleRobot: Facilitating Robot Manipulationthrough A27 Hao Tang, Jeremy Venerella, Lakpa W Sherpa, Tyler J A42 Diminished Reality. Alexander Plopski, Ada Virginia Franklin, and Zhigang Zhu. Taylor, Elizabeth Jeanne Carter, and Henny Admoni. Visual and Proprioceptive Evaluation for Virtual Bicycle Ride. DroneCamo: Modifying Human-Drone Comfort via A43 A28 Xinli Wu, Qiang Zhou, Xin Li, Wenzhen Yang, and Zhigeng Augmented Reality. Atsushi Mori, and Yuta Itoh. Pan. Evaluating IVR in Primary School Classrooms. Yvonne PostAR: Design A Responsive Reading System with Multiple A44 Chua, Priyashri Kamlesh Sridhar, Haimo Zhang, Vipula A29 Interactions for Campus Augmented Poster. Shuo Liu, Dissanayake, and Suranga Nanayakkara. Seogsung Jang, and Woontack Woo. 3DUITK: An Opensource Toolkit for Thirty Years of Enhancing Rock Painting Tour Experience with Outdoor Three-Dimensional Interaction Research. Kieran William A45 A30 Augmented Reality. Qi Zhang, Xiaoyang Zhu, Haitao Yu, and May, Lan Hanan, Andrew Cunningham, and Bruce H Yongshi Jiang. Thomas. VesARlius: An Augmented Reality System for Large- Compiling VR/AR-Trainings from Business Process Group Co-Located Anatomy Learning. Felix Bork, Alexander A46 Models. Lucas Thies, Christoph Strohmeyer, Jens Ebert, A31 Lehner, Daniela Kugelmann, Ulrich Eck, Jens Waschke, and Marc Stamminger, and Frank Bauer. Nassir Navab. Towards a Switchable AR/VR Near-eye Display with Mental Fatigue of Long-term Office Tasks in Virtual Accommodation-Vergence and Eyeglass Prescription A32 Environment. Ruiying Shen, Dongdong Weng, Shanshan A47 Support. Xinxing Xia, Yunqing Guan, Andrei State, Chen, Jie Guo, and Hui Fang. Praneeth Chakravarthula, Kishore Rathinavel, Tat-Jen Multi-Vehicle Cooperative Military Training Simulation Cham, and Henry Fuchs. A33 System Based on Augmented Reality. Lei Fan, Jing Chen, Is It Cold in Here or Is It Just Me? Analysis of and Yuandong Miao. 
Augmented Reality Temperature Visualization for Industrial Use Case - AR Guidance Using Hololens for A48 Computer-Mediated Thermoception. Austin Erickson, Assembly and Disassembly of a Modular Mold, with Live Ryan Schubert, Kangsoo Kim, Gerd Bruder, and Greg A34 Streaming for Collaborative support. Sebastian Knopp, Welch. Philipp Klimant, and Christoph Allmacher. DepthMove: Leveraging Head Motions in the Depth A Two-point Map-based Interface for Architectural Dimension to Interact with Virtual Reality Head-Worn A35 A49 Walkthrough. Kan Chen, and Eugene Lee. Displays. Difeng Yu, Hai-Ning Liang, Xueshi Lu, Tianyu Zhang, and Wenge Xu. Why Don't We See More of Augmented Reality in Schools?. A36 Manoela Milena Oliveira da Silva, Rafael Roberto, Iulian Radu, A Shape Completion Component for Non-Rigid SLAM. Patricia Smith Cavalcante, and Veronica Teichrieb. A50 Yongzhi Su, Vladislav Golyanik, Narek Minaskan, Sk Aziz Ali, and Didier Stricker. Hand ControlAR: An Augmented Reality Application for A37 Learning 3D Geometry. Rui Cao, and Yue Liu. VPModel: High-Fidelity Product Simulation in a Virtual- A51 Physical Environment. Xin Min, Wenqiao Zhang, Shouqian Words In Kitchen: An Instance of Leveraging Virtual Reality A38 Sun, Nan Zhao, Siliang Tang, and Yueting Zhuang. Technology to Learn Vocabulary. Tianyu Jia, and Yue Liu. ReconViguRation: Reconfiguring Physical Keyboards in Holding Virtual Objects Using a Tablet for Tangible 3D Virtual Reality. Daniel Schneider, Alexander Otte, Travis A39 Sketching. Shouxia Wang, Weiping He, Bokai Zheng, Shuo A52 Gesslein, Philipp Gagel, Bastian Kuth, Mohamad Shahm Feng, Xiaoliang Bai, and Mark Billinghurst. Damlakhi, Oliver Dietz, Eyal Ofek, Michel Pahud, Per Ola Tie-Brake: Tie-based Wearable Device for Navigation with A40 Kristensson, Jörg Müller, and Jens Grubert. Brake Function. Yuan Yue, and Hiroaki Tobita. Camera Relocalization with Ellipsoidal Abstraction of Augmenting a Psoriasis-patient Doctor-dialogue through A53 Objects. 
Vincent Gaudillière, Gilles Simon, and Marie- A41 Intergrating Real Face and Maps of Psoriasis Pathology. Odile Berger. Yiping Jiang, and Dongdong Weng. Efficient 3D Reconstruction and Streaming for Group- A54 Scale Multi-Client Live Telepresence. Patrick Stotko, Stefan Krumpen, Michael Weinmann, and Reinhard Klein.

Poster Teasers Group (Oct 15th, 16:30 - 17:30, Juxian Ballroom)
Poster Group B (Oct 16th, 09:30 - 10:00, 12:15 - 13:15, 14:45 - 15:15)

Board ID  Paper title and authors

B01  Volumetric Representation of Human Body Parts Using Superquadrics. Ryo Hachiuma, and Hideo Saito.
B02  Deep Consistent Illumination in Augmented Reality. Xiang Wang, Kai Wang, and Shiguo Lian.
B03  Improving Hybrid Tracking System for First-Person Interaction in Immersive CAVE Environment. Yi Lyu, Xu Shuhong, Wei Fang, ChengCheng Wu, and Tianzhuang Cheng.
B04  Video Synthesis of Human Upper Body with Realistic Face. Zhaoxiang Liu, Huan Hu, Zipeng Wang, Kai Wang, Jinqiang Bai, and Shiguo Lian.
B05  Joint Inpainting of RGB and Depth Images by Generative Adversarial Network with a Late Fusion approach. Ryo Fujii, Ryo Hachiuma, and Hideo Saito.
B06  InteractionGAN: Image-Level Interaction Using Generative Adversarial Networks. Minjung Son, and Hyun Sung Chang.
B07  Blended-Keyframes for Mobile Mediated Reality Applications. Yu Xue, Diego Thomas, Frédéric Rayar, Hideaki Uchiyama, Rin-ichiro Taniguchi, and Baocai Yin.
B08  The Effect of Two Different Types of Human-computer Interactions on User's Emotion in Virtual Counseling Environment. Ziqi Tu, Dongdong Weng, Dewen Cheng, Ruiying Shen, Hui Fang, and Yihua Bao.
B09  Object Manipulation: Interaction for Virtual Reality on Multi-touch Screen. Jiafei Pan, and Dongdong Weng.
B10  Real-time 3D Hand Gesture Based Mobile Interactions Interface. Yunlong Che, Yuxiang Song, and Yue Qi.
B11  A Neural Virtual Anchor Synthesizer based on Seq2Seq and GAN Models. Zipeng Wang, Zhaoxiang Liu, Zezhou Chen, Huan Hu, and Shiguo Lian.
B12  Setforge - Synthetic RGB-D Training Data Generation to Support CNN-based Pose Estimation for Augmented Reality. Shu Zhang, Cheng Song, and Rafael Radkowski.
B14  Real-time Texturing for 6D Object Instance Detection from RGB Images. Pavel Rojtberg, and Arjan Kuijper.
B15  Compact Light Field Augmented Reality Display with Eliminated Stray Light Using Discrete Structures. Cheng Yao, Yue Liu, Dewen Cheng, and Yongtian Wang.
B16  Faithful Face Image Completion for HMD Occlusion Removal. Miao Wang, Xin Wen, and Shi-Min Hu.
B17  Reconstructing HDR Image from a Single Filtered LDR Image Based on a Deep Hybrid HDR Merger Network. Bin Liang, Dongdong Weng, Yihua Bao, Ziqi Tu, and Le Luo.
B18  OSTNet: Calibration Method for Optical See-Through Head-Mounted Displays via Non-Parametric Distortion Map Generation. Kiyosato Someya, Yuichi Hiroi, Makoto Yamada, and Yuta Itoh.
B19  A Projector Calibration Method Using a Mobile Camera for Projection Mapping System. Chun Xie, Hidehiko Shishido, Yoshinari Kameda, and Itaru Kitahara.
B20  Li-Fi for Augmented Reality Glasses: A Proof of Concept. Rene Kirrbach, Michael Faulwaßer, Benjamin Jakob, Tobias Schneider, and Alexander Noack.
B21  Real-Time Hand Model Estimation from Depth Images for Wearable Augmented Reality Glasses. Bill Zhou, Alex Yu, Joseph Menke, and Allen Y Yang.
B22  LE-HGR: A Lightweight and Efficient RGB-based Online Gesture Recognition Network for Embedded AR Devices. Hongwei Xie, Jiafang Wang, Shao Baitao, Mingyang Li, and Jian Gu.
B23  Deep Multi-State Object Pose Estimation for Augmented Reality Assembly. Yongzhi Su, Jason Rambach, Narek Minaskan, Paul Lesur, Alain Pagani, and Didier Stricker.
B24  Birds vs. Fish: Visualizing Out-Of-View Objects in Augmented Reality Using 3D Minimaps. Felix Bork, Ulrich Eck, and Nassir Navab.
B25  Realtime Water-hazard Detection and Visualisation for Autonomous Navigation and Advanced Driving Assistance. Juntao Li, and Chuong V Nguyen.
B26  Online Gesture Recognition Algorithm Applied to HUD Based Smart Driving System. Jingyao Wang, Jing Chen, Yuanyuan Qiao, Junyan Zhou, and Yongtian Wang.
B27  Improving Color Discrimination for Color Vision Deficiency (CVD) with Temporal-domain Modulation. Silviya Hasana, Yuichiro Fujimoto, Alexander Plopski, Masayuki Kanbara, and Hirokazu Kato.
B28  Dual-Model Approach for Engineering Collision Detection in the CAVE Environment. Yang Xue, Xu Shuhong, Lijun Wang, Chaofan Dai, and Yufen Wu.
B29  Barrier Detection and Tracking from Parameterized Lidar Data. Wen Xing, Lifeng Zhu, and Aiguo Song.
B30  Multi-Level Scene Modeling and Matching for Smartphone-Based Indoor Localization. Lidong Chen, Yin Zou, Yaohua Chang, Jinyun Liu, Benjamin Lin, and Zhigang Zhu.
B31  Indoor Scene Reconstruction: From Panorama Images to CAD Models. Chongyang Luo, Bochao Zou, Xiangwen Lyu, and Haiyong Xie.
B32  A Fast Method for Large-scale Scene Data Acquisition and 3D Reconstruction. Yao Li, Yang Xie, Xijing Wang, Xun Luo, and Yue Qi.
B33  Optimization for RGB-D SLAM based on plane geometrical constraint. Ningsheng Huang, Jing Chen, and Yuandong Miao.
B35  Inter-Brain Connectivity: Comparisons between Real and Virtual Environments Using Hyperscanning. Amit Barde, Nastaran Saffaryazdi, Pawan Withana, Nakul Patel, and Mark Billinghurst.
B36  Less is More: Using Spatialised Auditory and Visual Cues for Target Acquisition in a Real-World Search Task. Amit Barde, Matt Ward, Robert W. Lindeman, and Mark Billinghurst.
B37  FragmentFusion: A light-weight SLAM pipeline for dense reconstruction. Darius Rueckert, Matthias Innmann, and Marc Stamminger.
B38  Mid-Air Haptic Bio-Holograms in Mixed Reality. Teodor Romanus, Sam Frish, Mykola Maksymenko, Loïc Corenthy, William Frier, and Orestis Georgiou.
B39  Perceptual MR Space: Interactive Toolkit for Efficient Environment Reconstruction in Mobile Mixed Reality. Chong Cao, and Jiayi Sun.
B40  Integrating Peripheral Interaction into Augmented Reality Applications. Ovidiu Andrei Schipor, Radu-Daniel Vatavu, and Wenjun Wu.
B41  6DoF Pose Estimation with Object Cutout based on a Deep Autoencoder. Xin Liu, Jichao Zhang, Xian He, Xiuqiang Song, and Xueying Qin.
B42  NEAR: The NetEase AR Oriented Visual Inertial Dataset. Cheng Wang, Yu Zhao, Jiabin Guo, Ling Pei, Yue Wang, and Haiwei Liu.
B43  Visualization-Guided Attention Direction in Dynamic Control Tasks. Jason Orlosky, Chang Liu, Denis Kalkofen, and Kiyoshi Kiyokawa.
B44  Large-Scale Optical Tracking System. Dong Li, Dongdong Weng, Yue Li, and Hang Xun.
B45  HIGS: Hand Interaction Guidance System. Yao Lu, and Walterio Mayol-Cuevas.
B46  FrictionHaptics: Encountered-type Haptic Device for Tangential Friction Emulation. Meguro Ryo, Photchara Ratsamee, Haruo Takemura, Tomohiro Mashita, and Yuki Uranishi.
B47  Utilizing Multiple Calibrated IMUs for Enhanced Mixed Reality Tracking. Adnane Jadid, Linda Rudolph, Frieder Pankratz, and Gudrun Klinker.
B48  Evaluating Text Entry in Virtual Reality Using a Touch-sensitive Physical Keyboard. Alexander Otte, Daniel Schneider, Tim Menzner, Travis Gesslein, Philipp Gagel, and Jens Grubert.
B49  Wearable RemoteFusion: A Mixed Reality Remote Collaboration System with Local Eye Gaze and Remote Hand Gesture Sharing. Prasanth Sasikumar, Lei Gao, Huidong Bai, and Mark Billinghurst.
B50  Estimation of Rotation Gain Thresholds Considering FOV, Gender, and Distractors. Niall L Williams, and Tabitha C. Peck.
B51  Prediction of Discomfort due to Egomotion in Immersive Videos for Virtual Reality. Suprith Balasubramanian, and Rajiv Soundararajan.
B52  Projection Distortion-based Object Tracking in Shader Lamp Scenarios. Niklas Gard, Anna Hilsmann, and Peter Eisert.
B53  FVA: Modeling Perceived Friendliness of Virtual Agents Using Movement Characteristics. Tanmay Randhavane, Aniket Bera, Kyra Kapsaskis, Kurt Gray, and Dinesh Manocha.
B54  Sick Moves! Motion Parameters as Indicators of Simulator Sickness. Tobias Feigl, Daniel Roth, Stefan Gradl, Markus Gerhard Wirth, Michael Philippsen, Marc Erich Latoschik, Bjoern M Eskofier, and Christopher Mutschler.

Demonstrations (Oct 16th/17th, 10:00 - 17:30, Meeting Room 3)

Number  All authors

1  Dennis Wolf, Michael Rietzler, Leo Hnatek, and Enrico Rukzio
2  Zhiyu Huo, Lingyu Wang, Yi Han, Yigang Fang, and Cheng Lu
3  Wallace S. Lages, Yuan Li, Lee Lisle, Tobias Höllerer, and Doug A. Bowman
4  Yu RiJi, Ye Shishu, Shi Yuhan, Liu Shun, Jiang Shuai, and Guo Xiaofang
5  Stephen Thompson, Andrew Chalmers, Daniel Medeiros, and Taehyun Rhee
6  Guillaume Quiniou, Frederic Rayar, and Diego Thomas
7  Yu Riji, Liu Shun, and Wang Mengjie
8  João Paulo Lima, João Otávio de Lucena, Diego Thomas, and Veronica Teichrieb
9  Yongzhi Su, Jason Rambach, Nareg Minaskan, Paul Lesur, Alain Pagani, and Didier Stricker
10  Zhixiong Lu, Yongtao Hu, and Jingwen Dai
11  Margaret Cook, Amber Ackley, Karla Chang Gonzalez, Austin Payne, Caleb Kicklighter, Michelle Pine, Timothy McLaughlin, and Jinsil Hwaryoung Seo
12  Ted Romanus, Sam Frish, Mykola Maksymenko, William Frier, Loïc Corenthy, and Orestis Georgiou
13  Shun Odajima, and Takashi Komuro
14  Oleh Gavrilyuk, and Mykola Ursaty
15  Kieran W. May, Ian Hanan, Andrew Cunningham, and Bruce H. Thomas
16  Ken Moteki, and Takashi Komuro
17  Yasuhira Chiba, JongMoon Choi, Takeo Hamada, and Noboru Koshizuka
18  Kan Chen, and Eugene Lee
19  Valentin Vasiliu, and Gábor Sörös
20  Shirin Sadri, Shalva A. Kohen, Carmine Elvezio, Shawn H. Sun, Alon Grinshpoon, Gabrielle Loeb, Naomi Basu, and Steven Feiner
21  Shingo Kagami, and Koichi Hashimoto
22  Alexander Otte, Daniel Schneider, Tim Menzner, Travis Gesslein, Philipp Gagel, and Jens Grubert
23  Felix Bork, Alexander Lehner, Daniela Kugelmann, Ulrich Eck, Jens Waschke, and Nassir Navab


Workshops

W3. Mixed Reality and Accessibility (Oct 14th, 09:00 - 17:00, Meeting Room3)

Mixed Reality (MR) refers to technology in which real and virtual world objects are presented together in one environment. This technology holds great potential for accessibility, for example facilitating different daily tasks by incorporating virtual assistive information into the real environment, and creating MR environments to support training and rehabilitation for people with different needs and disabilities. Despite its potential, as a new form of technology with hardware and software limitations, MR itself could pose unique challenges to people with disabilities, involving technical, privacy, ethical, and accessibility concerns. As more and more MR products and research prototypes are designed to address accessibility issues, it is time to discuss opportunities and challenges for MR applications for people with disabilities, as well as to derive accessibility standards for MR technology itself.

Website: https://www.mr-accessibility.com/
Organizers:
Yuhang Zhao, Cornell Tech
Shiri Azenkot, Cornell Tech
Steven Feiner, Columbia University
Leah Findlater, University of Washington
Meredith Ringel Morris, Microsoft Research
Martez Mott, Microsoft Research
Yuanchun Shi, Tsinghua University
Chun Yu, Tsinghua University

W4. Augmenting Cities and Architecture with Immersive Technologies (Oct 14th, 09:00 - 12:30, Meeting Room7)

We invite interested parties from a variety of backgrounds to participate in our workshop and discuss how immersive technologies can best contribute to augmenting cities and architecture. Immersive technologies such as augmented reality (AR), virtual reality (VR), and mixed reality (MR) have the potential to augment experiences within cities and the process of designing architecture. Moreover, the roll-out of low-latency, high-speed 5G networks has begun. However, more work is needed to understand specific applications within these areas and how they can be designed. Therefore, the main aim of the workshop is to discuss and ideate use-cases for creating situated immersive AR, VR, and MR applications for the purpose of making cities more engaging and helping design the cities of the future over 5G networks.

Website: https://augmentingcities.wordpress.com/
Organizers:
Callum Parker, The University of Sydney
Soojeong Yoo, The University of Sydney
Youngho Lee, Mokpo National University
Waldemar Jenek, Queensland University of Technology
Junseong Bang, University of Science & Technology (UST)
Holger Regenbrecht, University of Otago



W6. AR and MR Technology for Ubiquitous Educational Learning Experience (Oct 14th, 09:00 - 17:00, Meeting Room9)

Augmented Reality and Mixed Reality technologies have been around for more than 40 years. Recent advances in both all-in-one headsets and mobile platforms have opened up the possibility of adopting the technology in ways not previously possible. Augmented Reality and Mixed Reality technologies can now be included ubiquitously as part of classroom technology to enhance the learning experience. The purpose of the workshop is to bring together researchers, developers, and practitioners interested in using Augmented Reality and Mixed Reality technology to enhance the educational learning experience. Many studies in the field have demonstrated the effectiveness of instruction based on immersive technologies in improving students' learning outcomes in K-12 and higher education.

Website: https://simonmssu.github.io/ismar-workshop-edu.htms
Organizers:
Simon Su, CCDC Army Research Laboratory DoD Supercomputing Resource Center
Xubo Yang, Shanghai Jiao Tong University
Henry B.L. Duh, La Trobe University
Hai-Ning Liang, Xi'an Jiaotong-Liverpool University

W1. Mixed/Augmented Reality and Mental Health (Oct 18th, 09:00 - 12:30, Meeting Room3)

The goal of this workshop is to provide an opportunity for VR/AR/MR researchers and health researchers to submit their original ideas, work-in-progress contributions, demos, and position papers on the design and/or evaluation of new mental health technologies aiding education, self-assessment, support to affected individuals, and intervention. We are interested in theoretically, empirically, and/or methodologically oriented contributions focused on supporting and educating mental health delivered through novel designs and evaluations of AR/VR/MR systems, with or without the support of additional technologies such as games, social media, and the Internet of Things. In addition to potential benefits, we would also like to receive contributions on the potential dangers of using such technologies to address mental health issues.

Website: https://auckland.op.ac.nz/ismar-2019-workshop-proposal
Organizers:
Nilufar Baghaei, Otago Polytechnic Auckland Campus (OPAIC)
Sylvia Hach, Unitec Institute of Technology
John Naslund, Harvard Medical School



W2. Extended Reality for Good (XR4Good) (Oct 18th, 09:00 - 17:00, Meeting Room5)

Extended Reality (XR) is becoming mainstream, with major corporations releasing consumer products and non-experts using the systems to solve real-world problems outside the traditional XR laboratory. With recent research and technological advances, it is now possible to use these technologies in almost all domains and places. This provides a greater opportunity to create applications whose intended impact on society goes beyond entertainment. Today the world is facing many challenges, including, but not limited to, healthcare, the environment, and education. Now is the time to explore how XR might be used to solve widespread societal challenges. After three successful gatherings, the 4th Extended Reality for Good (XR4Good) workshop will continue to bring together researchers, developers, and industry partners to present and promote research that aims to solve real-world problems using augmented and virtual realities. It will also continue to provide a platform for growing a research community that discusses the challenges and opportunities of creating Extended Reality for Good: XR that helps humankind and society in more impactful ways.

Website: http://forgoodxr.org/
Organizers:
Arindam Dey, University of Queensland
Mark Billinghurst, University of Auckland
Greg Welch, University of Central Florida
Gun Lee, University of South Australia
Stephan Lukosch, TU Delft

W5. XR-aided Design (XRAD): next generation of CAD tools (Oct 18th, 13:30 - 17:00, Meeting Room7)

This workshop aims to overcome the limitations of current CAD tools by inciting research into the next generation of CAD interfaces combined with virtual, augmented, mixed, and cross reality interactive environments. The main focus of this workshop is on the theoretical and practical dissection of these technologies, their relationship to one another, and their unique abilities to realize theoretical XR capabilities and design principles in multimodal immersive environments. We appreciate contributions from both commercial and academic sources, from researchers as well as practitioners.

Website: http://web.engr.oregonstate.edu/~deamicir/ISMAR2019/index.html
Organizers:
Michele Fiorentino, Polytechnic University of Bari
Raffaele de Amicis, Oregon State University


Tutorials

T1. OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit (Oct 14th, 13:30 - 17:00, Meeting Room7)

The aim of this tutorial is to present an open-source augmented reality development kit called OpenARK. OpenARK was founded by Dr. Allen Yang at UC Berkeley in 2015. Since then, the project has received high-impact awards and visibility. Currently, OpenARK is being used by several industrial alliances including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals. In the same year in China, OpenARK also won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition, the largest such competition in China. OpenARK also receives funding support from a research grant by the Intel RealSense project and the NSF.

Website: https://vivecenter.berkeley.edu/courses/openark-ismar-2019-tutorial/
Presenters:
Allen Y. Yang, UC Berkeley
Joseph Menke, UC Berkeley
Luisa Caldas, UC Berkeley

T2. Interaction Paradigms in MR – Lessons from Art (Oct 18th, 13:30 - 17:00, Meeting Room7)

This tutorial aims to share our best practices in teaching artists to best express themselves via interactive technology, but also, in turn, what we have learnt from being part of their creative journey. Examples include navigation in VR with LeapMotion, narrative change based on the user's head orientation, interaction with mathematical equation visualisation using hand gestures, an audio-visual interface where participants use their voice to control lighting in the scene, and controlling the flow of particles through breathing as part of a meditation experience. The tutorial will also include a review of the state of the art in brain-computer interfaces and neurophysiological interfaces with XR.

Website: https://home.doc.gold.ac.uk/~xpan001/humancentric-ARVR/
Presenters:
Xueni Pan, University of London
William Latham, University of London
Doron Friedman, Sammy Ofer School of Communications

Tutorial

T3. Bridging the gap between research and practice in AR (Oct 18th, 09:00 - 12:30, Meeting Room7)

AR now involves many domains, and this interdisciplinarity boosts AR research and applications. However, many research efforts cannot readily be applied in practice, since some studies lacked user feedback and thus tackled only academic issues. This tutorial addresses this gap between research and practice and shares our experiences in bridging it.

The first topic is the engineering gap in traditional 6DOF tracking. We will introduce the NetEase AR oriented VIO dataset and new metrics that account for user experience rather than the normally used ATE or RPE. The differences from previous datasets and metrics will be illustrated. We hope our dataset can help researchers develop more user-friendly VIOs for AR.

The second topic is the gaps in AI-based virtual content generation for AR, taking our melody driven choreography (MDC) research and embodied conversational agent (ECA) as examples. Both MDC and ECA have already successfully powered multiple video games from NetEase.

The third topic is a data-driven approach to natural head animation generation for human-like virtual characters who are talking. The synthesized head movements can reflect speech prosody simultaneously.

Website: https://ar.163.com/ismar2019_tutorial
Presenters:
Cheng Wang, Hangzhou EasyXR co., ltd.
Xiang Wen, Fuxi AI Lab, NetEase
Zhimeng Zhang, Fuxi AI Lab, NetEase

Doctoral Consortium (Oct 14th, 09:00 - 12:30, Meeting Room5)

Time  Title and speaker
09:00 - 09:10  Welcome + Ice breaker
09:10 - 10:45  Talks 1-6
09:15 - 09:25  Exploring Perceptual and Cognitive Effects of Extreme Augmented Reality Experiences. Daniel Eckhoff
09:25 - 09:40  Interaction techniques for Remote Collaboration by mixing 360 Panoramas and 3D Reconstruction. Theophilus Teo
09:40 - 09:55  A Virtual On-board System of an Autonomous Vehicle for Smooth Trading of Driving Authority. Yuki Sakamura
09:55 - 10:10  Deformable Objects for Virtual Environments. Catherine Taylor
10:10 - 10:25  Social Perception of Pedestrians and Virtual Agents using Movement Features. Tanmay Randhavane
10:25 - 10:40  Mediation of Multispectral Vision and its Impacts on User Perception. Austin Erickson
10:40 - 10:55  Coffee Break
10:55 - 12:40  Talks 7-13
10:55 - 11:10  Mixed Reality for Knowledge Work of the Future. Daniel Schneider
11:10 - 11:25  Designing Hybrid Spaces: The Importance of Mixed Reality Visualization to Urban Projects. Marina Lima Medeiros
11:25 - 11:40  Volumetric and Varifocal-Occlusion Augmented Reality Displays. Kishore Rathinavel
11:40 - 11:55  AR/MR Remote Collaboration on Physical Tasks for Assembly/Training in Industry. Peng Wang
11:55 - 12:10  Interactive Visualizations for Watching Boxing Matches in VR. Tao Tao
12:10 - 12:25  Accurate outdoor perception based on multi-modal data. Ruyu Liu
12:25 - 12:40  Design and Prototyping of Wide Field of View Optical See-through Head-Mounted Displays with Per-pixel Occlusion Capability. Yan Zhang

SLAM-for-AR Challenge (Oct 14th, 13:30 - 17:00, Meeting Room5)

As one of the most important techniques for AR applications, SLAM has reached a level of maturity and entered the product stage. At this stage, more and more efforts are being made to improve the overall performance of a SLAM system rather than individual technical indicators. In addition, compared to other applications such as robotics, deploying SLAM in AR products imposes higher requirements for handling a variety of challenging situations, since a home user may not move the AR device carefully and the real environment may be quite complex. In this context, this year we launch a SLAM competition specifically designed for AR applications, emphasizing the overall performance of SLAM systems. All teams in the finals will give presentations about the techniques used in their systems.

Website: http://www.ismar19.org/newenweb/news/254.html
Organizers:
Guofeng Zhang, Zhejiang University, China
Jing Chen, Beijing Institute of Technology, China
Guoquan Huang, University of Delaware, USA
Sponsored by SenseTime

Industry Session (Oct 16th, 11:00 - 12:30, Juying Ballroom)

SenseAR: Augment the beauty of Reality. Dr. Nina Luan
The practice of AR on smartphone. Ph.D Guofeng Wang
Augmented Reality Powered by Dell Technologies. Ms. Haifang Luo
Real time 3D reconstruction of digital human for new media. GM. Xiong Wei
AR/CV Empowering Smart Industry. Dr. Jiangwei Zhong
3D Human Body Modeling from a Single Image. Dr. Shuxue Quan
Designs and Evaluation of AR Near Eye Display with Freeform Optics. Prof. Dewen Cheng


Olympic Village Night

The dinner will be held at the Bird's Nest Seafood Buffet Restaurant from 7pm to 9pm on Wednesday, October 16. The restaurant is located on the 3rd floor of the Bird's Nest Cultural Center, 1 National Stadium South Road, and has a panoramic view of the interior of the Bird's Nest and a light show at night. The restaurant offers a wide variety of ingredients; the chefs cook the food at an interactive food court, where guests can enjoy both the sight and the taste. During the meal, you can also see the traditional Chinese art of opera face-changing.

About the Bird's Nest (National Stadium)
The Bird's Nest is officially called the National Stadium. It is located in the northern part of Beijing, and in 2008 it hosted the Beijing Olympics. Due to its magnificent architecture and the international competitions that are regularly hosted there, it has become a famous landmark in recent years. The exterior of the Bird's Nest is made up of intersecting steel sections that look like branches forming a huge nest. The unique shape of the stadium is truly stunning. The Olympic cauldron, shaped like a torch and originally mounted on top of the Bird's Nest, has been moved to the Torch Square northeast of the stadium. It is still visible from outside the stadium. Nearly 100,000 spectator seats fill the interior of the Bird's Nest, and their bright red color is magnificent.

Transportation to the Bird's Nest (National Stadium)
Return transportation from the conference venue will be provided. Details of the transportation will be posted during ISMAR 2019.


SenseTime: AI for a Better Tomorrow

Office locations: Hong Kong, Beijing, Shenzhen, Shanghai, Chengdu, Hangzhou, Kyoto, Tokyo, Singapore

Join us and start the journey.
Send CV to: [email protected]
SenseTime Official Recruitment Website: Joinus.sensetime.com
Facebook Page: SenseTime Group
LinkedIn: SenseTime 商汤科技

SenseTime Overview

SenseTime is a global company focused on developing innovative AI technologies that positively contribute to economies, society and humanity. It is also the world’s most-funded AI pure-play with the highest valuation.

We have made a number of technological breakthroughs, one of which is the first computer system in the world to achieve higher detection accuracy than the human eye. With our roots in the academic world, we invest in fundamental research to deepen our understanding and advance the state of the art in AI technology. We are a global team of talented individuals, more than half of whom are dedicated to research and development. This has made us a leading global AI algorithm provider and one of the most prolific contributors of papers in the research community.

The deep learning and computer vision technologies we have developed already power industries spanning education, healthcare, smart cities, automotive, communications and entertainment. Today, our technologies are trusted by over 700 customers and partners around the world to help address real-world challenges. Going forward, we strive to empower more industries with our AI platform and to build a stronger AI ecosystem together with industry and academia.

SenseTime has offices in Hong Kong, Mainland China, Japan, and Singapore. For more information, please visit SenseTime’s website as well as LinkedIn, Twitter and Facebook pages.

WE ARE HIRING

01 Research Scientist: Computational Imaging / Computational Photography (Overseas)
02 Research Engineer: Computational Imaging / Computational Photography (Overseas)
03 Researcher: 3D Computer Vision / SLAM / AR (Hangzhou/Beijing, China)
04 Researcher: 3D Computer Vision / Computer Graphics (Shenzhen/Beijing, China)
05 Researcher: Medical Imaging Analysis / Deep Learning (Beijing/Shanghai, China)
06 Researcher: Computer Vision / Image Processing (Beijing/Shanghai, China)


BEIJING