FINAL PROGRAM • SHORT COURSES • EXHIBITS • DEMONSTRATION SESSION • PLENARY TALKS • INTERACTIVE PAPER SESSION • SPECIAL EVENTS • TECHNICAL SESSIONS

13 – 17 January 2019 • Burlingame, CA, USA

EI2019 Conference Locations*/Acronyms/Names

LOCATION*    ACRONYM   CONFERENCE NAME
Regency C    3DMP      3D Measurement and Data Processing
GP FG        AVM       Autonomous Vehicles and Machines
Cypress B    COLOR     Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications
Harbour AB   COIMG     Computational Imaging XVII
GP BC        ERVR      The Engineering Reality of Virtual Reality
GP A         HVEI      Human Vision and Electronic Imaging
Regency C    IPAS      Image Processing: Algorithms and Systems XVII
GP E         IQSP      Image Quality and System Performance XVI
Regency C    IMSE      Image Sensors and Imaging Systems
Harbour AB   IMAWM     Imaging and Multimedia Analytics in a Web and Mobile World
Regency B    IRIACV    Intelligent Robotics and Industrial Applications using Computer Vision
Cypress A    MAAP      Material Appearance
Cypress C    MWSF      Media Watermarking, Security, and Forensics
Regency AB   PMII      Photography, Mobile, and Immersive Imaging
GP BC        SD&A      Stereoscopic Displays and Applications XXX
Harbour B    VDA       Visualization and Data Analysis

* GP = Grand Peninsula. Please see program for location of joint sessions.

Hyatt San Francisco Airport Floor Plans

[Floor plan diagrams: Lobby Level and Atrium Level of the Hyatt Regency San Francisco Airport, showing the Grand Peninsula Ballroom and Foyer, Regency Ballroom and Foyer, Harbour Room, Cypress Room, Sandpebble Room, Bayside Room, Board Rooms, The Grove, 3 Sixty Restaurant and Bar, Sky Lounge, conference office, and front desk.]

13-17 January 2019

Hyatt Regency San Francisco Airport 1333 Bayshore Highway Burlingame, California USA

2019 Symposium Co-Chair Welcome

2019 Symposium Co-Chairs: Andrew Woods, Curtin University (Australia); Radka Tezaur, Intel Corporation (USA)
2019 Short Course Co-Chairs: Jonathan B. Phillips, Google, Inc. (USA); Susan Farnand, Rochester Institute of Technology (USA); Arnaud Darmont*, APHESA SPRL (Belgium)

On behalf of IS&T (the Society for Imaging Science and Technology) and the Electronic Imaging (EI) community, we would like to welcome you to the 31st annual International Symposium on Electronic Imaging.

The EI Symposium is the premier international meeting in this exciting technological area, one that brings together colleagues from academia and industry to discuss topics on the forefront of research and innovation.

This whole week offers a great opportunity to learn about the latest research in image sensing, processing, display, and perception from leading experts from around the world, who innovate and collaborate on the design of imaging systems for consumer photography, autonomous driving, medical imaging, the arts and entertainment, scientific applications, and many other fields.

This year we have organised special theme days in Autonomous Vehicle Imaging, 3D Imaging, and AR/VR and Lightfield Imaging to help provide a focus on these important emerging areas. These, as well as "virtual themes" of medical imaging and , are denoted in the Combined Paper Schedule beginning on page 23.

We encourage you to take full advantage of the many special events and networking opportunities available at the symposium, including the plenary presentations, theme-day symposium-wide special sessions, individual conference keynotes (see full list on page 10), the Monday evening EI Reception and 3D Theatre, Tuesday's Demonstration Session, Wednesday's Interactive (Poster) Paper Session and Meet the Future event, and other special events arranged by various conferences. There is a lot to keep you enriched, educated, and entertained.

Make this week yours and take advantage of it fully! You can create your own program using the Itinerary Planner available on the EI website. You can attend talks in any of the 16 different technical conferences, take short courses, visit the exhibits, and so much more.

Learn more about the work presented at EI2019 by accessing the open access EI Conference Proceedings available via www.electronicimaging.org and on the IS&T Digital Library (ist.publisher.ingentaconnect.com/content/ist/ei). Proceedings since the 2016 meeting are posted there.

We look forward to seeing you and welcoming you to this unique event.

—Andrew Woods and Radka Tezaur, EI2019 Symposium Co-chairs

*Arnaud Darmont passed away unexpectedly in September 2018, but was an integral part of the committee prior to his death. We are saddened by this loss. More information is found on page 82.

IS&T CORPORATE MEMBERS

SUSTAINING CORPORATE MEMBERS / SYMPOSIUM SPONSORS


SUPPORTING CORPORATE MEMBERS

DONOR CORPORATE MEMBERS

IS&T Board of Directors, July 2018 – June 2019

President: Steven Simske, Colorado State University
Executive Vice President: Scott Silence, Corning Corporation
Conference Vice President: Francisco Hideki Imai, Apple Inc.
Publications Vice President: Robin Jenkin, NVIDIA Corporation
Secretary: Dietmar Wueller, Image Engineering GmbH & Co. KG
Treasurer: Eric Hanson, retired, HP Laboratories
Vice Presidents: Susan Farnand, Rochester Institute of Technology; Jennifer Gille, Oculus VR; Liisa Hakola, VTT Research Center of Finland; Teruaki Mitsuya, Ricoh Company, Ltd.; Radka Tezaur, Intel Corporation; Michael Willis, Pivotal Resources Ltd.
Chapter Directors: Korea: Choon-Woo Kim, Inha University; Rochester: David Odgers, Odgers Imaging; Tokyo: Masahiko Fujii, Fuji Xerox Co., Ltd.
Immediate Past President: Geoff Woolfe, retired, Canon Information Systems Research Australia Pty. Ltd.
IS&T Executive Director: Suzanne E. Grinnan

Table of Contents

EI Symposium Leadership ...... 4
Symposium Overview ...... 5
Plenary Speakers ...... 7
Special Events ...... 8
Short Course Daily Schedule Chart ...... 9
Conference Keynotes ...... 10
Joint Sessions ...... 17
Paper Schedule by Day/Time ...... 23

Conferences and Papers Program by Session
3D Measurement and Data Processing 2019 ...... 36
Autonomous Vehicles and Machines 2019 ...... 38
Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications ...... 44
Computational Imaging XVII ...... 49
The Engineering Reality of Virtual Reality 2019 ...... 55
Human Vision and Electronic Imaging 2019 ...... 60
Image Processing: Algorithms and Systems XVII ...... 68
Image Quality and System Performance XVI ...... 76
Image Sensors and Imaging Systems 2019 ...... 82
Imaging and Multimedia Analytics in a Web and Mobile World 2019 ...... 88
Intelligent Robotics and Industrial Applications using Computer Vision 2019 ...... 92
Material Appearance 2019 ...... 95
Media Watermarking, Security, and Forensics 2019 ...... 100
Photography, Mobile, and Immersive Imaging 2019 ...... 106
Stereoscopic Displays and Applications XXX ...... 114
Visualization and Data Analysis 2019 ...... 120
Short Courses Detail Listing ...... 123
General Information Pages ...... 137
Author Index ...... 140

Plan Now to Participate Join us for Electronic Imaging 2020 January 26 – 31, 2020

electronicimaging.org #EI2019 3

EI SYMPOSIUM LEADERSHIP

EI 2019 Symposium Committee

Symposium Co-Chairs: Andrew Woods, Curtin University (Australia); Radka Tezaur, Intel Corporation (USA)
Short Course Co-Chairs: Jonathan B. Phillips, Google, Inc. (USA); Susan Farnand, Rochester Institute of Technology (USA); Arnaud Darmont, APHESA SPRL (Belgium)
At-large Conference Chair Representative: Adnan Alattar, Digimarc Corporation (United States)
Past Symposium Chair: Joyce Farrell, Stanford University (USA)
IS&T Executive Director: Suzanne E. Grinnan, IS&T (USA)

EI 2019 Technical Committee
Sos S. Agaian, College of Staten Island, City University of New York
David Akopian, The University of Texas at San Antonio
Adnan M. Alattar, Digimarc Corporation
Jan Allebach, Purdue University
Nicolas Bonnier, Apple Inc.
Charles A. Bouman, Purdue University
Gregery T. Buzzard, Purdue University
Damon M. Chandler, Shizuoka University
Yi-Jen Chiang, New York University
Reiner Creutzburg, Technische Hochschule Brandenburg
Arnaud Darmont, APHESA SPRL
Edward J. Delp, Purdue University
Patrick Denny, Valeo
Margaret Dolinsky, Indiana University
Karen Egiazarian, Tampere University of Technology
Reiner Eschbach, Norwegian University of Science and Technology, and Monroe Community College
Zhigang Fan, Apple Inc.
Gregg E. Favalora, Draper
Atanas Gotchev, Tampere University of Technology
Mathieu Hebert, Université Jean Monnet de Saint Etienne
Nicolas S. Holliman, Newcastle University
Robin Jenkin, NVIDIA Corporation
David L. Kao, NASA Ames Research Center
Takashi Kawai, Waseda University
Qian Lin, HP Labs, HP Inc.
Gabriel Marcu, Apple Inc.
Mark E. McCourt, North Dakota State University
Ian McDowall, Intuitive Surgical / Fakespace Labs
Jon S. McElvain, Dolby Labs, Inc.
Nasir D. Memon, New York University
Jeffrey B. Mulligan, NASA Ames Research Center
Henry Y. T. Ngan, ENPS Hong Kong
Kurt S. Niel, Upper Austria University of Applied Sciences
Arnaud Peizerat, Commissariat à l'Énergie Atomique
Stuart Perry, University of Technology Sydney
William Puech, Lab. d'Informatique de Robotique et de Microelectronique de Montpellier
Amy Reibman, Purdue University
Alessandro Rizzi, Università degli Studi di Milano
Juha Röning, University of Oulu
Nitin Sampat, Edmund Optics, Inc.
Gaurav Sharma, University of Rochester
Lionel Simonot, Institut Pprime
Robert Sitnik, Warsaw University of Technology
Robert L. Stevenson, University of Notre Dame
Ingeborg Tastl, HP Labs, HP Inc.
Ralf Widenhorn, Portland State University
Thomas Wischgoll, Wright State University
Andrew J. Woods, Curtin University
Buyue Zhang, Apple Inc.
Song Zhang, Mississippi State University
Fengqing Maggie Zhu, Purdue University

IS&T expresses its deep appreciation to the symposium chairs, conference chairs, program committee members, session chairs, and authors who generously give their time and expertise to enrich the Symposium. EI would not be possible without the dedicated contributions of our participants and members.

Sponsored by
Society for Imaging Science and Technology (IS&T)
7003 Kilworth Lane • Springfield, VA 22151
703/642-9090 / 703/642-9094 fax / [email protected] / www.imaging.org

SYMPOSIUM OVERVIEW

Engage with advances in electronic imaging
Imaging is integral to the human experience and to exciting technology advances taking shape around us: from personal photographs taken every day with mobile devices, to autonomous imaging algorithms in self-driving cars, to the mixed reality technology that underlies new forms of entertainment, and the latest in image data security. At EI 2019, leading researchers, developers, and entrepreneurs from around the world discuss, learn about, and share the latest imaging developments from industry and academia.

EI2019 Exhibitors
Exhibit Hours
Tuesday 10:00 AM – 7:00 PM
Wednesday 10:00 AM – 3:30 PM

EI 2019 includes theme day topics and virtual tracks, plenary speakers, 25 technical courses, 3D theater, and 16 conferences including cross-topic joint sessions, keynote speakers, and peer-reviewed research presentations.

Symposium Silver Sponsor

EI 2019 THEME DAY HIGHLIGHTS (follow the icons in the Schedule listing for more on these themes and other virtual tracks)

Autonomous Vehicle Imaging Monday, Jan 14 Plenary Session: Autonomous Driving Technology and the OrCam MyEye, A. Shashua (Mobileye) Symposium Session: Panel: Sensing and Perceiving for Autonomous Driving, A. Shashua (Mobileye), B. Fowler (OmniVision), C. Schroeder (Mercedes-Benz), J. Pei (Cepton), moderated by W. Zhang (General Motors) Short Course: Developing Enabling Technologies for Automated Driving, F. Iandola (Deepscale), K. Keutzer and J. Gonzalez (University of California, Berkeley)

3D Imaging Tuesday, Jan 15 Plenary Session: The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, H. Hua (University of Arizona) Symposium Session: Computational Models for Human Optics, A. Watson (Apple), J. Polans (Verily Life Sciences), J. Schwiegerling (University of Arizona), B. Barsky (University of California, Berkeley), H. Hua (University of Arizona), T. Lian (Stanford University), chaired by J. Gille (Oculus) Short Courses: Fundamentals of Deep Learning, R. Ptucha (Rochester Institute of Technology)

Using Cognitive and Behavioral Sciences and the Arts in Research and Design, M. López-González (La Petite Noiseuse Productions)

AR/VR and Light Field Imaging Wednesday, Jan 16 Plenary Session: Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, P. Debevec (Google) Symposium Session: Light Field Imaging and Display, R. Ramamoorthi (University of California, San Diego), D. Fattal, (LEIA Inc.), K. Akeley (Google), K. Pulli (Raxium), M. Hirsch (Lumii Inc.), chaired by G. Wetzstein (Stanford University) Short Courses: Build Your Own VR Display: An Introduction to VR Display Systems for Hobbyists & Educators, R. Konrad, N. Padmanaban, H. Ikoma (Stanford University)

Your research is making important contributions. Consider the Journal of Electronic Imaging as your journal of choice to publish this important work.

Karen Egiazarian, Tampere University of Technology, Finland
Editor-in-Chief

The Journal of Electronic Imaging publishes papers in all technology areas that make up the field of electronic imaging and are normally considered in the design, engineering, and applications of electronic imaging systems. The Journal of Electronic Imaging covers research and applications in all areas of electronic imaging science and technology including image acquisition, data storage, display and communication of data, visualization, processing, hard copy output, and multimedia systems.

Benefits of publishing in the Journal of Electronic Imaging:
• Wide availability to readers via the SPIE Digital Library.
• Rapid publication; each article is published when it is ready.
• Open access for articles immediately with voluntary payment of $960 per article.
• Coverage by Web of Science, Journal Citation Reports, and other relevant databases.

www.spie.org/jei

PLENARY SPEAKERS

Autonomous Driving Technology and the OrCam MyEye
Monday, January 14, 2019
2:00 – 3:00 PM
Grand Peninsula Ballroom D
Amnon Shashua, President and CEO, Mobileye, an Intel Company, and Senior Vice President of Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: sensing, planning, and mapping. Shashua will describe the challenges and the kind of computer vision and machine learning algorithms involved, but will do that through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in the area of industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title for the largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality
Tuesday, January 15, 2019
2:00 – 3:00 PM
Grand Peninsula Ballroom D
Hong Hua, Professor of Optical Sciences, University of Arizona (United States)

Hong Hua will discuss the high promises of, and the tremendous progress made recently toward, the development of head-mounted displays (HMDs) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both the digital and physical worlds without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. She will particularly focus on the recent progress, challenges, and opportunities for developing head-mounted light field displays (LF-HMDs), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict of conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With over 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award (2006) and honored as UA Researchers @ Lead Edge (2010). Hua and her students have shared a total of 8 "Best Paper" awards at various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa, was a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality
Wednesday, January 16, 2019
2:00 – 3:00 PM
Grand Peninsula Ballroom D
Paul Debevec, Senior Scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering"; a Scientific and Engineering Academy Award (2010), with Tim Hawkins, John Monos, and Mark Sagar, for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures"; and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

SPECIAL EVENTS

Monday, January 14, 2019

All-Conference Welcome Reception
The Grove
5:00 – 6:00 pm
Join colleagues for a light reception featuring beer, wine, soft drinks, and hors d'oeuvres. Make plans to enjoy dinner with old and new friends at one of the many area restaurants. Conference registration badges are required for entrance.

SD&A Conference 3D Theatre
Grand Peninsula Ballroom BC
6:00 – 7:30 pm
Hosted by Andrew J. Woods, Curtin University (Australia)
The 3D Theater Session of each year's Stereoscopic Displays and Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened in the 3D Theater Session is shown in high-quality polarized 3D on a large screen. The final program will be announced at the conference and 3D glasses will be provided.

Tuesday, January 15, 2019

Women in Electronic Imaging Breakfast
Location provided on ticket
7:15 – 8:45 am
Start the day with female colleagues and senior women scientists to share your stories and make connections at the Women in Electronic Imaging breakfast. The complimentary breakfast is open to full EI registrants. Space is limited to 40 people. Visit the onsite registration desk for more information about this special event.

Industry Exhibition
Grand Peninsula Foyer
10:00 am – 7:00 pm
EI's annual exhibit provides a unique opportunity to meet company representatives working in areas related to electronic imaging. The exhibit highlights products and services, as well as offers the opportunity to meet prospective employers.

Symposium Demonstration Session
Regency Foyer
5:30 – 7:00 pm
This symposium-wide, hands-on, interactive session, which traditionally has showcased the largest and most diverse collection of stereoscopic and electronic imaging research and products in one location, represents a unique networking opportunity. Attendees can see the latest research in action, compare commercial products, ask questions of knowledgeable demonstrators, and even make purchasing decisions about a range of electronic imaging products. The demonstration session hosts a vast collection of stereoscopic products, providing a perfect opportunity to witness a wide array of stereoscopic displays with your own two eyes.

Wednesday, January 16, 2019

Industry Exhibition
Grand Peninsula Foyer
10:00 am – 3:30 pm
EI's annual exhibit provides a unique opportunity to meet company representatives working in areas related to electronic imaging. The exhibit highlights products and services, as well as offers the opportunity to meet prospective employers.

Interactive Papers (Poster) Session
The Grove
5:30 – 7:00 pm
Conference attendees are encouraged to attend the Interactive Papers (Poster) Session, where authors display their posters and are available to answer questions and engage in in-depth discussions about their work. Please note that conference registration badges are required for entrance and that posters may be previewed by all attendees beginning on Tuesday. Authors are asked to set up their posters starting at 10:00 am on Monday. Pushpins are provided. Authors must remove poster materials at the conclusion of the Interactive Session. Posters not removed are considered unwanted and will be removed by staff and discarded. IS&T does not assume responsibility for posters left up before or after the Interactive Session.

Meet the Future: A Showcase of Student and Young Research Professionals
The Grove
5:30 – 7:00 pm
This annual event will bring invited students together with academic and industry representatives who may have opportunities to offer, and will provide each student with an opportunity to present and discuss their academic work via an interactive poster session. Student presenters expand their professional network and explore employment opportunities with the audience of academic and industry representatives.

2019 Friends of HVEI Banquet
Location provided on ticket
7:00 – 10:00 pm
This annual event brings the HVEI community together for great food, convivial conversation, and a keynote presentation given by Jacqueline C. Snow: 'WonkaVision' and the Need for a Paradigm Shift in Vision Research. Registration required, online or at the registration desk.
Jacqueline Snow is an assistant professor in the cognitive and brain sciences group in the Department of Psychology at the University of Nevada. Snow teaches the theory and practice of science, functional magnetic resonance imaging (fMRI), and clinical neuropsychology. She also heads a laboratory researching how humans recognize and make decisions about objects, particularly studying the behavioral significance of real-world 3-D objects that one can reach out and interact with, such as tools and snack foods, and how neural structures in the brain code and represent action-relevant information.

EI 2019 SHORT COURSES AT-A-GLANCE
(see Course Descriptions beginning on page 123)

[Short course daily schedule chart: courses SC01–SC25, arranged by day (Sunday, 13 January through Thursday, 17 January) and time slot. See the Short Courses detail listing beginning on page 123 for full titles, schedules, and descriptions.]

EI 2019 CONFERENCE KEYNOTES

Monday, January 14, 2019

Measurement and Evaluation of Appearance I
Session Chairs: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France), and Takuroh Sone, Ricoh Company, Ltd. (Japan)
8:50 – 9:30 am
Cypress A
MAAP-475
On the acquisition and reproduction of material appearance, Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)

Jon Yngve Hardeberg (1971) is a professor in the department of computer science at NTNU in Gjøvik. He has a MSc in signal processing from NTNU, and a PhD in signal and image processing from the Ecole Nationale Supérieure des Télécommunications in Paris, France. Hardeberg is a member of the Norwegian Colour and Visual Computing Laboratory, where he teaches, supervises graduate students, and manages international study programs and research projects. He has co-authored more than 200 publications. His research interests include multispectral colour imaging, print and image quality, colorimetric device characterization, colour management, cultural heritage imaging, and medical imaging.

AI for Reconstruction and Sensing I
9:10 – 10:10 am
Harbour AB
COIMG-125
Learning to make images, W. Clem Karl, Boston University (United States)

W. Clem Karl received his PhD in electrical engineering and computer science (1991) from the Massachusetts Institute of Technology, Cambridge, where he also received his SM, EE, and SB. He held the position of staff research scientist with the Brown-Harvard-MIT Center for Intelligent Control Systems and the MIT Laboratory for Information and Decision Systems from 1992 to 1994. He joined the faculty of Boston University in 1995, where he is currently professor of electrical and computer engineering and biomedical engineering. Karl is currently the Editor-in-Chief of the IEEE Transactions on Image Processing. He is a member of the Board of Governors of the IEEE Signal Processing Society, the Signal Processing Society Conference Board, the IEEE Transactions on Medical Imaging Steering Committee, and the Technical Committee Review Board. He co-organized two special sessions of the 2012 IEEE Statistical Signal Processing Workshop, one on Challenges in High-Dimensional Learning and one on Statistical Signal Processing and the Engineering of Materials. In 2011 he was a co-organizer of a workshop on Large Data Sets in Medical Informatics as part of the Institute for Mathematics and Its Applications Thematic Year on the Mathematics of Information. He served as an Associate Editor of the IEEE Transactions on Image Processing and was the General Chair of the 2009 IEEE International Symposium on Biomedical Imaging. He is a past member of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee and a current member of the IEEE Biomedical Image and Signal Processing Technical Committee. Karl's research interests are in the areas of multidimensional statistical signal and image processing, estimation, inverse problems, geometric estimation, and applications to problems ranging from biomedical signal and image processing to synthetic aperture radar.

Capture to Publication: Authenticating Digital Imagery
Session Chair: Nasir Memon, New York University (United States)
9:00 – 10:00 am
Cypress C
MWSF-525
From capture to publication: Authenticating digital imagery, its context, and its chain of custody, Matt Robben and Daniel DeMattia, Truepic (United States)

Matt Robben is the VP of engineering for Truepic, responsible for leading new technology development across the Truepic authenticity platform and building a world-class pool of engineering talent. Prior to Truepic, Robben helped technology groups and teams at One Medical, Dropbox, Sold. (acq. by Dropbox), and Microsoft deliver mission-critical software products to market across a variety of verticals. Robben holds a BS in computer engineering from Northwestern University.

Daniel DeMattia is the VP of security for Truepic. He is responsible for ensuring the security and integrity of Truepic, its systems, technology, and data. He brings with him more than 20 years of security experience in high-risk environments that he applies to every aspect of Truepic operations. Prior to Truepic, DeMattia was head of security at SpaceX as well as Virgin Orbit, where he helped build mission-critical security and communication systems that operate both on the ground and in space. In his early days, he acted as an independent penetration tester and advised on vulnerability assessment and incident response.

Human and Machine Perception of 3D Shapes
10:40 – 11:40 am
Grand Peninsula Ballroom A
HVEI-200
Human and machine perception of 3D shape from contour, James Elder, York University (Canada)

James Elder is a professor and York research chair in human and computer vision at York University, Toronto, Canada. He is jointly appointed to the department of psychology and the department of electrical engineering & computer science at York, and is a member of York's Centre for Vision Research (CVR) and Vision: Science to Applications (VISTA) program. He is also director of the NSERC CREATE Training Program in Data Analytics & Visualization (NSERC CREATE DAV) and principal investigator of the Intelligent Systems for Sustainable Urban Mobility (ISSUM) project. His research seeks to improve machine vision systems through a better understanding of visual processing in biological systems. Elder's current research is focused on natural scene statistics, perceptual organization, contour processing, shape perception, single-view 3D reconstruction, attentive vision systems, and machine vision systems for dynamic 3D urban awareness.

10 #EI2019 electronicimaging.org Conference Keynotes

Appearance Design and 3D Printing I

Session Chair: Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)
10:50 – 11:30 am
Cypress A
MAAP-478
Beyond printing: How to expand 3D applications through postprocessing, Isabel Sanz, HP Inc. (Spain)

Isabel Sanz received an MSc in mechanical engineering from the Technical University of Valencia (Spain) and from RWTH Aachen (Germany). Her current position is 3D printing advanced technical consultant at HP Inc. She complemented her studies with a master in project management from La Salle, in Barcelona (Spain). Her career at HP started as an R&D mechanical engineer in the HP large format printing business. After that experience, she moved into the 3D printing business, where Sanz started the benchmark printing process for Multi Jet Fusion customers. Nowadays, she is technically developing new applications and helping customers to introduce and grow 3D printing opportunities in their products and processes. She holds 9 patents and 1 publication, and she keeps looking for new and innovative ways of doing things, evangelizing the movement to additive manufacturing.

Symmetry in Vision and Image Processing

3:30 – 4:30 pm
Grand Peninsula Ballroom A
HVEI-201
The role of symmetry in vision and image processing, Zygmunt Pizlo, University of California, Irvine (United States)

Professor Zygmunt Pizlo holds the Falmagne endowed chair in mathematical psychology in the department of cognitive sciences at the University of California, Irvine. Pizlo received his MSc in electrical engineering (1978) from Politechnika, Warsaw, Poland, and his PhD in electrical engineering (1982) from the Institute of Electron Technology, Warsaw, Poland. He then decided to pursue his interests in, and passion for, natural sciences. Having already been exposed to elements of AI, he became absolutely fascinated with the possibility of studying the human mind. In 1982, he started his research on human vision at the Nencki Institute of Experimental Biology in the Polish Academy of Sciences in Warsaw, delving into visual psychophysics as the most mature branch of experimental psychology. In 1988, he moved to the University of Maryland at College Park, where he received his PhD in psychology (1991); Bob Steinman and Azriel Rosenfeld were his advisers. He was a professor of psychological sciences at Purdue University for 26 years. In 2017, he moved to UC Irvine. Pizlo's research focuses on psychophysics and computational modeling of 3D shape perception. He authored and co-authored two books on shape (MIT Press, 2008 and Oxford University Press, 2014) and co-edited a book on shape perception in human and computer vision (Springer, 2013). His interest in vision research extends to depth, motion, figure-ground, color, and eye movement, as well as image and video processing. He has also done work on human problem solving, where he adapted the multiresolution/multiscale pyramids used in visual models to solve combinatorial optimization problems such as the Traveling Salesman Problem. Most recently, he has been exploring the role that symmetry and the least-action principle can play in a theoretical formalism that can explain perception and cognition.

Color Rendering of Materials I Joint Session

Session Chair: Lionel Simonot, Université de Poitiers (France)
3:30 – 4:10 pm
Cypress A
This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.
MAAP-075
Capturing appearance in text: The Material Definition Language (MDL), Andy Kopra, NVIDIA Advanced Rendering Center (Germany)

Andy Kopra is a technical writer at the NVIDIA Advanced Rendering Center in Berlin, Germany. With more than 35 years of professional computer graphics experience, he writes and edits documentation for NVIDIA customers on a wide variety of topics. He also designs, programs, and maintains the software systems used in the production of the documentation websites and printed materials.

SD&A Keynote I

Session Chair: Andrew Woods, Curtin University (Australia)
3:50 – 4:50 pm
Grand Peninsula Ballroom BC
SD&A-658
From set to theater: Reporting on the 3D cinema business and technology roadmaps, Tony Davis, RealD Inc. (United States)

Tony Davis is the VP of technology at RealD, where he works with an outstanding team to perfect the cinema experience from set to screen. Davis has a Masters in electrical engineering from Texas Tech University, specializing in advanced signal acquisition and processing. After several years working as a technical staff member for Los Alamos National Laboratory, Davis was director of engineering for a highly successful line of medical and industrial X-ray computed tomography systems at 3M. Later, he was the founder of Tessive, a company dedicated to improvement of temporal representation in motion picture cameras.


Tuesday January 15, 2019

Production and Deployment I

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
8:50 – 9:50 am
Grand Peninsula Ballroom FG
AVM-036
AI and perception for automated driving – From concepts towards production, Wende Zhang, General Motors (United States)

Dr. Wende Zhang is currently the technical fellow on sensing systems at General Motors (GM). Since 2010, Zhang has led GM's Next Generation Perception Systems team, guiding a cross-functional global engineering and R&D team focused on identifying next generation perception systems for automated driving and active safety. He was BFO of Lidar Systems (2017) and BFO of Viewing Systems (2014-16) at GM. Zhang's research interests include perception and sensing for automated driving, pattern recognition, computer vision, artificial intelligence, security, and robotics. He established GM's development, execution, and sourcing strategy on Lidar systems and components, and transferred his research innovation into multiple industry-first applications such as Rear Camera Mirror, Redundant Lane Sensing on MY17 Cadillac Super Cruise, Video Trigger Recording on MY16 Cadillac CT6, and Front Curb Camera System on MY16 Chevrolet Corvette. Zhang was the technical lead on computer vision and the embedded researcher in the GM-CMU autonomous driving team that won the DARPA Urban Challenge in 2007. He has 75+ US patents and 35+ publications in sensing and viewing systems, and received GM's highest technical award (Boss Kettering Award) three times, in 2015, 2016, and 2017. Zhang has a doctoral degree in electrical and computer engineering from Carnegie Mellon University and an MBA from Indiana University.

High Dynamic Range Imaging I

Session Chairs: Michael Kriss, MAK Consultants (United States), and Jackson Roland, Apple Inc. (United States)
8:50 – 9:30 am
Regency AB
PMII-579
High dynamic range imaging: History, challenges, and opportunities, Greg Ward, Dolby Laboratories, Inc. (United States)

Greg Ward is a pioneer in the HDR space, having developed the first widely-used high dynamic range image file format in 1986 as part of the RADIANCE lighting simulation system. Since then, he has developed the LogLuv TIFF HDR and the JPEG-HDR image formats, and created Photosphere, an HDR image builder and browser. He has been involved with BrightSide Technology and Dolby's HDR display developments. He is currently a senior member of technical staff for research at Dolby Laboratories. He also consults for the Lawrence Berkeley National Lab on RADIANCE development, and for IRYStec, Inc. on OS-level mobile display software.

Blockchain to Transform Industries

Session Chair: Edward Delp, Purdue University (United States)
9:00 – 10:00 am
Cypress C
MWSF-533
Blockchain and smart contract to transform industries – Challenges and opportunities, Sachiko Yoshihama, IBM Research (Japan)

Dr. Sachiko Yoshihama is a senior technical staff member and senior manager at IBM Research - Tokyo. She leads a team that focuses on financial and blockchain solutions. Her research interest is to bring advanced concepts and technologies to practice and address real-world problems to transform industries. She served as a technical leader and advisor in a number of blockchain projects with clients in Japan and Asia. She joined the IBM T.J. Watson Research Center in 2001, moved to IBM Research – Tokyo in 2003, and worked on research in information security technologies, including trusted computing, information flow control, and Web security. She served as a technology innovation leader at IBM Research Global Labs HQ in Shanghai in 2012, where she helped define research strategies for developing countries. She received her PhD from Yokohama National University (2010). She is a member of ACM, a senior member of the Information Processing Society of Japan, and a member of the IBM Academy of Technology.

Image Quality Modeling II

Session Chair: Stuart Perry, University of Technology Sydney (Australia)
9:30 – 10:10 am
Grand Peninsula Ballroom E
IQSP-306
Conscious of streaming (Quality), Alan Bovik, The University of Texas at Austin (United States)

Alan Bovik is the Cockrell Family Regents Endowed Chair professor at The University of Texas at Austin. He has received many major international awards, including the 2019 IEEE Fourier Award, the 2017 Edwin H. Land Medal from IS&T/OSA, the 2015 Primetime Emmy Award for Outstanding Achievement in Engineering Development from the Academy of Television Arts and Sciences, and the 'Society' and 'Sustained Impact' Awards of the IEEE Signal Processing Society. He is a Fellow of IEEE, OSA, and SPIE. His books include The Handbook of Image and Video Processing, Modern Image Quality Assessment, and The Essential Guides to Image and Video Processing. Bovik co-founded and was the longest-serving editor-in-chief of the IEEE Transactions on Image Processing, and created the IEEE International Conference on Image Processing, first held in Austin, Texas, in November 1994.


Wednesday January 16, 2019

Camera Pipelines and Processing I

Session Chairs: Boyd Fowler, OmniVision Technologies (United States), and Francisco Imai, Apple Inc. (United States)
10:40 – 11:20 am
Regency AB
PMII-582
Unifying principles of camera processing pipeline in the rapidly changing imaging landscape, Keigo Hirakawa, University of Dayton (United States)

Keigo Hirakawa is an associate professor at the University of Dayton. Prior to UD, he was with Harvard University as a research associate in the department of statistics. He simultaneously earned his PhD in electrical and computer engineering from Cornell University and his MM in jazz performance from the New England Conservatory of Music. Hirakawa received his MS in electrical and computer engineering from Cornell University and his BS in electrical engineering from Princeton University. He is an associate editor for the IEEE Transactions on Image Processing and for the SPIE/IS&T Journal of Electronic Imaging, and served on the technical committee of IEEE SPS IVMSP as well as the organization committees of IEEE ICIP 2012 and IEEE ICASSP 2017. He has received a number of recognitions, including a paper award at IEEE ICIP 2007 and keynote speeches at IS&T CGIV, PCSJ-IMPS, CSAJ, and IAPR CCIW.

Deep Neural Net Optimization I

Session Chair: Buyue Zhang, Apple Inc. (United States)
8:50 – 9:50 am
Grand Peninsula Ballroom FG
AVM-047
Perception systems for autonomous vehicles using energy-efficient deep neural networks, Forrest Iandola, DeepScale (United States)

Forrest Iandola completed his PhD in electrical engineering and computer science at UC Berkeley, where his research focused on improving the efficiency of deep neural networks (DNNs). His best-known work includes deep learning infrastructure such as FireCaffe and deep models such as SqueezeNet and SqueezeDet. His advances in scalable training and efficient implementation of DNNs led to the founding of DeepScale, where he has been CEO since 2015. DeepScale builds vision/perception systems for automated vehicles.

Solutions to Foreign Propaganda

Session Chair: Nasir Memon, New York University (United States)
9:00 – 10:00 am
Cypress C
MWSF-538
Technology in context: Solutions to foreign propaganda and disinformation, Justin Maddox and Patricia Watts, Global Engagement Center, US State Department (United States)

Justin Maddox is an adjunct professor in the department of information sciences and technology at George Mason University. Maddox is a counterterrorism expert with specialization in emerging technology applications. He is the CEO of Inventive Insights LLC, a research and analysis consultancy. He recently served as the deputy coordinator of the interagency Global Engagement Center, where he implemented cutting-edge technologies to counter terrorist propaganda. He has led counterterrorism activities at the CIA, the State Department, DHS, and NNSA, and has been a special operations team leader in the US Army. Since 2011, Maddox has taught National Security Challenges, a graduate-level course requiring students to devise realistic solutions to key strategic threats. Maddox holds an MA from Georgetown University's national security studies program and a BA in liberal arts from St. John's College, the "great books" school. He has lived and worked in Iraq, India, and Germany, and can order a drink in Russian, Urdu, and German.

SD&A Keynote 2

Session Chair: Nicolas Holliman, University of Newcastle (United Kingdom)
11:30 am – 12:30 pm
Grand Peninsula Ballroom BC
SD&A-640
What good is imperfect 3D?, Miriam Ross, Victoria University of Wellington (New Zealand)

Dr. Miriam Ross is senior lecturer in the film programme at Victoria University of Wellington. She works with new technologies to combine creative methodologies and traditional academic analysis. She is the author of South American Cinematic Culture: Policy, Production, Distribution and Exhibition (2010) and 3D Cinema: Optical Illusions and Tactile Experiences (2015), as well as publications and creative works relating to film industries, mobile media, virtual reality, stereoscopic media, and film festivals.

Patricia Watts is currently acting chief, science and technology/cyber, in the US State Department. Watts is a skilled senior intelligence professional with extensive research experience, and brings a solid understanding of foreign operations, weaponry, and worldwide terrorism. Over a diverse career, Watts has managed the Joint Intelligence Directorate, supervising and overseeing operations of personnel in Afghanistan supporting the global war on terrorism; supervised combat maneuver training operations; aided and assisted the tactical training of more than 40,000 maneuver brigade soldiers at the US Army National Training Center; and supplied multi-national support to British, French, and US forces in an allied command in Berlin, Germany.


Camera Image Quality II

Session Chair: Peter Burns, Burns Digital Imaging (United States)
9:30 – 10:10 am
Grand Peninsula Ballroom E
IQSP-318
Benchmarking image quality for billions of images, Jonathan Phillips, Google Inc. (United States)

Jonathan Phillips is co-author of Camera Image Quality Benchmarking, a 2018 addition to the Wiley-IS&T Series in Imaging Science and Technology collection. His experience in the imaging industry spans nearly 30 years, having worked at Kodak in both chemical and electronic photography for more than 20 years, followed by image scientist positions with NVIDIA and Google. Currently, he is managing a color science team at Google responsible for the display color of the Pixel phone product line. He was awarded the International Imaging Industry Association (I3A) Achievement Award for his groundbreaking work on modeling consumer-facing camera phone image quality, which is now incorporated into the IEEE Standard for Camera Phone Image Quality. Jonathan has been project lead for numerous photography standards published by I3A, IEEE, and ISO. His graduate studies were in color science at Rochester Institute of Technology and his undergraduate studies were in chemistry and music at Wheaton College (IL).

Automotive Image Sensing I Joint Session

Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Prof. Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
10:50 am – 12:10 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

10:50 IMSE-050
Recent trends in the image sensing technologies, Vladimir Koifman, Analog Value Ltd. (Israel)

Vladimir Koifman is a founder and CTO of Analog Value Ltd. Prior to that, he was co-founder of Advasense Inc., acquired by Pixim/Sony Image Sensor Division. Prior to co-founding Advasense, Koifman co-established the AMCC analog design center in Israel and led the analog design group for three years. Before AMCC, Koifman worked for 10 years in Motorola Semiconductor Israel (Freescale), managing an analog design group. He has more than 20 years of experience in the VLSI industry and has technical leadership in analog chip design, mixed-signal chip/system architecture, and electro-optic device development. Koifman has more than 80 granted patents and several papers. Koifman also maintains the Image Sensors World blog.

11:30 AVM-051
Solid-state LiDAR sensors: The future of autonomous vehicles, Louay Eldada, Quanergy Systems, Inc. (United States)

Louay Eldada is CEO and co-founder of Quanergy Systems, Inc. Eldada is a serial entrepreneur, having founded and sold three businesses to Fortune 100 companies; Quanergy is his fourth start-up. Eldada is a technical business leader with a proven track record at both small and large companies and, with 71 patents, is a recognized expert in quantum optics, nanotechnology, photonic integrated circuits, advanced optoelectronics, sensors, and robotics. Prior to Quanergy, he was CSO of SunEdison, after serving as CTO of HelioVolt, which was acquired by SK Energy. Eldada was earlier CTO of DuPont Photonic Technologies, formed by the acquisition of Telephotonics, where he was founding CTO. His first job was at Honeywell, where he started the telecom photonics business and sold it to Corning. He studied business administration at Harvard, MIT, and Stanford, and holds a PhD in optical engineering from Columbia University.

Deep Learning I

Session Chair: Qian Lin, HP Labs, HP Inc. (United States)
10:50 – 11:50 am
Harbour AB
IMAWM-405
Deep learning in the VIPER Laboratory, Edward Delp, Purdue University (United States)

Prof. Edward Delp is the Charles William Harrison distinguished professor of electrical and computer engineering, professor of biomedical engineering, and professor of psychological sciences (courtesy) at Purdue University. Delp was born in Cincinnati, Ohio. He received his BSEE (cum laude) and MS from the University of Cincinnati, and his PhD from Purdue University. In May 2002 he received an honorary doctor of technology from the Tampere University of Technology in Tampere, Finland. In 2014 Delp received the Morrill Award from Purdue University. This award honors a faculty member's outstanding career achievements and is Purdue's highest career achievement recognition for a faculty member. The Office of the Provost gives the Morrill Award to faculty members who have excelled as teachers, researchers, and scholars, and in engagement missions. The award is named for Justin Smith Morrill, the Vermont congressman who sponsored the 1862 legislation that bears his name and allowed for the creation of land-grant colleges and universities in the United States. In 2015 Delp was named Electronic Imaging Scientist of the Year by IS&T and SPIE. The Scientist of the Year award is given annually to a member of the electronic imaging community who has demonstrated excellence and commanded the respect of his/her peers by making significant and substantial contributions to the field of electronic imaging via research, publications, and service. He was cited for his contributions to multimedia security and image and video compression. Delp is a fellow of IEEE, SPIE, IS&T, and the American Institute of Medical and Biological Engineering.


SD&A Keynote 3

Session Chair: Andrew Woods, Curtin University (Australia)
11:30 am – 12:40 pm
Grand Peninsula Ballroom BC
SD&A-653
Beads of reality drip from pinpricks in space, Mark Bolas, Microsoft Corporation (United States)

Mark Bolas loves perceiving and creating synthesized experiences: to feel, hear, and touch experiences impossible in reality and yet grounded as designs that bring pleasure, meaning, and a state of flow. His work with Ian McDowall, Eric Lorimer, and David Eggleston at Fakespace Labs; Scott Fisher and Perry Hoberman at USC's School of Cinematic Arts; the team at USC's Institute for Creative Technologies; Niko Bolas at SonicBox; and Frank Wyatt, Dick Moore, and Marc Dolson at UCSD informed results that led to his receipt of both the IEEE Virtual Reality Technical Achievement and Career Awards. See more at https://en.wikipedia.org/wiki/Mark_Bolas.

HVEI Banquet and Speaker: Dr. Jacqueline C. Snow

7:00 – 10:00 pm
Offsite - details provided on ticket

Join us for a wonderful evening of conversations, a banquet dinner, and an enlightening speaker. This banquet is associated with the Human Vision and Electronic Imaging Conference (HVEI), but everyone interested in research at the intersection of human perception/cognition, imaging technologies, and art is welcome. We'll convene over a family-style meal at a local Lebanese/Middle Eastern restaurant.

HVEI-221
'WonkaVision' and the need for a paradigm shift in vision research, Jacqueline Snow, University of Nevada at Reno (United States)

Jacqueline Snow joined the cognitive and brain sciences group in the department of psychology at the University of Nevada, Reno in fall 2013. She completed her graduate training in clinical neuropsychology and cognitive neuroscience at the University of Melbourne, Australia, under the supervision of Professor Jason Mattingley. Snow completed two years of post-doctoral research in the United Kingdom working with Professor Glyn Humphreys of the University of Birmingham. During this time, she developed a strong interest in functional magnetic resonance imaging (fMRI). She subsequently moved to Canada, where she completed a further five years of post-doctoral research in the laboratories of Professors Jody Culham and Melvyn Goodale at the University of Western Ontario. During this time, she developed a range of special fMRI techniques to study how objects are represented in the human brain. Now an assistant professor at the University of Nevada, Reno, Snow teaches undergraduate psychology students about the theory and practice of science, and graduate student seminars in functional magnetic resonance imaging (fMRI) and clinical neuropsychology. She also heads a research laboratory that consists of four doctoral students and a group of Honors Program students and undergraduate trainees. Together, they examine how humans recognize and make decisions about objects. They are particularly interested in studying the behavioral significance of real-world 3-D objects that one can reach out and interact with, such as tools and snack foods, and how neural structures in the brain code and represent action-relevant information. Other research topics include how object information is integrated across sensory modalities, such as vision and touch. They use a range of methodological approaches, including fMRI, psychophysics, and the study of neuropsychological patients with brain damage. The lab is supported by a pilot project grant from the Center of Biomedical Research Excellence (COBRE).

Thursday January 17, 2019

Technology and Sensor Design I

Session Chair: Arnaud Peizerat, CEA (France)
8:50 – 9:30 am
Regency C
IMSE-364
How CIS pixels moved from standard CMOS process to semiconductor process flavors even more dedicated than CCD ever was, Martin Waeny, TechnologiesMW (Switzerland)

Martin Waeny graduated in microelectronics from IMT Neuchâtel (1997). In 1998 he worked on CMOS image sensors at IMEC. In 1999 he joined CSEM as a PhD student in the field of digital CMOS image sensors. In 2000 he won the Vision prize for the invention of the LINLOG technology, and in 2001 the Photonics Circle of Excellence Award of SPIE. In 2001 he co-founded Photonfocus AG. In 2004 he founded AWAIBA Lda, a design house and supplier of specialty area and linescan image sensors and miniature wafer-level camera modules for medical endoscopy. AWAIBA merged into CMOSIS (www.cmosis.com) in 2014 and into AMS (www.ams.com) in 2015. At AMS, Waeny served as a member of the CIS technology office and acted as director of marketing for the micro camera modules. Since 2017 he has been CEO of TechnologiesMW, an independent consulting company. Waeny was a member of the founding board of EMVA, the European Machine Vision Association, and of the 1288 vision standard working group. His research interests are in miniaturized optoelectronic modules and application systems of such modules, 2D and 3D imaging and image sensors, and the use of computer vision in emerging application areas.

Data Visualization and Displays

Session Chair: David Kao, NASA Ames Research Center (United States)
8:50 – 9:30 am
Harbour B
VDA-675
Data visualization using large-format display systems, Thomas Wischgoll, Wright State University (United States)

Professor Thomas Wischgoll is the director of visualization research and professor in the computer science & engineering department at Wright State University. Wischgoll received his PhD in computer science from the University of Kaiserslautern (2002), and was a post-doctoral researcher at the University of California, Irvine from 2003 through 2005. The Advanced Visual Data Analysis (AViDA) group at Wright State is devoted to research and support of the community in the areas of scientific visualization, medical imaging and visualization, virtual environments, information visualization and analysis, big data analysis, and data science. The AViDA group runs and supports the Appenzeller Visualization Laboratory, a state-of-the-art visualization facility that supports large-scale visualization and fully immersive virtual reality equipment. The Appenzeller Visualization Laboratory provides access to cutting-edge visualization technology and equipment, including a traditional CAVE-type setup as well as other fully immersive display environments.

Color and Spectral Imaging

Session Chair: Ralf Widenhorn, Portland State University (United States)
11:40 am – 12:20 pm
Regency C
IMSE-370
The new effort for hyperspectral standardization - IEEE P4001, Christopher Durell, Labsphere, Inc. (United States)

Christopher Durell holds a BSEE and an MBA and has worked for Labsphere, Inc. in many executive capacities. He is currently leading business development for remote sensing technology. He has led product development efforts in optical systems, light measurement, and remote sensing systems for more than two decades. He is a member of SPIE, IEEE, IES, ASTM, CIE, CORM, and ICDM, and is a participant in CEOS/IVOS, QA4EO, and other remote sensing groups. As of early 2018, Durell accepted the chair position on the new IEEE P4001 Hyperspectral Standards Working Group.

JOINT SESSIONS

Monday January 14, 2019

Automotive Image Quality Joint Session

Session Chairs: Patrick Denny, Valeo (Ireland); Stuart Perry, University of Technology Sydney (Australia); and Peter van Beek, Intel Corporation (United States)
8:50 – 10:10 am
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, and Image Quality and System Performance XVI.

8:50 AVM-026
Updates on the progress of IEEE P2020 Automotive Imaging Standards Working Group, Robin Jenkin, NVIDIA Corporation (United States)

9:10 AVM-027
Signal detection theory and automotive imaging, Paul Kane, ON Semiconductor (United States)

9:30 AVM-029
Digital camera characterisation for autonomous vehicles applications, Paola Iacomussi and Giuseppe Rossi, INRIM (Italy)

9:50 AVM-030
Contrast detection probability - Implementation and use cases, Uwe Artmann1, Marc Geese2, and Max Gäde1; 1Image Engineering GmbH & Co KG and 2Robert Bosch GmbH (Germany)

Panel: Sensing and Perceiving for Autonomous Driving Joint Session

3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

Moderator: Dr. Wende Zhang, technical fellow, General Motors

Panelists:
Dr. Amnon Shashua, professor of computer science, Hebrew University; president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation
Dr. Boyd Fowler, CTO, OmniVision Technologies
Dr. Christoph Schroeder, head of autonomous driving N.A., Mercedes-Benz R&D Development North America, Inc.
Dr. Jun Pei, CEO and co-founder, Cepton Technologies Inc.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar, and lidar. The panelists will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

Color Rendering of Materials I Joint Session

Session Chair: Lionel Simonot, Université de Poitiers (France)
3:30 – 4:10 pm
Cypress A
This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.

MAAP-075
KEYNOTE: Capturing appearance in text: The Material Definition Language (MDL), Andy Kopra, NVIDIA Advanced Rendering Center (Germany)

Andy Kopra is a technical writer at the NVIDIA Advanced Rendering Center in Berlin, Germany. With more than 35 years of professional computer graphics experience, he writes and edits documentation for NVIDIA customers on a wide variety of topics. He also designs, programs, and maintains the software systems used in the production of the documentation websites and printed materials.

Color Rendering of Materials II Joint Session

Session Chair: Lionel Simonot, Université de Poitiers (France)
4:10 – 4:50 pm
Cypress A
This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.

4:10 COLOR-076
Real-time accurate rendering of color and texture of car coatings, Eric Kirchner1, Ivo Lans1, Pim Koeckhoven1, Khalil Huraibat2, Francisco Martinez-Verdu2, Esther Perales2, Alejandro Ferrero3, and Joaquin Campos3; 1AkzoNobel (the Netherlands), 2University of Alicante (Spain), and 3CSIC (Spain)

4:30 COLOR-077
Recreating Van Gogh's original colors on museum displays, Eric Kirchner1, Muriel Geldof2, Ella Hendriks3, Art Ness Proano Gaibor2, Koen Janssens4, John Delaney5, Ivo Lans1, Frank Ligterink2, Luc Megens2, Teio Meedendorp6, and Kathrin Pilz6; 1AkzoNobel (the Netherlands), 2RCE (the Netherlands), 3University of Amsterdam (the Netherlands), 4University of Antwerp (Belgium), 5National Gallery (United States), and 6Van Gogh Museum (the Netherlands)

electronicimaging.org #EI2019 — Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary.

Tuesday January 15, 2019

Material Appearance Perception (Joint Session)
Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States)
9:10 – 10:10 am
Grand Peninsula Ballroom D
This session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Material Appearance 2019.

9:10 MAAP-202 Material appearance: Ordering and clustering, Davit Gigilashvili, Jean-Baptiste Thomas, Marius Pedersen, and Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)
9:30 MAAP-203 A novel translucency classification for computer graphics, Morgane Gerardin1, Lionel Simonot2, Jean-Philippe Farrugia3, Jean-Claude Iehl3, Thierry Fournel4, and Mathieu Hebert4; 1Institut d'Optique Graduate School, 2Université de Poitiers, 3LIRIS, and 4Université Jean Monnet de Saint Etienne (France)
9:50 MAAP-204 Constructing glossiness perception model of computer graphics with sounds, Takumi Nakamura, Keita Hirai, and Takahiko Horiuchi, Chiba University (Japan)

Computational Models for Human Optics (Joint Session)
Session Chair: Jennifer Gille, Oculus VR (United States)
3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-704 Eye model implementation (Invited), Andrew Watson, Apple Inc. (United States)
Dr. Andrew Watson is the chief vision scientist at Apple Inc., where he specializes in vision science, psychophysics, display human factors, visual human factors, computational modeling of vision, and image and video compression. For thirty-four years prior to joining Apple, Watson was the senior scientist for vision research at NASA. Watson received his PhD in psychology from the University of Pennsylvania (1977) and followed that with post doc work in vision at the University of Cambridge.

3:50 EISS-700 Wide field-of-view optical model of the human eye (Invited), James Polans, Verily Life Sciences (United States)
Dr. James Polans is an engineer who works on surgical robotics at Verily Life Sciences in South San Francisco. Polans received his PhD in biomedical engineering from Duke University under the mentorship of Joseph Izatt. His doctoral work explored the design and development of wide field-of-view optical coherence tomography systems for retinal imaging. He also has a MS in electrical engineering from the University of Illinois at Urbana-Champaign.

4:10 EISS-702 Evolution of the Arizona Eye Model (Invited), Jim Schwiegerling, University of Arizona (United States)
Prof. Jim Schwiegerling is a professor in the College of Optical Sciences at the University of Arizona. His research interests include the design of ophthalmic systems such as corneal topographers, ocular wavefront sensors, and retinal imaging systems. In addition to these systems, Schwiegerling has designed a variety of multifocal intraocular and contact lenses and has expertise in diffractive and extended depth of focus systems.

4:30 EISS-705 Berkeley Eye Model (Invited), Brian Barsky, University of California, Berkeley (United States)
Prof. Brian Barsky is professor of computer science and affiliate professor of optometry and vision science at UC Berkeley. He attended McGill University, Montréal, where he received a DCS in engineering and a BSc in mathematics and computer science. He studied computer graphics and computer science at Cornell University, Ithaca, where he earned an MS. His PhD is in computer science from the University of Utah, Salt Lake City. He is a fellow of the American Academy of Optometry. His research interests include computer aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation.

Visualization Facilities (Joint Session)
Session Chairs: Margaret Dolinsky, Indiana University (United States) and Björn Sommer, University of Konstanz (Germany)
3:30 – 5:10 pm
Grand Peninsula Ballroom BC
This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

3:30 SD&A-641 Tiled stereoscopic 3D display wall – Concept, applications and evaluation, Björn Sommer, Alexandra Diehl, Karsten Klein, Philipp Meschenmoser, David Weber, Michael Aichem, Daniel Keim, and Falk Schreiber, University of Konstanz (Germany)
3:50 SD&A-642 The quality of stereo disparity in the polar regions of a stereo panorama, Daniel Sandin1,2, Haoyu Wang3, Alexander Guo1, Ahmad Atra1, Dick Ainsworth4, Maxine Brown3, and Tom DeFanti2; 1Electronic Visualization Lab (EVL), University of Illinois at Chicago, 2California Institute for Telecommunications and Information Technology (Calit2), University of California, San Diego, 3University of Illinois at Chicago, and 4Ainsworth & Partners, Inc. (United States)
4:10 SD&A-644 Opening a 3-D museum - A case study of 3-D SPACE, Eric Kurland, 3-D SPACE (United States)
4:30 SD&A-645 State of the art of multi-user virtual reality display systems, Juan Munoz Arango, Dirk Reiners, and Carolina Cruz-Neira, University of Arkansas at Little Rock (United States)
4:50 SD&A-646 StarCAM - A 16K stereo panoramic video camera with a novel parallel interleaved arrangement of sensors, Dominique Meyer1, Daniel Sandin2, Christopher McFarland1, Eric Lo1, Gregory Dawe1, Haoyu Wang2, Ji Dai1, Maxine Brown2, Truong Nguyen1, Harlyn Baker3, Falko Kuester1, and Tom DeFanti1; 1University of California, San Diego, 2University of Illinois at Chicago, and 3EPIImaging, LLC (United States)

Computational Models for Human Optics (Joint Session), continued

4:50 EISS-701 Modeling retinal image formation for light field displays (Invited), Hekun Huang, Mohan Xu, and Hong Hua, University of Arizona (United States)
Prof. Hong Hua is a professor of optical sciences at the University of Arizona. With over 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published over 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE fellow and OSA senior member, and was a recipient of an NSF Career Award in 2006. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa, was a Beckman research fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

5:10 EISS-703 Ray-tracing 3D spectral scenes through human optics (Invited), Trisha Lian, Kevin MacKenzie, and Brian Wandell, Stanford University (United States)
Trisha Lian is an electrical engineering PhD student at Stanford University. Before Stanford, she received her bachelor's in biomedical engineering from Duke University. She is currently advised by Professor Brian Wandell and works on interdisciplinary topics that involve image systems simulations. These range from novel camera designs to simulations of the human visual system.

Wednesday January 16, 2019

360, 3D, and VR (Joint Session)
Session Chairs: Neil Dodgson, Victoria University of Wellington (New Zealand) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
8:50 – 10:10 am
Grand Peninsula Ballroom BC
This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

8:50 SD&A-647 Enhanced head-mounted eye tracking data analysis using super-resolution, Qianwen Wan1, Aleksandra Kaszowska1, Karen Panetta1, Holly Taylor1, and Sos Agaian2; 1Tufts University and 2CUNY/ The College of Staten Island (United States)
9:10 SD&A-648 Effects of binocular parallax in 360-degree VR images on viewing behavior, Yoshihiro Banchi, Keisuke Yoshikawa, and Takashi Kawai, Waseda University (Japan)
9:30 SD&A-649 Visual quality in VR head mounted device: Lessons learned with StarVR headset, Bernard Mendiburu, Starbreeze (United States)
9:50 SD&A-650 Time course of sickness symptoms with HMD viewing of 360-degree videos (JIST-first), Jukka Häkkinen1, Fumiya Ohta2, and Takashi Kawai2; 1University of Helsinki (Finland) and 2Waseda University (Japan)

Medical Imaging - Camera Systems (Joint Session)
Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Ralf Widenhorn, Portland State University (United States)
8:50 – 10:30 am
Grand Peninsula Ballroom D
This medical imaging session is jointly sponsored by: Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

8:50 PMII-350 Plenoptic medical cameras (Invited), Liang Gao, University of Illinois Urbana-Champaign (United States)
9:10 PMII-351 Simulating a multispectral imaging system for oral cancer screening (Invited), Joyce Farrell, Stanford University (United States)
9:30 PMII-352 Imaging the body with miniature cameras, towards portable healthcare (Invited), Ofer Levi, University of Toronto (Canada)
9:50 PMII-353 Self-calibrated surface acquisition for integrated positioning verification in medical applications, Sven Jörissen1, Michael Bleier2, and Andreas Nüchter1; 1University of Wuerzburg and 2Zentrum für Telematik e.V. (Germany)


10:10 IMSE-354 Measurement and suppression of multipath effect in time-of-flight depth imaging for endoscopic applications, Ryota Miyagi1, Yuta Murakami1, Keiichiro Kagawa1, Hajime Ngahara2, Kenji Kawashima3, Keita Yasutomi1, and Shoji Kawahito1; 1Shizuoka University, 2Osaka University, and 3Tokyo Medical and Dental University (Japan)

Automotive Image Sensing I (Joint Session)
Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
10:50 am – 12:10 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

10:50 IMSE-050 KEYNOTE: Recent trends in the image sensing technologies, Vladimir Koifman, Analog Value Ltd. (Israel)
Vladimir Koifman is a founder and CTO of Analog Value Ltd. Prior to that, he was co-founder of Advasense Inc., acquired by Pixim/Sony Image Sensor Division. Prior to co-founding Advasense, Koifman co-established the AMCC analog design center in Israel and led the analog design group for three years. Before AMCC, Koifman worked for 10 years in Motorola Semiconductor Israel (Freescale) managing an analog design group. He has more than 20 years of experience in the VLSI industry and has technical leadership in analog chip design, mixed signal chip/system architecture and electro-optic device development. Koifman has more than 80 granted patents and several papers. Koifman also maintains the Image Sensors World blog.

11:30 AVM-051 KEYNOTE: Solid-state LiDAR sensors: The future of autonomous vehicles, Louay Eldada, Quanergy Systems, Inc. (United States)
Louay Eldada is CEO and co-founder of Quanergy Systems, Inc. Eldada is a serial entrepreneur, having founded and sold three businesses to Fortune 100 companies. Quanergy is his fourth start-up. Eldada is a technical business leader with a proven track record at both small and large companies and, with 71 patents, is a recognized expert in quantum optics, nanotechnology, photonic integrated circuits, advanced optoelectronics, sensors and robotics. Prior to Quanergy, he was CSO of SunEdison, after serving as CTO of HelioVolt, which was acquired by SK Energy. Eldada was earlier CTO of DuPont Photonic Technologies, formed by the acquisition of Telephotonics where he was founding CTO. His first job was at Honeywell, where he started the Telecom Photonics business and sold it to Corning. He studied business administration at Harvard, MIT and Stanford, and holds a PhD in optical engineering from Columbia University.

Automotive Image Sensing II (Joint Session)
Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
12:10 – 12:50 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

12:10 PMII-052 Driving, the future – The automotive imaging revolution (Invited), Patrick Denny, Valeo (Ireland)
12:30 AVM-053 A system for generating complex physically accurate sensor images for automotive applications, Zhenyi Liu1,2, Minghao Shen1, Jiaqi Zhang3, Shuangting Liu3, Henryk Blasinski2, Trisha Lian2, and Brian Wandell2; 1Jilin University (China), 2Stanford University (United States), and 3Beihang University (China)

Light Field Imaging and Display (Joint Session)
Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-706 Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)
Prof. Ravi Ramamoorthi is the Ronald L. Graham professor of computer science, and director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science (in 2002) from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision, awarded Dec. 2017, and an IEEE Fellow for contributions to foundations of computer graphics and computer vision, awarded Jan. 2017.

4:10 EISS-707 The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)
Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University (2005). Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and lightfields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside new display technology LEIA Inc. is developing Leia Loft™ — a whole new canvas.

4:30 EISS-708 Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)
Dr. Kurt Akeley is a distinguished engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University (2004), where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.

4:50 EISS-709 Quest for immersion (Invited), Kari Pulli, stealth startup (United States)
Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPU's in hardware and software. Prior, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Pulli holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic.Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, University of Oulu, and MIT.

5:10 EISS-710 Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)
Dr. Matthew Hirsch is a co-founder and chief technical officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Hirsch received his bachelors from Tufts University in computer engineering, and his Masters and Doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an imaging engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Hirsch has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.

Immersive QoE (Joint Session)
Session Chair: Stuart Perry, University of Technology Sydney (Australia)
3:30 – 5:10 pm
Grand Peninsula Ballroom A
This session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Image Quality and System Performance XVI.

3:30 HVEI-216 Complexity measurement and characterization of 360-degree content, Francesca De Simone1, Jesús Gutiérrez2, and Patrick Le Callet2; 1CWI (the Netherlands) and 2Université de Nantes (France)
3:50 HVEI-217 Using 360 VR video to improve the learning experience in veterinary medicine university degree, Esther Guervós1, Jaime Jesús Ruiz2, Pablo Perez2, Juan Alberto Muñoz2, César Díaz3, and Narciso Garcia3; 1Universidad Alfonso X El Sabio, 2Nokia Bell Labs, and 3Universidad Politécnica de Madrid (Spain)

4:10 HVEI-218 Quality of Experience of visual-haptic interaction in a virtual reality simulator, Kjell Brunnström1,2, Elijs Dima2, Mattias Andersson2, Mårten Sjöström2, Tahir Qureshi3, and Mathias Johanson4; 1RISE Acreo AB, 2Mid Sweden University, 3HIAB AB, and 4Alkit Communications AB (Sweden)
4:30 HVEI-219 Impacts of internal HMD playback processing on subjective quality perception, Frank Hofmeyer, Stephan Fremerey, Thaden Cohrs, and Alexander Raake, Technische Universität Ilmenau (Germany)
4:50 IQSP-220 Are people pixel-peeping 360° videos?, Stephan Fremerey1, Rachel Huang2, and Alexander Raake1; 1Technische Universität Ilmenau (Germany) and 2Huawei Technologies Co., Ltd. (China)

Thursday January 17, 2019

Medical Imaging - Computational (Joint Session)
8:50 – 10:10 am
Grand Peninsula Ballroom A
This medical imaging session is jointly sponsored by: Computational Imaging XVII, Human Vision and Electronic Imaging 2019, and Imaging and Multimedia Analytics in a Web and Mobile World 2019.

8:50 IMAWM-145 Smart fetal care, Jane You1, Qin Li2, Zhenhua Guo3, Qiaozhu Chen4, and Hongbo Yang5; 1The Hong Kong Polytechnic University (Hong Kong), 2Shenzhen Institute of Information Technology (China), 3Tsinghua University (China), 4Guangzhou Women and Children Medical Center (China), and 5Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences (China)
9:10 COIMG-146 Self-contained, passive, non-contact, photoplethysmography: Real-time extraction of heart rates from live view within a Canon Powershot, Henry Dietz, Chadwick Parrish, and Kevin Donohue, University of Kentucky (United States)
9:30 COIMG-147 Edge-preserving total variation regularization for dual-energy CT images, Sandamali Devadithya and David Castañón, Boston University (United States)
9:50 COIMG-148 Fully automated dental panoramic radiograph by using internal mandible curves of dental volumetric CT, Sanghun Lee1, Joonwoo Lee1, Jaejun Seo2, Seongyoun Woo1, and Chulhee Lee1; 1Yonsei University and 2Dio Implant (Republic of Korea)


Imaging Systems (Joint Session)
Session Chairs: Atanas Gotchev, Tampere University of Technology (Finland) and Michael Kriss, MAK Consultants (United States)
8:50 – 10:10 am
Regency B
This session is jointly sponsored by: Image Processing: Algorithms and Systems XVII, and Photography, Mobile, and Immersive Imaging 2019.

8:50 PMII-278 EDICT: Embedded and distributed intelligent capture technology (Invited), Scott Campbell, Timothy Macmillan, and Katsuri Rangam, Area4 Professional Design Services (United States)
9:10 IPAS-279 Modeling lens optics and rendering virtual views from fisheye imagery, Filipe Gama, Mihail Georgiev, and Atanas Gotchev, Tampere University of Technology (Finland)
9:30 PMII-280 Digital distortion correction to measure spatial resolution from cameras with wide-angle lenses, Brian Rodricks1 and Yi Zhang2; 1SensorSpace, LLC and 2Facebook Inc. (United States)
9:50 IPAS-281 LiDAR assisted large-scale protection in street view cycloramas, Clint Sebastian1, Bas Boom2, Egor Bondarev1, and Peter de With1; 1Eindhoven University of Technology and 2CycloMedia Technology B.V. (the Netherlands)

Medical Imaging - Perception II (Joint Session)
Session Chair: Sos Agaian, CUNY/ The College of Staten Island (United States)
10:50 am – 12:10 pm
Grand Peninsula Ballroom A
This medical imaging session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Image Processing: Algorithms and Systems XVII.

10:50 IPAS-222 Specular reflection detection algorithm for endoscopic images, Viacheslav Voronin1, Evgeny Semenishchev1, and Sos Agaian2; 1Don State Technical University (Russian Federation) and 2CUNY/ The College of Staten Island (United States)
11:10 IPAS-223 Feedback alfa-rooting algorithm for medical image enhancement, Viacheslav Voronin1, Evgeny Semenishchev1, and Sos Agaian2; 1Don State Technical University (Russian Federation) and 2CUNY/ The College of Staten Island (United States)
11:30 HVEI-224 Observer classification images and efficiency in 2D and 3D search tasks (Invited), Craig Abbey, Miguel Lago, and Miguel Eckstein, University of California, Santa Barbara (United States)
11:50 HVEI-226 Image recognition depends largely on variety (Invited), Tamara Haygood1, Christina Thomas2, Tara Sagebiel2, Diana Palacio2, Myrna Godoy2, and Karla Evans1; 1University of York (United Kingdom) and 2UT M.D. Anderson Cancer Center (United States)


Paper Schedule by Day/Time


Key:
EI2019 Theme: Autonomous Imaging (Autonomous Car by Effach from the Noun Project)
EI2019 Theme: 3D Imaging (3D by Adrien Coquet from the Noun Project)
EI2019 Theme: AR/VR & Light Field (Virtual Reality Head by Milan Gladiš from the Noun Project)
EI2019 Virtual Track: Medical Imaging (X-Ray by iconsmind.com from the Noun Project)
EI2019 Virtual Track: Deep Learning (Deep Learning by Noura Mbarki from the Noun Project)

Monday, January 14, 2019

8:50 am
AVM-026 Updates on the progress of IEEE P2020 Automotive Imaging Standards Working Group (Jenkin), Grand Peninsula Ballroom D
IPAS-250 JIST-first: Additive spatially correlated noise suppression by robust block matching and adaptive 3D filtering (Rubel), Regency C
MAAP-475 Keynote: On the acquisition and reproduction of material appearance (Hardeberg), Cypress A
SD&A-625 Invited: 3D image processing - From capture to display (Fujii), Grand Peninsula Ballroom BC

9:00 am
MAAP-477 Comparative analysis of transmittance measurement geometries and apparatus (Shahpaski), Cypress A
MWSF-525 Keynote: From capture to publication: Authenticating digital imagery, its context, and its chain of custody (Robben), Cypress C

9:10 am
AVM-027 Signal detection theory and automotive imaging (Kane), Grand Peninsula Ballroom D
COIMG-125 Keynote: Learning to make images (Karl), Harbour AB
IPAS-251 A snowfall noise elimination using moving object compositing method adaptable to natural boundary (Sato), Regency C
SD&A-626 Invited: 3D TV based on spatial imaging (Kawakita), Grand Peninsula Ballroom BC

9:30 am
AVM-029 Digital camera characterisation for autonomous vehicles applications (Iacomussi), Grand Peninsula Ballroom D
IPAS-252 Patch-based image despeckling using low-rank Hankel matrix approach with speckle level estimation (Kim), Regency C
MAAP-476 Evaluation of sparkle impression considering observation distance (Watanabe), Cypress A
SD&A-627 Invited: Stereoscopic capture and viewing parameters: Geometry and perception (Allison), Grand Peninsula Ballroom BC

9:50 am
AVM-030 Contrast detection probability - Implementation and use cases (Geese), Grand Peninsula Ballroom D

10:30 am
IPAS-253 Invited: Leveraging training data in computational image reconstruction (Gilton), Regency C
MWSF-526 Printed image watermarking with synchronization using direct binary search (Xu), Cypress C
PMII-575 Invited: Expanding the impact of deep learning (Ptucha), Regency AB

10:40 am
HVEI-200 Keynote: Human and machine perception of 3D shape from contour (Elder), Grand Peninsula Ballroom A
IPAS-254 Invited: General Adaptive Neighborhood Image Processing (GANIP) (Debayle), Regency C

10:50 am
AVM-031 Hyperspectral shadow detection for semantic road scene analysis (Winkens), Grand Peninsula Ballroom FG
COIMG-126 Invited: Light field image reconstruction with generative adversarial networks (Santos-Villalobos), Harbour AB
MAAP-478 Keynote: Beyond printing: How to expand 3D applications through postprocessing (Sanz), Cypress A
SD&A-628 A Full-HD super-multiview display with a deep viewing zone (Kakeya), Grand Peninsula Ballroom BC

10:55 am
MWSF-527 Hiding in plain sight: Enabling the vision of signal rich art (Kamath), Cypress C

electronicimaging.org #EI2019 23 Grand Peninsula Ballroom E Cypress C Regency C Grand Peninsula Ballroom D Grand Peninsula Ballroom A Cypress A Grand Peninsula Grand Peninsula Ballroom FG Harbour AB Regency C Cypress A Grand Peninsula Ballroom BC Grand Peninsula Ballroom FG Harbour AB Regency C Cypress A Grand Peninsula Ballroom BC Deep Learning #EI2019 electronicimaging.org Medical Imaging Detection of streaks on printed pages (Zhang) Keynote: Capturing appearance in text: The Material Definition Language (MDL) (Kopra) Deep learning methods for event verification and image repurposing detection (Flenner) Phase extraction from interferogram using machine learning (Kando) Plenary: driving Autonomous technology and the OrCam MyEye (Shashua) Keynote: The role of symmetry in vision and image processing (Pizlo) From stixels to asteroids: A collision to asteroids: A collision From stixels warning vision system using stereo (Sanberg) learningInvited: Joint direct deep for one-sided ultrasonic non-destructive evaluation (Almansouri) filter using Enhanced guided image trilateral kernel for disparity error correction (Ho) post- Improving aesthetics through partsprocessing for 3D printed (Ju) Thin form-factor super multiview head-up display system (Akpinar) An autonomous drone surveillance and tracking architecture (Zenou) Invited: Modeling long range features from serial section imagery of continuous fiber reinforced com- posites (Sherman) Phase masks optimization for broadband diffractive imaging (Egiazarian) Invited: Refractive index of inks and colored gloss (Simonot) Dynamic multi-view autostereoscopy (Jiao) IQSP-300 MAAP-075 MWSF-530 12:30 pm IPAS-259 2:00 pm EISS-711 3:30 pm HVEI-201 11:50 am AVM-034 COIMG-129 IPAS-257 MAAP-480 SD&A-631 12:10 pm AVM-035 COIMG-130 IPAS-258 MAAP-481 SD&A-632 AR/VR & Light Field Grand Peninsula Ballroom A Regency AB Cypress C Grand Peninsula Ballroom FG Harbour AB Regency C Cypress A Grand Peninsula Ballroom BC Cypress C Regency AB 
Regency AB Grand Peninsula Ballroom FG Harbour AB Regency C Grand Peninsula Ballroom BC 3D Imaging

Forensic reconstruction of severely degraded license plates (Lorch) Invited: Do different radiologists perceive medical images the same way? Some insights from Represen- tational Similarity Analysis (Hegde) Face skin tone adaptive automatic exposure control (El-Yamany) Real-time traffic sign recognition using deep network for embedded platforms (Nagpal) Invited: 4D reconstruction using consensus equilibrium (Majee) Image stitching by creating a virtual depth (Eid) A soft-proofing workflow for color 3D printing - Addressing needs for the future (Tastl) Electro-holographic light field projector AOMs, modules: progress in SAW illumination, and packaging (Favalora) How re-training process affect the performance of no-reference image quality metric for face images (Charrier) Autofocus by deep reinforcement learning data (Chen) of phase Invited: Towards combining domain combining domain Invited: Towards and deep learningknowledge for (Gallo) computational imaging stereo Integration of advanced perspectively obstacle detection with correct surround views (Fuchs) with an Invited: Multi-target tracking and the event-based vision sensor GMPHD filter (Foster) alge- Gradient management and single image braic reconstruction for Dominguez) super resolution (Ochoa A 360-degrees holographic true 3D display unit using a Fresnel phase plate (Onural) Autonomous Imaging

PAPER SCHEDULE BY DAY/TIME

Times and paper codes (as listed): 11:45 am, MWSF-529, PMII-578, HVEI-225, 11:40 am, COIMG-128, IPAS-256, MAAP-479, SD&A-630, 11:30 am, AVM-033, MWSF-528, PMII-577, 11:20 am, AVM-032, COIMG-127, IPAS-255, SD&A-629, 11:00 am, PMII-576

Tuesday, 15 January 2019

Times and paper codes (as listed): IQSP-304, IPAS-260, COLOR-078, COIMG-131, AVM-036, 8:50 am, IQSP-303, COLOR-077, 4:30 pm, MWSF-532, 4:20 pm, IQSP-302, COLOR-076, 4:10 pm, MWSF-531, 3:55 pm, SD&A-658, IQSP-301, 3:50 pm, SD&A-633

A referenceless image quality assessment based on BSIF, CLBP, LPQ, and LCP texture descriptors (Farias)
Invited: Combined no-reference IQA metric and its performance analysis (Egiazarian)
Development of a color appearance model with embedded uniform color space (Safdar)
Simultaneous denoising and deblurring for full-field tomography (Ching)
Keynote: AI and perception for automated driving – From concepts towards production (Zhang)
Blockwise detection of local defects on printed pages (Xiang)
Recreating Van Gogh's original colors on museum displays (Kirchner)
Detecting GAN generated fake images using co-occurrence matrices (Flenner)
Banding estimation for print quality (Huang)
Real-time accurate rendering of color and texture of car coatings (Kirchner)
Dictionary learning and sparse coding for digital image forgery detection (Aloraini)
Keynote: From set to theater: Report on the 3D cinema business and technology roadmaps (Davis)
Segmentation-based detection of local defects on printed pages (Chen)
Spirolactam rhodamines for multiple color volumetric 3D digital light photoactivatable dye displays (Haris)

Evaluating the effectiveness of image quality metrics in a light field scenario (Arru)
Colour gamut mapping using vividness scale (Zhao)
Autocorrelation-based, passive, non-contact, photoplethysmography: Computationally-efficient, noise-tolerant extraction of heart rates from video (Parrish)
Keynote: Blockchain and smart contract to transform industries – Challenges and opportunities (Yoshihama)
Invited: High dynamic range imaging for high performance applications (Fowler)
A novel translucency classification for computer graphics (Gerardin)
Keynote: Conscious of streaming (Quality) (Bovik)
Parameter optimization in H.265 rate-distortion by single frame semantic scene analysis (Abdelazim)
A computationally-efficient gamut mapping solution for color image processing pipelines in digital camera systems (El-Yamany)
Joint density map and continuous angular refinement in Cryo-EM (Zehni)
Understanding ability of 3D integral displays to provide accurate out-of-focus retinal blur with experiments and diffraction simulations (Grover)
Material appearance: Ordering and clustering (Gigilashvili)
Compensating MTF measurements for chart quality limitations (Koren)
Light-field display architecture and the heterogeneous display ecosystem (Burnett)
Keynote: High dynamic range imaging: History, challenges, and opportunities (Ward)

Times and paper codes (as listed): IPAS-261, COLOR-079, COIMG-132, 9:10 am, MWSF-533, 9:00 am, PMII-580, MAAP-203, IQSP-306, IPAS-262, COLOR-080, COIMG-133, 9:30 am, SD&A-635, MAAP-202, IQSP-305, SD&A-634, PMII-579

Height estimation of biomass sorghum in the field using LiDAR (Waliman)
About glare and luminance measurements (Rizzi)
Combining quality metrics using machine learning for improved and robust HDR image quality assessment (Choudhury)
Image-based BRDF design (Davis)
Operational based vision assessment: Stereo acuity testing research and development (Winterbottom)
Pattern frontier-based, efficient and effective exploration of autonomous mobile robots in unknown environments (Fujimoto)
Discovery of activities via statistical clustering of fixation patterns (Mulligan)
On-street parked vehicle detection via view-normalized classifier (Wu)
Subjective evaluations on perceptual image brightness in high dynamic range television (Ikeda)
Hair tone estimation at roots via imaging device with embedded deep learning (Bokaris)
Operational based vision assessment: Evaluating the effect of stereoscopic display crosstalk on simulated remote vision system depth discrimination (O'Keefe)
Uncertainty quantification for semi-supervised multilabel classification in image processing and ego-motion analysis from body worn cameras (Li)
Algorithm mismatch in spatial steganalysis (Reinders)
GAN based image deblurring using dark channel prior (Zhang)
Beyond limits of current high dynamic range displays: Ultra-high dynamic range display (Park)

Times and paper codes (as listed): COIMG-137, COLOR-084, IQSP-307, MAAP-482, SD&A-638, 11:10 am, AVM-039, HVEI-206, IPAS-265, IQSP-308, MAAP-483, SD&A-639, 11:20 am, IPAS-264, 10:55 am, MWSF-535, 11:00 am, COIMG-136, COLOR-082

HD map for every mobile robot: A novel, accurate, efficient mapping approach based on 3D reconstruction and deep learning (Yuan)
Object-based and multi-frame motion information predict human eye movement patterns during video viewing (Ma)
Detection of diversified stego sources with CNNs (Butora)
A comparative study on wavelets and residuals in deep super resolution (Zhou)
Viewing angle characterization of HDR/WCG displays using color volumes and new color spaces (Boher)
Keynote: Unifying principles of camera processing pipeline in the rapidly changing imaging landscape (Hirakawa)
EPIModules on a geodesic: Toward 360-degree light-field imaging (Baker)
Invited: Self-driving cars: Massive deployment of production cars and artificial intelligence evolution (Gu)
Point source localization from projection lines using rotation invariant features (Zehni)
A simple approach for gamut boundary description using radial basis function network (Park)
Additional lossless compression of JPEG images based on BPG (Egiazarian)
Constructing glossiness perception model of computer graphics with sounds (Nakamura)
Improved image selection for stack-based HDR imaging (van Beek)
A photographing method of Integral Photography with high angle reproducibility of light rays (Mori)

Times and paper codes (as listed): HVEI-205, 10:50 am, AVM-038, COIMG-135, COLOR-083, PMII-582, 10:40 am, 10:30 am, MWSF-534, COIMG-134, COLOR-081, IPAS-263, MAAP-204, PMII-581, SD&A-637, 9:50 am, AVM-037, SD&A-636

Times and paper codes (as listed): IQSP-310, IPAS-267, HVEI-208, AVM-041, 11:50 am, MWSF-537, 11:45 am, PMII-584, COLOR-085, COIMG-138, 11:40 am, SD&A-640, MAAP-484, IQSP-309, IPAS-266, HVEI-207, AVM-040, 11:30 am, PMII-583, MWSF-536

A comprehensive framework for visual quality assessment of light field tensor displays (Ebrahimi)
Construction of facial emotion database through subjective experiments and its application to deep learning-based facial image processing (Takanashi)
What is the opposite of blue?: The language of colour wheels (JPI-pending) (Dodgson)
Autonomous highway pilot using Bayesian networks and hidden Markov models (Pichler)
Are we there yet? (Boroumand)
Invited: Image sensor oversampling (Campbell)
Invited: Limits of color constancy: Comparison of the signatures of chromatic adaptation and spatial comparisons (McCann)
In situ width estimation of biofuel plant stems (Sahiner)
Keynote: What good is imperfect 3D? (Ross)
CNN based parameter optimization for texture synthesis (He)
Image quality evaluation on an HDR OLED display (Tian)
Multi-class detection and orientation recognition of vessels in maritime surveillance (Bondarev)
Investigation of the effect of pupil diameter on visual acuity using a neuro-physiological model of the human eye (Timár-Fülep)
Autonomous navigation using localization priors, sensor fusion, and terrain classification (Carmichael)
Invited: Rearchitecting and tuning ISP pipelines (Pulli)
StegoAppDB: A steganography apps forensics image database (Newman)

Accurate physico-realistic ray tracing simulation of displays (Boher)
Semantic label bias in subjective video quality evaluation: A standardization perspective (Le Callet)
Improving person re-identification performance by customized dataset and better person detection (Groot)
DriveSpace: Towards context-aware drivable area detection (Yogamani)
Credible repair of Sony main-sensor PDAF striping artifacts (Dietz)
Invited: Vision guided, hyperspectral imaging for standoff trace chemical detection (Aeron)
Appearance reconstruction of mutual illumination effect between plane and curved fluorescent objects (Tominaga)
Tiled stereoscopic 3D display wall – Concept, applications and evaluation (Sommer)
Enough (data) already! (Ellens)
Study of subjective and objective quality evaluation of 3D point cloud data by the JPEG Committee (Perry)
Invited: Eye model implementation (Watson)
Evaluation of naturalness and readability of whiteboard image enhancements (Abebe)
Image-based compression of LiDAR sensor data (van Beek)
Plenary: The quest for vision comfort: Head-mounted light field displays for virtual and augmented reality (Hua)
Issues reproducing handshake on mobile phone cameras (Bucher)
Invited: Through the windshield driver recognition (Santos-Villalobos)

Times and paper codes (as listed): MAAP-486, IQSP-311, IPAS-268, AVM-042, 12:10 pm, PMII-585, COIMG-139, 12:00 pm, MAAP-485, SD&A-641, MAAP-487, IQSP-312, EISS-704, COLOR-086, AVM-043, 3:30 pm, EISS-712, 2:00 pm, PMII-586, COIMG-140, 12:20 pm

Wednesday, 16 January 2019

Multivariate statistical modeling for image quality prediction (Gupta)
Laser quadrat and photogrammetry based autonomous coral reef mapping ocean robot (Gupta)
Invited: Plenoptic medical cameras (Gao)
Enhanced head-mounted eye tracking data analysis using super-resolution (Wan)
Impression evaluation between color vision types (Ichihara)
On the role of edge orientation in stereo vision (Restrepo)
Dense prediction for micro-expression spotting based on deep sequence model (Tran)
Keynote: Technology in context: Solutions to foreign propaganda and disinformation (Maddox)
4D scanning system for measurement of human body in motion (Sitnik)
High-speed multiview 3D structured light imaging technique (Zhang)
Keynote: Perception systems for autonomous vehicles using energy-efficient deep neural networks (Iandola)
How is colour harmony perceived by colour vision deficient observers? (Green)
Face set recognition (Liu)
Invited: Ray-tracing 3D spectral scenes through human optics (Lian)
StarCAM - A 16K stereo panoramic video camera with a novel parallel interleaved arrangement of sensors (Meyer)

Times and paper codes (as listed): IQSP-316, IRIACV-450, PMII-350, SD&A-647, COLOR-092, HVEI-209, IMAWM-401, 9:00 am, MWSF-538, 9:10 am, 3DMP-002, 5:10 pm, EISS-703, 8:50 am, 3DMP-001, AVM-047, COLOR-091, IMAWM-400, SD&A-646

A CNN adapted to time series for the classification of supernovae (Chaumont)
Invited: Modeling retinal image formation for light field displays (Hua)
Color correction for RGB sensors with dual-band filters for in-cabin imaging applications (Skorka)
Relationship between faithfulness and preference of stars in a planetarium (JPI-pending) (Tanaka)
Invited: Berkeley Eye Model (Barsky)
Visual noise revision for ISO 15739 (Wueller)
State of the art of multi-user virtual reality display systems (Munoz Arango)
Learning based demosaicing and color correction for RGB-IR patterned image sensors (Bhat)
Automatic image enhancement for under-exposed, over-exposed, or backlit images (Park)
Invited: Evolution of the Arizona Eye Model (Schwiegerling)
Adaptive video streaming with current codecs and formats: Extensions to parametric video quality model ITU-T P.1203 (Ramachandra Rao)
Opening a 3-D museum - A case study of 3-D SPACE (Kurland)
Optimization of ISP parameters for object detection algorithms (Deegan)
Automatic detection of scanned page orientation (Hu)
Invited: Wide field-of-view optical model of the human eye (Polans)
Reducing the cross-lab variation of image quality metrics (Koren)
The quality of stereo disparity in the polar regions of a stereo panorama (Sandin)

Times and paper codes (as listed): 4:50 pm, COLOR-090, EISS-701, COLOR-089, EISS-705, IQSP-315, SD&A-645, 4:30 pm, AVM-046, COLOR-088, EISS-702, IQSP-314, SD&A-644, 4:10 pm, AVM-045, COLOR-087, EISS-700, IQSP-313, SD&A-642, 3:50 pm, AVM-044

Times and paper codes (as listed): COLOR-094, AVM-048, 3DMP-004, 9:50 am, SD&A-649, PMII-352, IRIACV-452, IQSP-318, IMAWM-402, HVEI-210, COLOR-093, 3DMP-003, 9:30 am, SD&A-648, PMII-351, IRIACV-451, IQSP-317

Multiple illuminants' color estimation using layered gray-world assumption (Kawamura)
Yes we GAN: Applying adversarial techniques for autonomous driving (Denny)
Depth-map estimation using combination of global deep network and local deep random forest (Kim)
Visual quality in VR head mounted device: Lessons learned with StarVR headset (Mendiburu)
Invited: Imaging the body with miniature cameras, towards portable healthcare (Levi)
Automatic estimation of the position and orientation of the drill to be grasped and manipulated by the disaster response robot based on analyzing depth information (Nishikawa)
Keynote: Benchmarking image quality for billions of images (Phillips)
Real time facial expression recognition using deep learning (Xu)
Neurocomputational lightness model explains the perception of real surfaces viewed under Gelb illumination (Rudd)
Analysis of illumination correction error in camera color space (Lee)
3D microscopic imaging using Structure-from-Motion (Traxler)
Effects of binocular parallax in 360-degree VR images on viewing behavior (Banchi)
Invited: Simulating a multispectral imaging system for oral cancer screening (Farrell)
Multimodal localization for autonomous agents (Relyea)
Image quality assessment using computer vision (Li)

Metrology on field-of-light display: Volumetric display (Burnett)
Keynote: Deep learning in the VIPER Laboratory (Delp)
A visual model for predicting chromatic banding artifacts (Denes)
Real-time 3D volumetric reconstruction of human body from single view RGB-D capture device (Farias)
Foreground-aware statistical models for background estimation (Wu)
Best practices for imaging system MTF measurement (Haefner)
Consistency of color appearance based on image color difference (Safdar)
New graph-theoretic approach to social steganography (Wu)
Measurement and suppression of multipath effect in time-of-flight depth imaging for endoscopic applications (Miyagi)
Face recognition by the construction of matching cliques of points (Stentiford)
Deep dimension reduction for spatial-spectral road scene classification (Winkens)
JIST-first: Time course of sickness symptoms with HMD viewing of 360-degree videos (Häkkinen)
Self-calibrated surface acquisition for integrated positioning verification in medical applications (Jörissen)
Automated optical inspection for abnormal-shaped packages (Chuang)
Face alignment via 3D-assisted features (Guo)
Accelerated cue combination for multi-cue depth perception (Tyler)

Times and paper codes (as listed): 3DMP-005, 10:10 am, IMAWM-405, HVEI-212, 3DMP-006, 10:50 am, IRIACV-454, IQSP-319, COLOR-095, 10:40 am, MWSF-539, 10:30 am, IMSE-354, IMAWM-404, AVM-049, SD&A-650, PMII-353, IRIACV-453, IMAWM-403, HVEI-211

Analyze and predict the perceptibility of UHD video contents (Göring)
Comparison of texture retrieval techniques using deep convolutional features (Valente)
ECDNet: Efficient Siamese convolutional network for real-time small object change detection from ground vehicles (Klomp)
Crotch detection on 3D optical scans of human subjects (Sobhiyeh)
A data-driven approach for garment color classification in on-line fashion images (Li)
Invited: Driving, the future – The automotive imaging revolution (Denny)
Analyzing the influence of cross-modal IP-based degradations on the perceived audio-visual quality (Farias)
Keynote: Beads of reality drip from pinpricks in space (Bolas)
EMVA1288 compliant image interpolation creating homogeneous pixel size and gain (Kunze)
Saliency-based perceptual quantization method for HDR video quality enhancement (Sidaty)
Exploring variants of fully convolutional networks with local and global contexts in semantic segmentation problem (Ho)
A natural steganography embedding scheme dedicated to color sensors in the JPEG domain (Bas)
Subjective and objective quality assessment for volumetric video compression (Zerman)

Times and paper codes (as listed): HVEI-215, IMAWM-406, IRIACV-458, 3DMP-010, 12:00 pm, COLOR-099, 12:10 pm, PMII-052, 12:20 pm, IQSP-324, SD&A-653, 11:40 am, COLOR-098, IQSP-322, IRIACV-457, 11:45 am, MWSF-542, 11:50 am, IQSP-323

Modified M-estimation for fast global registration of 3D point clouds (Azhar)
Keynote: Solid-state LiDAR sensors: The future of autonomous vehicles (Eldada)
An improved objective metric to predict image quality using deep neural networks (Ebrahimi)
Refining ACES best practice (Hasche)
Subjective analysis of an end-to-end streaming system (Bampis)
Study on selection of construction waste using sensor fusion (Nyumura)
Nondestructive ciphertext injection in document files (Craver)
Holo reality: Real-time low-bandwidth 3D range video communications on consumer mobile devices with application to augmented reality (Bell)
NARVAL: A no-reference video quality tool for real-time communications (Roux)
The looking glass: A new type of superstereoscopic display (Frayne)
Reducing coding loss with irregular syndrome trellis codes (Kin-Cleaves)
Determination of individual-observer color matching functions for use in color management systems (Walowit)
Quantify aliasing – A new approach to make resolution measurement more robust (Artmann)
Change detection in Cadastral 3D models and point clouds and its use for improved texturing (Klomp)
Keynote: Recent trends in the image sensing technologies (Koifman)
Head-tracked patterned-backlight autostereoscopic (virtual reality) display system (Gaudreau)

Times and paper codes (as listed): AVM-051, HVEI-214, 3DMP-008, 11:30 am, COLOR-097, IQSP-321, IRIACV-456, MWSF-541, 11:20 am, 11:10 am, 3DMP-007, HVEI-213, SD&A-652, 11:00 am, COLOR-096, 10:55 am, MWSF-540, IQSP-320, IRIACV-455, IMSE-050, SD&A-651

Times and paper codes (as listed): HVEI-217, COLOR-101, AVM-055, 3:50 pm, MWSF-543, IRIACV-459, IMSE-355, IMAWM-407, HVEI-216, EISS-706, COLOR-100, AVM-054, 3:30 pm, EISS-713, 2:00 pm, AVM-053, 12:30 pm

Using 360 VR video to improve the learning experience in veterinary medicine university degree (Garcia)
3D Tone-Dependent Fast Error Diffusion (TDFED) (Michals)
Pupil detection and tracking for AR 3D under various circumstances (Kang)
Statistical sequential analysis for object-based video forgery detection (Aloraini)
People recognition and position measurement in workplace by fisheye camera (Guan)
Measurement of disparity for depth extraction in monochrome CMOS image sensor with offset pixel apertures (Lee)
Invited: Diagnostic and personalized skin care via artificial intelligence (Shreve)
Complexity measurement and characterization of 360-degree content (Le Callet)
Invited: Light fields - From shape recovery to sparse reconstruction (Ramamoorthi)
Creating a simulation option for the reconstruction of ancient documents (Eschbach)
Today is to see and know: An argument and proposal for integrating human cognitive intelligence into autonomous vehicle perception (López-González)
Plenary: Light fields and light stages for photoreal movies, games, and virtual reality (Debevec)
A system for generating complex physically accurate sensor images for automotive applications (Liu)

Driver behavior recognition using recurrent neural network in multiple depth cameras environment (Chang)
Computer vision in imaging diagnostics (Esteva)
Explaining and improving a machine learning based printer identification system (Shankar)
Invited: Light field insights from my time at Lytro (Akeley)
Vector tone-dependent fast error diffusion in the YyCxCz color space (Chen)
Tackling in-camera downsizing for reliable camera ID verification (Sencar)
A new model to reliably predict human facial appearance (Matts)
Investigating camera calibration methods for naturalistic driving studies (Aykac)
A range-gated CMOS SPAD array for real-time 3D range imaging (Ruokamo)
Quality of Experience visual-haptic interaction in a virtual reality simulator (Brunnstrom)
Invited: The beauty of light fields (Fattal)
NPAC FM color halftoning for the Indigo press: Challenges and solutions (Liu)
Optical system of industrial camera that achieves both short minimum focusing distance and high resolution (Sudoh)
A range-shifting multi-zone time-of-flight measurement technique using a 4-tap lock-in-pixel CMOS range image sensor based on a built-in drift field photodiode (Kondo)

Times and paper codes (as listed): AVM-056, 4:10 pm, IMAWM-408, 4:00 pm, MWSF-544, 3:55 pm, EISS-708, COLOR-103, 4:30 pm, MWSF-545, IMAWM-409, 4:20 pm, IRIACV-461, IMSE-357, HVEI-218, EISS-707, COLOR-102, IRIACV-460, IMSE-356

Location: The Grove

The quaternion-based anisotropic gradient for the color images (Voronin)
An examination of the effects of noise level on methods to determine curvature in range images (Hauenstein)
The characterization of an HDR OLED display (Tian)
Understanding fashion aesthetics: Training a neural network based predictor using likes and dislikes (Bilbo)
Improved 3D scene modeling for image registration in change detection (De With)
Single Shot Appearance Model (SSAM) for multi-target tracking (Ullah)
Real time enhancement of low light images for low cost embedded platforms (Bhat)
Spline-based colour correction for monotonic nonlinear CMOS image sensors (Hussain)
System-on-Chip design flow of the image signal processor for a nonlinear CMOS imaging system (Nascimento)
Background subtraction using Multi-Channel Fused Lasso (Liu)
Depth from stacked light field images using generative adversarial network (Mun)
Depth-based saliency estimation for omnidirectional images (Battisti)
Driver drowsiness detection in facial images (Dornaika)
Illumination invariant NIR face recognition using directional visibility (Wan)
Microscope image matching in scope of multi-resolution observation system (Shin)
Multi-frame super-resolution utilizing spatially adaptive regularization for ToF camera (Lee)
Pixelwise JPEG compression detection and quality factor estimation based on convolutional neural network (Uchida)

Paper codes (as listed): IPAS-277, IQSP-325, IQSP-326, IQSP-327, IRIACV-465, IRIACV-466, IMSE-361, IMSE-362, IMSE-363, IPAS-269, IPAS-270, IPAS-271, IPAS-272, IPAS-273, IPAS-274, IPAS-275, IPAS-276

How hot pixel defect rate growth from pixel size shrinkage creates image degradation (Chapman)
Hybrid image-based defect detection for railroad maintenance (Gavai)
Industrial computer vision in academic - Is there a need besides so many professional business models supporting ready to go solutions? (Niel)
Adaptive loss regression for flexible graph-based semi-supervised embedding (Dornaika)
An efficient motion correction method for frequency-domain images based on Fast Robust Correlation (Reeves)
Compton camera imaging with spherical movement (Kwon)
Invited: Industrial scale light field printing (Hirsch)
Appearance-preserving error diffusion algorithm using texture information (Tanaka)
Invited: Quest for immersion (Pulli)
Are people pixel-peeping 360° videos? (Fremerey)
Hazmat label recognition and localization for rescue robots in disaster scenarios (Zauner)
Application of semantic segmentation for an autonomous rail tamping assistance system (Zauner)
Invited: The intersection of artificial intelligence and augmented reality (Arabi)
Impacts of internal HMD playback processing on subjective quality perception (Fremerey)
3D scanning measurement using a time-of-flight range imager with improved range resolution (Okura)

Times and paper codes (as listed): IMSE-359, IMSE-360, 5:30 pm, COIMG-141, IRIACV-464, COIMG-142, COIMG-143, 5:10 pm, EISS-710, EISS-709, IQSP-220, IRIACV-463, COLOR-104, 4:50 pm, 4:40 pm, IMAWM-410, IRIACV-462, HVEI-219, IMSE-358

Thursday, 17 January 2019

Times and paper codes (as listed): COIMG-146, 9:10 am, VDA-675, PMII-278, IMSE-364, IMAWM-145, 8:50 am, HVEI-221, 7:00 pm, SD&A-657, SD&A-656, SD&A-655, SD&A-654, PMII-590, PMII-589, PMII-588, PMII-587, MWSF-546

Self-contained, passive, non-contact, photoplethysmography: Real-time extraction of heart rates from live view within a Canon Powershot (Dietz)
Keynote: Data visualization using large-format display systems (Wischgoll)
Invited: EDICT: Embedded and distributed intelligent capture technology (Campbell)
Keynote: How CIS pixels moved from standard CMOS process to semiconductor process flavors even more dedicated than CCD ever was (Waeny)
Smart fetal care (You)
Keynote: 'Wonka Vision' and the need for a paradigm shift in vision research (Snow)
Semi-automatic post-processing of multi-view 2D-plus-depth video (Sespede)
Saliency map based multi-view rendering for autostereoscopic displays (Jiao)
A study on 3D projector with four parallaxes (Yamaguchi)
A comprehensive head-mounted eye tracking review: Software solutions, applications, and challenges (Wan)
Shuttering methods and the artifacts they produce (Eberhart)
Fast restoring of high dynamic range image appearance for multi-partial reset sensor (Hassan)
Deep video super-resolution network for flickering artifact reduction (Ahn)
A new methodology in optimizing the auto-flash quality of mobile cameras (Ghelmansaraei)
Hybrid G-PRNU: A novel scale-invariant approach for asymmetric PRNU matching, associating videos to source smartphones (Dugelay)

(Offsite event – details provided on ticket)

Visual analytic process to familiarize the average person with ways to apply machine learning (Baynes)
Digital distortion correction to measure spatial resolution from cameras with wide-angle lenses (Rodricks)
On the implementation of asynchronous sun sensors (Leñero-Bardallo)
Artificial intelligence agents for crowd simulation in an immersive environment for emergency response (Sharma)
Edge-preserving total variation regularization for dual-energy CT images (Devadithya)
Modeling lens optics and rendering virtual views from fisheye imagery (Gama)
3D visualization of 2D/360° image and navigation in virtual reality through motion processing via smartphone sensors (Aurini)
Visualizing tweets from confirmed fake Russian accounts (Hsu)
LiDAR assisted large-scale privacy protection in street view cycloramas (De With)
A low-noise nondestructive-readout pixel for computational imaging (Nabeshima)
BinocularsVR – A VR experience for the exhibition "From Lake Konstanz to Africa, a long distance travel with ICARUS" (Sommer)
Fully automated dental panoramic radiograph by using internal mandible curves of dental volumetric CT (Lee)
Visualization of carbon monoxide particles released from firearms (Wischgoll)
ARFurniture: Augmented reality indoor decoration style colorization (Wan)

Times and paper codes (as listed): VDA-676, PMII-280, IMSE-365, ERVR-176, COIMG-147, 9:30 am, IPAS-279, ERVR-178, 10:10 am, VDA-678, IPAS-281, IMSE-366, ERVR-177, COIMG-148, 9:50 am, VDA-677, 9:40 am, ERVR-175

JIST-first: Digital circuit methods to correct and filter noise of nonlinear CMOS image sensors (Nascimento)
Method for the optimal approximation of the spectral response of multicomponent image (Gouton)
Vision scientist Chris Tyler - An appreciation of his contributions (Westheimer)
Enhancing mobile VR immersion: A multimodal system of neural networks approach to an IMU Gesture Controller (Niño)
Smart cooking for camera-enabled multifunction oven (Wang)
Visualizing mathematical knot equivalence (Lin)
Paradoxical, quasi-ideal, spatial summation in the modelfest data (Klein)
Keynote: The new IEEE P4001 effort for hyperspectral standardization (Durell)
PlayTIME: A tangible approach to designing digital experiences (Buckstein)
Invited: Image recognition depends largely on variety (Haygood)
Edge/region fusion network for scene labeling in infrared imagery (Asari)
Correlation visualisation for sleep data analytics in SWAPP (Sleep Wake Application) (Vincent)
Augmented reality education system for developing countries (Alam)
Detecting non-native content in on-line fashion images (Yuan)

Times and paper codes (as listed): 2:10 pm, IMSE-372, 12:20 pm, IMSE-371, 1:50 pm, HVEI-227, 2:00 pm, ERVR-184, IMAWM-416, VDA-683, 2:05 pm, HVEI-228, 11:40 am, IMSE-370, 11:50 am, ERVR-182, HVEI-226, IMAWM-414, VDA-682, 12:10 pm, ERVR-183, IMAWM-415

CCVis: Visual analytics of student online learning behaviors using course clickstream data (Goulden) Invited: Observer classification im- ages and efficiency in 2D and 3D search tasks (Abbey) Detecting and decoding barcode in on-line fashion image (Yang) A comparison between noise reduc- tion & analysis techniques for RTS pixels (Hendrickson) Collaborative virtual reality environ- ment for a real-time emergency evacuation of a nightclub disaster (Sharma) Feedback alfa-rooting algorithm for medical image enhancement (Voronin) Dynamic color mapping with a multi-scale histogram: A design study with physical scientists (Chae) Both-hands motion recognition and reproduction characteristics in front/ side/rear view (Ikeda) A heuristic approach for detecting frames in online fashion images (Hu) Overcoming limitations of the HoloLensOvercoming limitations of (Miller) for use in product assembly Correlated Multiple Sampling impact analysis on 1/fE noise for image sensors (Peizerat) Noise suppression effect of folding- column- integration applied to a ADC in a parallel 3-stage pipeline image 2.1μm 33-megapixel CMOS sensor (Tomioka) algorithmSpecular reflection detection for endoscopic images (Voronin) Chemometric data analysis with autoencoder neural network (Ullah) Invited: Similarity and differenceInvited: Similarity in architectures (Eigen) object detection Autonomous Imaging

VDA-681 34 HVEI-224 IMAWM-413 11:30 AM ERVR-181 IMSE-369 IPAS-223 VDA-680 11:20 AM ERVR-180 11:10 AM IMAWM-412 11:00 AM IMSE-368 10:50 am ERVR-179 IMSE-367 IPAS-222 VDA-679 10:40 am IMAWM-411 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER

Schedule electronicimaging.org #EI2019 IMAWM-417 ERVR-187 3:00 pm IMSE-374 HVEI-231 2:50 pm VDA-685 IMAWM-419 ERVR-186 2:40 pm HVEI-230 2:35 pm IMSE-373 2:30 pm VDA-684 IMAWM-418 HVEI-229 ERVR-185 2:20 pm PAPER SCHEDULEBYDAY/TIME

AutonomousImaging as socialcommons(Hadzi) British Waterways boattr-towpath (Lahoud) reality glassesforimagefusion AR inVR:Simulatingaugmented (Chuang) planes projectedfromalaserlevel laser ing horizontalandvertical Fish-eye cameracalibrationus- Christopher Tyler (McCourt) The notoriousCWT: Adventureswith ensembles (Martin) Analyzingvideo VideoSwarm: fashion marketplace(Norman) processing appliedtoanon-line New resultsfornaturallanguage ing conditions(Alipour) reality underdynamicambientlight- Real-time photo-realisticaugmented (Barghout) Tyler’s preceptstocomputervision “Trust thePsychophysics”.Applying digital video(Pourian) Auto whitebalancestabilizationin physics (McGuigan) ergy, nuclearandcondensedmatter quantum computationsinhighen- Visualization anddataanalysisof mobile colordetector(Pan) Paint codeidentificationusing quantities (Mulligan) spatial sensitivityofhigher-order Modulate this!CWTmeasuresthe reality (Hirao) invirtual to audio-visualinformation sponses, knowledgeandimpression Translating thephysiologicalre- JIST-first: Augmentedcross-modality:

3DImaging Harbour A Ballroom BC Grand Peninsula Regency C Ballroom A Grand Peninsula Harbour B Harbour A Ballroom BC Grand Peninsula Ballroom A Grand Peninsula Regency C Harbour B Harbour A Ballroom A Grand Peninsula Ballroom BC Grand Peninsula AR/VR&LightField HVEI-233 3:50 pm IMAWM-420 ERVR-188 3:40 pm IMSE-375 3:10 pm HVEI-232 3:05 pm HVEI-239 5:20 pm HVEI-238 5:05 pm HVEI-237 4:50 pm HVEI-236 4:35 pm IMAWM-421 HVEI-235 4:20 pm HVEI-234 4:05 pm VDA-686 glass... ofHVEI(Rogowitz) Christopher Tyler throughthelooking ence improvement(Guo) Invited: Vision-baseddrivingexperi- reality(Schulze) virtual 3D medicalimagesegmentationin reconstruction model(Osinski) Focused lightfieldcamerafordepth (Kontsevich) A retrospectiveofourcollaboration physical evolution(Tyler) Light, quantaandvision:Ameta- sides ofthevisualsystem(Chan) Explorations intothelightanddark (Likova) Quantum jumpintothebrain and infants(Peterzell) spatiotemporal sensitivitiesofadults inthe individualdifferences Normal Factors ofthevisualmindandbrain: segmentation (Jiao) sion andsiftingforRGB-Dsemantic methodoffu- A simplebutefficient (Norcia) yearsofhuman stereopsis Forty tion (Stork) authentica- image analysisinfineart The roleofrigorouscomputer-aided exploration (Laumond) graph byevolutiveextractionand bigmultilayer M-QuBE3: Querying MedicalImaging DeepLearning Ballroom A Grand Peninsula Harbour A Ballroom BC Grand Peninsula Regency C Ballroom A Grand Peninsula Ballroom A Grand Peninsula Ballroom A Grand Peninsula Ballroom A Grand Peninsula Ballroom A Grand Peninsula Harbour A Ballroom A Grand Peninsula Ballroom A Grand Peninsula Harbour B 35

Virtual and Augmented Reality, 3D, and Stereoscopic Systems

3D Measurement and Data Processing 2019

Conference Chairs: William Puech, Lab. d’Informatique de Robotique et de Microelectronique de Montpellier (France), and Robert Sitnik, Warsaw University of Technology (Poland)

Program Committee: Atilla M. Baskurt, Université de Lyon (France); Hugues Benoit-Cattin, Institut National des Sciences Appliquées de Lyon (France); Silvia Biasotti, Consiglio Nazionale delle Ricerche (Italy); Adrian G. Bors, The University of York (United Kingdom); Saida Bouakaz, University Claude Bernard Lyon 1 (France); Mohamed Daoudi, Télécom Lille 1 (France); Florent Dupont, University Claude Bernard Lyon 1 (France); Gilles Gesquière, Lab. des Sciences de l’Information et des Systèmes (France); Afzal Godil, National Institute of Standards and Technology (United States); Serge Miguet, University Lumière Lyon 2 (France); Eric Paquet, National Research Council Canada (Canada); Frédéric Payan, University of Nice Sophia Antipolis - I3S Laboratory, CNRS (France); Frédéric Truchetet, Université de Bourgogne (France); and Stefano Tubaro, Politecnico di Milano (Italy)

Conference overview
Scientific and technological advances during the last decade in the fields of image acquisition, processing, telecommunications, and computer graphics have contributed to the emergence of new multimedia, especially 3D digital data. Nowadays, the acquisition, processing, transmission, and visualization of 3D objects are realistic functionalities over the internet. Confirmed 3D processing techniques exist, and a large scientific community works hard on open problems and new challenges, including 3D data processing and transmission, fast access to huge 3D databases, and content security management.

The emergence of 3D media is directly related to the emergence of 3D acquisition technologies. Indeed, recent advances in 3D scanner acquisition and 3D graphics rendering technologies boost the creation of 3D model archives for several application domains. These include archaeology, cultural heritage, computer-assisted design (CAD), medicine, face recognition, video games, and bioinformatics. New devices such as time-of-flight cameras open challenging new perspectives on 3D scene analysis and reconstruction.

Three-dimensional objects are more complex to handle than other multimedia data, such as audio signals, images, or videos. Indeed, only a unique and simple 2D grid representation is associated with a 2D image, and all 2D acquisition devices (digital cameras, scanners, 2D medical systems) generate this same representation. Unfortunately (for the users), but fortunately (for scientists), there exist different 3D representations for a 3D object. For example, an object can be represented on a 3D grid (digital image) or in 3D Euclidean space. In the latter, the object can be expressed by a single equation (like algebraic implicit surfaces), by a set of facets representing its boundary surface, or by a set of mathematical surfaces. One can easily imagine the numerous open problems related to these different representations and their processing, a continuing challenge for the image processing community.

#EI2019 electronicimaging.org

3D MEASUREMENT AND DATA PROCESSING 2019

Wednesday, January 16, 2019

3D/4D Scanning and Applications
Session Chair: Robert Sitnik, Warsaw University of Technology (Poland)
8:50 – 10:30 am
Regency C

8:50 3DMP-001 High-speed multiview 3D structured light imaging technique, Chufan Jiang and Song Zhang, Purdue University (United States)
9:10 3DMP-002 4D scanning system for measurement of human body in motion, Robert Sitnik, Pawel Liberadzki, and Jakub Michonski, Warsaw University of Technology (Poland)
9:30 3DMP-003 3D microscopic imaging using Structure-from-Motion, Lukas Traxler and Svorad Štolc, AIT Austrian Institute of Technology GmbH (Austria)
9:50 3DMP-004 Depth-map estimation using combination of global deep network and local deep random forest, SangJun Kim, Sangwon Kim, Mira Jeong, Deokwoo Lee, and ByoungChul Ko, Keimyung University (Republic of Korea)
10:10 3DMP-005 Metrology on field-of-light display: Volumetric display, Abhishek Bichal and Thomas Burnett, FoVI3D (United States)

10:00 am – 3:30 pm Industry Exhibition
10:10 – 10:50 am Coffee Break

3D Data Processing and Visualization
Session Chair: Robert Sitnik, Warsaw University of Technology (Poland)
10:50 am – 12:10 pm
Regency C

10:50 3DMP-006 Real-time 3D volumetric reconstruction of human body from single view RGB-D capture device, Rafael Diniz and Mylène Farias, University of Brasilia (Brazil)
11:10 3DMP-007 Holo reality: Real-time low-bandwidth 3D range video communications on consumer mobile devices with application to augmented reality, Tyler Bell1 and Song Zhang2; 1University of Iowa and 2Purdue University (United States)
11:30 3DMP-008 Modified M-estimation for fast global registration of 3D point clouds, Faisal Azhar, Stephen Pollard, and Guy Adams, HP Inc. (United Kingdom)
11:50 3DMP-010 Crotch detection on 3D optical scans of human subjects, Sima Sobhiyeh1, Friedrich Dunkel2, Marcelline Dechenaud2, Samantha Kennedy1, John Shepherd3, Steven Heymsfield1, and Peter Wolenski2; 1Pennington Biomedical Research Center, 2Louisiana State University, and 3University of California, San Francisco (United States)

12:30 – 2:00 pm Lunch

Wednesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind “Welcome to Light Fields”, the first downloadable virtual reality experience based on light field capture techniques which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights, as have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR’s Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec’s computer graphics research has been recognized with ACM SIGGRAPH’s first Significant New Researcher Award (2001) for “Creative and Innovative Work in the Field of Image-Based Modeling and Rendering”, a Scientific and Engineering Academy Award (2010) for “the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures” with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine’s “Pixel Perfect: The Scientist Behind the Digital Cloning of Actors” article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break

5:30 – 7:00 pm Symposium Interactive Papers (Poster) Session

Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary.

Autonomous Vehicles and Machines 2019

Conference Chairs: Buyue Zhang, Apple Inc. (United States); Patrick Denny, Valeo (Ireland); and Robin Jenkin, NVIDIA Corporation (United States)

Program Committee: Umit Batur, Rivian Automotive (United States); Zhigang Fan, Apple Inc. (United States); Ching Hung, NVIDIA Corporation (United States); Darnell Moore, Texas Instruments (United States); Bo Mu, Quanergy, Inc. (United States); Binu Nair, United Technologies Research Center (United States); Dietrich Paulus, Universität Koblenz-Landau (Germany); Pavan Shastry, Continental (Germany); Peter van Beek, Intel Corporation (United States); Luc Vincent, Lyft (United States); Weibao Wang, Xmotors.ai (United States); and Yi Zhang, Argo AI, LLC (United States)

Conference overview
Advancements in sensing, computing, image processing, and computer vision technologies are enabling unprecedented growth and interest in autonomous vehicles and intelligent machines, from self-driving cars to unmanned drones to personal service robots. These new capabilities have the potential to fundamentally change the way people live, work, commute, and connect with each other, and will undoubtedly provoke entirely new applications and commercial opportunities for generations to come.

Successfully launched in 2017, Autonomous Vehicles and Machines (AVM) considers a broad range of topics related to equipping vehicles and machines with the capacity to perceive dynamic environments, inform human participants, demonstrate situational awareness, and make unsupervised decisions while self-navigating. The conference seeks high-quality papers featuring novel research in areas intersecting sensing, imaging, vision, and perception with applications including, but not limited to, autonomous cars, ADAS (advanced driver assistance systems), drones, robots, and industrial automation.

AVM welcomes both academic researchers and industrial experts to join the discussion. In addition to the main technical program, AVM will include interactive sessions and open forums between speakers, committee members, and conference participants.

Award
Best Paper Award, given to the author(s) of a proceedings paper presented at the conference, selected by the Organizing Committee.


AUTONOMOUS VEHICLES AND MACHINES 2019

Monday, January 14, 2019

Automotive Image Quality (Joint Session)
Session Chairs: Patrick Denny, Valeo (Ireland); Stuart Perry, University of Technology Sydney (Australia); and Peter van Beek, Intel Corporation (United States)
8:50 – 10:10 am
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, and Image Quality and System Performance XVI.

8:50 AVM-026 Updates on the progress of IEEE P2020 Automotive Imaging Standards Working Group, Robin Jenkin, NVIDIA Corporation (United States)
9:10 AVM-027 Signal detection theory and automotive imaging, Paul Kane, ON Semiconductor (United States)
9:30 AVM-029 Digital camera characterisation for autonomous vehicles applications, Paola Iacomussi and Giuseppe Rossi, INRIM (Italy)
9:50 AVM-030 Contrast detection probability - Implementation and use cases, Uwe Artmann1, Marc Geese2, and Max Gäde1; 1Image Engineering GmbH & Co. KG and 2Robert Bosch GmbH (Germany)

10:10 – 10:50 am Coffee Break

Recognition, Detection, and Tracking
Session Chairs: Binu Nair, United Technologies Research Center (UTRC) (United States) and Buyue Zhang, Apple Inc. (United States)
10:50 am – 12:30 pm
Grand Peninsula Ballroom FG

10:50 AVM-031 Hyperspectral shadow detection for semantic road scene analysis, Christian Winkens, Veronika Adams, and Dietrich Paulus, University of Koblenz-Landau (Germany)
11:10 AVM-032 Integration of advanced stereo obstacle detection with perspectively correct surround views, Christian Fuchs and Dietrich Paulus, University of Koblenz-Landau (Germany)
11:30 AVM-033 Real-time traffic sign recognition using deep network for embedded platforms, Raghav Nagpal, Chaitanya Krishna Paturu, Vijaya Ragavan, Navinprashath R R, Radhesh Bhat, and Dipanjan Ghosh, PathPartner Technology Pvt. Ltd. (India)
11:50 AVM-034 From stixels to asteroids: A collision warning system using stereo vision, Willem Sanberg, Gijs Dubbelman, and Peter de With, Eindhoven University of Technology (the Netherlands)
12:10 AVM-035 An autonomous drone surveillance and tracking architecture, Eren Unlu and Emmanuel Zenou, ISAE-SUPAERO (France)

12:30 – 2:00 pm Lunch

Monday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: sensing, planning, and mapping. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, through the perspective of Mobileye’s activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

3:00 – 3:30 pm Coffee Break


Panel: Sensing and Perceiving for Autonomous Driving (Joint Session)
Moderator: Dr. Wende Zhang, technical fellow, General Motors
3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

Panelists:
Dr. Amnon Shashua, professor of computer science, Hebrew University; president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation
Dr. Boyd Fowler, CTO, OmniVision Technologies
Dr. Christoph Schroeder, head of autonomous driving N.A., Mercedes-Benz R&D Development North America, Inc.
Dr. Jun Pei, CEO and co-founder, Cepton Technologies Inc.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar, and lidar. The panelists will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

5:00 – 6:00 pm All-Conference Welcome Reception

Tuesday, January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Production and Deployment I
Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
8:50 – 9:50 am
Grand Peninsula Ballroom FG

AVM-036 KEYNOTE: AI and perception for automated driving – From concepts towards production, Wende Zhang, General Motors (United States)

Dr. Wende Zhang is currently the Technical Fellow on Sensing Systems at General Motors (GM). Zhang has led GM’s Next Generation Perception Systems team, guiding a cross-functional global Engineering and R&D team focused on identifying next generation perception systems for automated driving and active safety since 2010. He was BFO of Lidar Systems (2017) and BFO of Viewing Systems (2014-16) at GM. Zhang’s research interests include perception and sensing for automated driving, pattern recognition, computer vision, artificial intelligence, security, and robotics. He established GM’s development, execution, and sourcing strategy on Lidar systems and components and transferred his research innovation into multiple industry-first applications such as Rear Camera Mirror, Redundant Lane Sensing on MY17 Cadillac Super Cruise, Video Trigger Recording on MY16 Cadillac CT6, and Front Curb Camera System on MY16 Chevrolet Corvette. Zhang was the technical lead on computer vision and the embedded researcher in the GM-CMU autonomous driving team that won the DARPA Urban Challenge in 2007. He has 75+ US patents and 35+ publications in sensing and viewing systems, and received GM’s highest technical award (Boss Kettering Award) three times, in 2015, 2016, and 2017. Zhang has a doctoral degree in electrical and computer engineering from Carnegie Mellon University and an MBA from Indiana University.

Production and Deployment II
Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
9:50 – 10:20 am
Grand Peninsula Ballroom FG

AVM-037 Self-driving cars: Massive deployment of production cars and artificial intelligence evolution (Invited), Junli Gu, Xmotors.ai (United States)

10:00 am – 7:00 pm Industry Exhibition
10:10 – 10:50 am Coffee Break

Navigation and Mapping
Session Chairs: Binu Nair, United Technologies Research Center (UTRC) (United States) and Peter van Beek, Intel Corporation (United States)
10:50 am – 12:30 pm
Grand Peninsula Ballroom FG

10:50 AVM-038 HD map for every mobile robot: A novel, accurate, efficient mapping approach based on 3D reconstruction and deep learning, Chang Yuan, Foresight AI Inc. (United States)
11:10 AVM-039 Pattern and frontier-based, efficient and effective exploration of autonomous mobile robots in unknown environments, Hiroyuki Fujimoto, Waseda University (Japan)
11:30 AVM-040 Autonomous navigation using localization priors, sensor fusion, and terrain classification, Zachariah Carmichael, Benjamin Glasstone, Frank Cwitkowitz, Kenneth Alexopoulos, Robert Relyea, and Ray Ptucha, Rochester Institute of Technology (United States)
11:50 AVM-041 Autonomous highway pilot using Bayesian networks and hidden Markov models, Kurt Pichler, Sandra Haindl, Daniel Reischl, and Martin Trinkl, Linz Center of Mechatronics GmbH (Austria)
12:10 AVM-042 DriveSpace: Towards context-aware drivable area detection, Sunil Chandra, Ganesh Sistu, Senthil Yogamani, and Ciaran Hughes, Valeo (Ireland)

12:30 – 2:00 pm Lunch


Tuesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises and the tremendous progress made recently toward the development of head-mounted displays (HMD) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. She will particularly focus on the recent progress, challenges, and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging and engineering in general. Hua’s current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students shared a total of 8 “Best Paper” awards in various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa in 2003, was a Beckman Research Fellow at the Beckman Institute of University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

Image Processing and Imaging Pipes for Automotive
Session Chairs: Patrick Denny, Valeo (Ireland) and Robin Jenkin, NVIDIA Corporation (United States)
3:30 – 4:50 pm
Grand Peninsula Ballroom FG

3:30 AVM-043 Image-based compression of LiDAR sensor data, Peter van Beek, Intel Corporation (United States)
3:50 AVM-044 Optimization of ISP parameters for object detection algorithms, Lucie Yahiaoui, Jonathan Horgan, Brian Deegan, Patrick Denny, Senthil Yogamani, and Ciaran Hughes, Valeo (Ireland)
4:10 AVM-045 Learning based demosaicing and color correction for RGB-IR patterned image sensors, Navinprashath R R and Radhesh Bhat, PathPartner Technology Pvt. Ltd. (India)
4:30 AVM-046 Color correction for RGB sensors with dual-band filters for in-cabin imaging applications, Orit Skorka, Paul Kane, and Radu Ispasoiu, ON Semiconductor (United States)

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday, January 16, 2019

Deep Neural Net Optimization I
Session Chair: Buyue Zhang, Apple Inc. (United States)
8:50 – 9:50 am
Grand Peninsula Ballroom FG

AVM-047 KEYNOTE: Perception systems for autonomous vehicles using energy-efficient deep neural networks, Forrest Iandola, DeepScale (United States)

Forrest Iandola completed his PhD in electrical engineering and computer science at UC Berkeley, where his research focused on improving the efficiency of deep neural networks (DNNs). His best-known work includes deep learning infrastructure such as FireCaffe and deep models such as SqueezeNet and SqueezeDet. His advances in scalable training and efficient implementation of DNNs led to the founding of DeepScale, where he has been CEO since 2015. DeepScale builds vision/perception systems for automated vehicles.

Deep Neural Net Optimization II
Session Chair: Buyue Zhang, Apple Inc. (United States)
9:50 – 10:30 am
Grand Peninsula Ballroom FG

9:50 AVM-048 Yes we GAN: Applying adversarial techniques for autonomous driving, Michal Uricar, Pavel Krizek, Ibrahim Sobh, David Hurych, Senthil Yogamani, and Patrick Denny, Valeo (Ireland)
10:10 AVM-049 Deep dimension reduction for spatial-spectral road scene classification, Christian Winkens, Florian Sattler, and Dietrich Paulus, University of Koblenz-Landau (Germany)

10:00 am – 3:30 pm Industry Exhibition
10:10 – 10:50 am Coffee Break


Automotive Image Sensing I (Joint Session)
Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
10:50 am – 12:10 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

10:50 IMSE-050 KEYNOTE: Recent trends in the image sensing technologies, Vladimir Koifman, Analog Value Ltd. (Israel)

Vladimir Koifman is a founder and CTO of Analog Value Ltd. Prior to that, he was co-founder of Advasense Inc., acquired by Pixim/Sony Image Sensor Division. Prior to co-founding Advasense, Koifman co-established the AMCC analog design center in Israel and led the analog design group for three years. Before AMCC, Koifman worked for 10 years in Motorola Semiconductor Israel (Freescale), managing an analog design group. He has more than 20 years of experience in the VLSI industry and has technical leadership in analog chip design, mixed-signal chip/system architecture, and electro-optic device development. Koifman has more than 80 granted patents and several papers. Koifman also maintains the Image Sensors World blog.

11:30 AVM-051 KEYNOTE: Solid-state LiDAR sensors: The future of autonomous vehicles, Louay Eldada, Quanergy Systems, Inc. (United States)

Louay Eldada is CEO and co-founder of Quanergy Systems, Inc. Eldada is a serial entrepreneur, having founded and sold three businesses to Fortune 100 companies. Quanergy is his fourth start-up. Eldada is a technical business leader with a proven track record at both small and large companies and, with 71 patents, is a recognized expert in quantum optics, nanotechnology, photonic integrated circuits, advanced optoelectronics, sensors, and robotics. Prior to Quanergy, he was CSO of SunEdison, after serving as CTO of HelioVolt, which was acquired by SK Energy. Eldada was earlier CTO of DuPont Photonic Technologies, formed by the acquisition of Telephotonics, where he was founding CTO. His first job was at Honeywell, where he started the Telecom Photonics business and sold it to Corning. He studied business administration at Harvard, MIT, and Stanford, and holds a PhD in optical engineering from Columbia University.

Automotive Image Sensing II (Joint Session)
Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
12:10 – 12:50 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

12:10 PMII-052 Driving, the future – The automotive imaging revolution (Invited), Patrick Denny, Valeo (Ireland)
12:30 AVM-053 A system for generating complex physically accurate sensor images for automotive applications, Zhenyi Liu1,2, Minghao Shen1, Jiaqi Zhang3, Shuangting Liu3, Henryk Blasinski2, Trisha Lian2, and Brian Wandell2; 1Jilin University (China), 2Stanford University (United States), and 3Beihang University (China)

12:50 – 2:00 pm Lunch

Wednesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

3:00 – 3:30 pm Coffee Break

Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary.

Autonomous Vehicles and Machines 2019

Interaction with People

Session Chair: Robin Jenkin, NVIDIA Corporation (United States)
3:30 – 4:30 pm
Grand Peninsula Ballroom FG

3:30 AVM-054 Today is to see and know: An argument and proposal for integrating human cognitive intelligence into autonomous vehicle perception, Mónica López-González, La Petite Noiseuse Productions (United States)

3:50 AVM-055 Pupil detection and tracking for AR 3D under various circumstances, Dongwoo Kang, Jingu Heo, Byongmin Kang, and Dongkyung Nam, Samsung Advanced Institute of Technology (Republic of Korea)

4:10 AVM-056
Driver behavior recognition using recurrent neural network in multiple depth cameras environment, Ying-Wei Chuang1, Chien-Hao Kuo1, Shih-Wei Sun2, and Pao-Chi Chang1; 1National Central University and 2Taipei National University of the Arts (Taiwan)

5:30 – 7:00 pm Symposium Interactive Papers (Poster) Session

Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications

Conference Chairs: Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States); Gabriel G. Marcu, Apple Inc. (United States); and Alessandro Rizzi, Università degli Studi di Milano (Italy)

Program Committee: Jan P. Allebach, Purdue University (United States); Vien Cheung, University of Leeds (United Kingdom); Scott J. Daly, Dolby Labs., Inc. (United States); Philip J. Green, Norwegian University of Science and Technology (Norway); Choon-Woo Kim, Inha University (Republic of Korea); Michael A. Kriss, MAK Consultants (United States); Fritz Lebowsky, Consultant (France); John J. McCann, McCann Imaging (United States); Nathan Moroney, HP Labs, HP Inc. (United States); Carinna E. Parraman, University of the West of England (United Kingdom); Marius Pedersen, Norwegian University of Science and Technology (Norway); Shoji Tominaga, Chiba University (Japan); Sophie Triantaphillidou, University of Westminster (United Kingdom); and Stephen Westland, University of Leeds (United Kingdom)

Conference overview

Color imaging has historically been treated as a constant phenomenon well described by three independent parameters. Recent advances in computational resources and in the understanding of the human aspects are leading to new approaches that extend the purely metrological view towards a perceptual view of color in documents and displays. Part of this perceptual view is the incorporation of spatial aspects, adaptive color processing based on image content, and the automation of color tasks, to name a few. This dynamic nature applies to all output modalities, e.g., hardcopy devices, but to an even larger extent to soft-copy displays.

Spatially adaptive gamut and tone mapping, dynamic contrast, and color management continue to support the unprecedented development of display hardware, spreading from mobile displays to large-size screens and emerging technologies. This conference provides an opportunity for presenting, and becoming acquainted with, the most recent developments in color imaging research, technologies, and applications. The focus of the conference is on basic color research and testing, color image input, dynamic color image output and rendering, and color image automation, emphasizing color in context and color in images, and reproduction of images across local and remote devices.

In addition, the conference covers software, media, and systems related to color. Special attention is given to applications and requirements created by and for multidisciplinary fields involving color and/or vision.


COLOR IMAGING XXIV: DISPLAYING, PROCESSING, HARDCOPY, AND APPLICATIONS

Monday January 14, 2019

Monday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning, and Mapping. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, but will do that through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable blind and visually impaired people to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in the area of industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title for largest Israeli IPO ever, by raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

3:00 – 3:30 pm Coffee Break

Color Rendering of Materials I (Joint Session)

Session Chair: Lionel Simonot, Université de Poitiers (France)
3:30 – 4:10 pm
Cypress A

This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.

MAAP-075
KEYNOTE: Capturing appearance in text: The Material Definition Language (MDL), Andy Kopra, NVIDIA Advanced Rendering Center (Germany)

Andy Kopra is a technical writer at the NVIDIA Advanced Rendering Center in Berlin, Germany. With more than 35 years of professional computer graphics experience, he writes and edits documentation for NVIDIA customers on a wide variety of topics. He also designs, programs, and maintains the software systems used in the production of the documentation websites and printed materials.

Color Rendering of Materials II (Joint Session)

Session Chair: Lionel Simonot, Université de Poitiers (France)
4:10 – 4:50 pm
Cypress A

This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.

4:10 COLOR-076
Real-time accurate rendering of color and texture of car coatings, Eric Kirchner1, Ivo Lans1, Pim Koeckhoven1, Khalil Huraibat2, Francisco Martinez-Verdu2, Esther Perales2, Alejandro Ferrero3, and Joaquin Campos3; 1AkzoNobel (the Netherlands), 2University of Alicante (Spain), and 3CSIC (Spain)

4:30 COLOR-077

Recreating Van Gogh’s original colors on museum displays, Eric Kirchner1, Muriel Geldof2, Ella Hendriks3, Art Ness Proano Gaibor2, Koen Janssens4, John Delaney5, Ivo Lans1, Frank Ligterink2, Luc Megens2, Teio Meedendorp6, and Kathrin Pilz6; 1AkzoNobel (the Netherlands), 2RCE (the Netherlands), 3University of Amsterdam (the Netherlands), 4University of Antwerp (Belgium), 5National Gallery (United States), and 6Van Gogh Museum (the Netherlands)

5:00 – 6:00 pm All-Conference Welcome Reception


Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Gamut Mapping

Session Chair: Gabriel Marcu, Apple Inc. (United States)
8:50 – 10:10 am
Cypress B

8:50 COLOR-078
Development of a color appearance model with embedded uniform color space, Muhammad Safdar, Norwegian University of Science and Technology (NTNU) (Norway)

9:10 COLOR-079
Colour gamut mapping using vividness scale, Baiyue Zhao1, Lihao Xu1, and Ming Ronnier Luo1,2; 1Zhejiang University (China) and 2University of Leeds (United Kingdom)

9:30 COLOR-080
A computationally-efficient gamut mapping solution for color image processing pipelines in digital camera systems, Noha El-Yamany, Intel Corporation (Finland)

9:50 COLOR-081
A simple approach for gamut boundary description using radial basis function network, In-ho Park, Hyunsoo Oh, and Ki-Min Kang, HP Printing Korea (HPPK) (Republic of Korea)

10:00 am – 7:00 pm Industry Exhibition

10:10 – 10:40 am Coffee Break

Display & Color Constancy

Session Chair: Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States)
10:40 am – 12:20 pm
Cypress B

10:40 COLOR-083
Viewing angle characterization of HDR/WCG displays using color volumes and new color spaces, Pierre Boher1, Thierry Leroux1, and Pierre Blanc2; 1ELDIM and 2Laboratoires d'Essai de la FNAC (France)

11:00 COLOR-082
Beyond limits of current high dynamic range displays: Ultra-high dynamic range display, Jae Sung Park, Sungwon Seo, Dukjin Kang, James Langehennig, and Byungseok Min, Samsung Electronics (Republic of Korea)

11:20 COLOR-084
About glare and luminance measurements, Simone Liberini1, Maurizio Rossi2, Matteo Lanaro1, and Alessandro Rizzi1; 1Università degli Studi di Milano and 2Politecnico di Milano (Italy)

11:40 COLOR-085
Limits of color constancy: Comparison of the signatures of chromatic adaptation and spatial comparisons (Invited), John McCann, McCann Imaging (United States)

12:30 – 2:00 pm Lunch

Tuesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises of, and the tremendous recent progress toward, the development of head-mounted displays (HMD) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. She will particularly focus on the recent progress, challenges, and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF Career Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students have shared a total of 8 "Best Paper" awards at various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

Color Processing

Session Chairs: Phil Green, Norwegian University of Science and Technology (Norway) and Alessandro Rizzi, Università degli Studi di Milano (Italy)
3:30 – 5:10 pm
Cypress B

3:30 COLOR-086
Evaluation of naturalness and readability of whiteboard image enhancements, Mekides Abebe and Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)


3:50 COLOR-087
Automatic detection of scanned page orientation, Zhenhua Hu1, Peter Bauer2, and Todd Harris2; 1Purdue University and 2Hewlett-Packard (United States)

4:10 COLOR-088
Automatic image enhancement for under-exposed, over-exposed, or backlit images, Jaemin Shin, Hyunsoo Oh, Kyeongman Kim, Ki-Min Kang, and In-ho Park, HP Printing Korea (Republic of Korea)

4:30 COLOR-089
Relationship between faithfulness and preference of stars in a planetarium (JPI-pending), Midori Tanaka1, Takahiko Horiuchi1, and Kenichi Otani2; 1Chiba University and 2Konica Minolta Planetarium Co., Ltd. (Japan)

4:50 COLOR-090
A CNN adapted to time series for the classification of supernovae, Anthony Brunel1, Johanna Pasquet2, Jérôme Pasquet3, Nancy Rodriguez1, Frédéric Comby1, Dominique Fouchez2, and Marc Chaumont1; 1LIRMM Montpellier, 2CPPM Marseille, and 3LIS Marseille (France)

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday January 16, 2019

Color Vision & Illuminants

Session Chair: Alessandro Rizzi, Università degli Studi di Milano (Italy)
8:50 – 10:10 am
Cypress B

8:50 COLOR-091
How is colour harmony perceived by colour vision deficient observers?, Susann Lundekvam and Phil Green, Norwegian University of Science and Technology (Norway)

9:10 COLOR-092
Impression evaluation between color vision types, Yasuyo Ichihara, Kogakuin University (Japan)

9:30 COLOR-093
Analysis of illumination correction error in camera color space, Minji Lee and Byung-Uk Lee, Ewha Womans University (Republic of Korea)

9:50 COLOR-094
Multiple illuminants' color estimation using layered gray-world assumption, Harumi Kawamura, Salesian Polytechnic University (Japan)

10:00 am – 3:30 pm Industry Exhibition

10:10 – 10:40 am Coffee Break

Observers & Appearance

Session Chairs: Reiner Eschbach, Norwegian University of Science and Technology (Norway) and Monroe Community College (United States), and John McCann, McCann Imaging (United States)
10:40 am – 12:20 pm
Cypress B

10:40 COLOR-095
Consistency of color appearance based on image color difference, Muhammad Safdar, Phil Green, and Peter Nussbaum, Norwegian University of Science and Technology (NTNU) (Norway)

11:00 COLOR-096
Determination of individual-observer color matching functions for use in color management systems, Eric Walowit, Consultant (United States)

11:20 COLOR-097
Refining ACES best practice, Eberhard Hasche, Oliver Karaschewski, and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)

11:40 COLOR-098
EMVA1288 compliant image interpolation creating homogeneous pixel size and gain, Jörg Kunze, Basler AG (Germany)

12:00 COLOR-099
A data-driven approach for garment color classification in on-line fashion images, Zhi Li1, Gautam Golwala2, Sathya Sundaram2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

12:30 – 2:00 pm Lunch


Wednesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

3:00 – 3:30 pm Coffee Break

Halftoning & Image Representation

Session Chair: Gabriel Marcu, Apple Inc. (United States)
3:30 – 5:10 pm
Cypress B

3:30 COLOR-100
Creating a simulation option for the reconstruction of ancient documents, Reiner Eschbach1,2, Roger Easton3, Sony George1, and Jon Yngve Hardeberg1; 1Norwegian University of Science and Technology (NTNU) (Norway), 2Monroe Community College (United States), and 3Rochester Institute of Technology (United States)

3:50 COLOR-101
3D Tone-Dependent Fast Error Diffusion (TDFED), Adam Michals, Altyngul Jumabayeva, and Jan Allebach, Purdue University (United States)

4:10 COLOR-102
NPAC FM color halftoning for the Indigo press: Challenges and solutions, Jiayin Liu1, Tal Frank2, Ben-Shoshan Yotam2, Robert Ulichney3, and Jan Allebach1; 1Purdue University (United States), 2HP Inc. (Israel), and 3HP Labs, HP Inc. (United States)

4:30 COLOR-103
Vector tone-dependent fast error diffusion in the YyCxCz color space, Chin-Ning Chen, Zhen Luan, and Jan Allebach, Purdue University (United States)

4:50 COLOR-104
Appearance-preserving error diffusion algorithm using texture information, Takuma Kiyotomo, Midori Tanaka, and Takahiko Horiuchi, Chiba University (Japan)

5:30 – 7:00 pm Symposium Interactive Papers (Poster) Session


Computational Imaging XVII

Conference Chairs: Charles A. Bouman, Purdue University (United States); Gregery T. Buzzard, Purdue University (United States); and Robert Stevenson, University of Notre Dame (United States)

Program Committee: Ken D. Sauer, University of Notre Dame (United States)

Conference overview

More than ever before, computers and computation are critical to the image formation process. Across diverse applications and fields, remarkably similar imaging problems appear, requiring sophisticated mathematical, statistical, and algorithmic tools. This conference focuses on imaging as a marriage of computation with physical devices. It emphasizes the interplay between mathematical theory, physical models, and computational algorithms that enable effective current and future imaging systems. Contributions to the conference are solicited on topics ranging from fundamental theoretical advances to detailed system-level implementations and case studies.

Special Session

This year Computational Imaging hosts a special session on AI for Reconstruction and Sensing. Presentations will cover topics such as advances in AI for CT reconstruction, multi-target tracking, and more, presented by researchers from academia, national laboratories, and industry.


COMPUTATIONAL IMAGING XVII

Monday January 14, 2019

AI for Reconstruction and Sensing I

9:10 – 10:10 am
Harbour AB

COIMG-125
KEYNOTE: Learning to make images, W. Clem Karl, Boston University (United States)

W. Clem Karl received his PhD in electrical engineering and computer science (1991) from the Massachusetts Institute of Technology, Cambridge, where he also received his SM, EE, and SB. He held the position of staff research scientist with the Brown-Harvard-MIT Center for Intelligent Control Systems and the MIT Laboratory for Information and Decision Systems from 1992 to 1994. He joined the faculty of Boston University in 1995, where he is currently professor of electrical and computer engineering and biomedical engineering. Karl is currently the Editor-in-Chief of the IEEE Transactions on Image Processing. He is a member of the Board of Governors of the IEEE Signal Processing Society, the Signal Processing Society Conference Board, the IEEE Transactions on Medical Imaging Steering Committee, and the Technical Committee Review Board. He co-organized two special sessions of the 2012 IEEE Statistical Signal Processing Workshop, one on Challenges in High-Dimensional Learning and one on Statistical Signal Processing and the Engineering of Materials. In 2011 he was a co-organizer of a workshop on Large Data Sets in Medical Informatics as part of the Institute for Mathematics and Its Applications Thematic Year on the Mathematics of Information. He served as an Associate Editor of the IEEE Transactions on Image Processing and was the General Chair of the 2009 IEEE International Symposium on Biomedical Imaging. He is a past member of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee and a current member of the IEEE Biomedical Image and Signal Processing Technical Committee. Karl's research interests are in the areas of multidimensional statistical signal and image processing, estimation, inverse problems, geometric estimation, and applications to problems ranging from biomedical signal and image processing to synthetic aperture radar.

10:10 – 10:50 am Coffee Break

AI for Reconstruction and Sensing II

10:50 am – 12:30 pm
Harbour AB

10:50 COIMG-126
Light field image reconstruction with generative adversarial networks (Invited), Hector Santos-Villalobos, David Bolme, and David Cornett III, Oak Ridge National Laboratory (United States)

11:10 COIMG-127
Multi-target tracking with an event-based vision sensor and the GMPHD filter (Invited), Benjamin Foster1, Dong Hye Ye2, and Charles Bouman3; 1Lockheed Martin, 2Marquette University, and 3Purdue University (United States)

11:30 COIMG-128
4D reconstruction using consensus equilibrium (Invited), Soumendu Majee1, Thilo Balke1, Craig Kemp2, Gregery Buzzard1, and Charles Bouman1; 1Purdue University and 2Eli Lilly and Company (United States)

11:50 COIMG-129
Joint direct deep learning for one-sided ultrasonic non-destructive evaluation (Invited), Hani Almansouri1, Singanallur Venkatakrishnan2, Charles Bouman1, and Hector Santos-Villalobos2; 1Purdue University and 2Oak Ridge National Laboratory (United States)

12:10 COIMG-130
Modeling long range features from serial section imagery of continuous fiber reinforced composites (Invited), Sam Sherman1, Jeffrey Simmons2, and Craig Przybyla2; 1Air Force Life Cycle Management Center and 2Air Force Research Laboratory (United States)

12:30 – 2:00 pm Lunch

Monday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)


3:00 – 3:30 pm Coffee Break

Panel: Sensing and Perceiving for Autonomous Driving (Joint Session)

3:30 – 5:30 pm
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

Moderator: Dr. Wende Zhang, technical fellow, General Motors

Panelists:
Dr. Amnon Shashua, professor of computer science, Hebrew University; president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation
Dr. Boyd Fowler, CTO, OmniVision Technologies
Dr. Christoph Schroeder, head of autonomous driving N.A., Mercedes-Benz R&D Development North America, Inc.
Dr. Jun Pei, CEO and co-founder, Cepton Technologies Inc.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar, and lidar. The panelists will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

5:00 – 6:00 pm All-Conference Welcome Reception

Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Medical and Scientific Imaging

8:50 – 10:10 am
Harbour AB

8:50 COIMG-131
Simultaneous denoising and deblurring for full-field tomography, Daniel Ching and Doğa Gürsoy, Argonne National Laboratory (United States)

9:10 COIMG-132
Autocorrelation-based, passive, non-contact photoplethysmography: Computationally-efficient, noise-tolerant extraction of heart rates from video, Chadwick Parrish, Kevin Donohue, and Henry Dietz, University of Kentucky (United States)

9:30 COIMG-133
Joint density map and continuous angular refinement in Cryo-EM, Mona Zehni1, Laurène Donati2, Emmanuel Soubies2, Zhizhen Zhao1, Minh Do1, and Michael Unser2; 1University of Illinois at Urbana-Champaign (United States) and 2École Polytechnique Fédérale de Lausanne (EPFL) (Switzerland)

9:50 COIMG-134
Point source localization from projection lines using rotation invariant features, Mona Zehni, Shuai Huang, Ivan Dokmanic, and Zhizhen Zhao, University of Illinois at Urbana-Champaign (United States)

10:00 am – 7:00 pm Industry Exhibition

10:10 – 10:40 am Coffee Break

Image Enhancement via Neural Network

10:40 – 11:20 am
Harbour AB

10:40 COIMG-135
A comparative study on wavelets and residuals in deep super resolution, Ruofan Zhou, Fayez Lahoud, Majed El Helou, and Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne (EPFL) (Switzerland)

11:00 COIMG-136
GAN based image deblurring using dark channel prior, Shuang Zhang, Ada Zhen, and Robert Stevenson, University of Notre Dame (United States)

In Situ 3D/4D Image Capture and Analysis

11:20 am – 12:40 pm
Harbour AB

11:20 COIMG-137
Height estimation of biomass sorghum in the field using LiDAR, Matthew Waliman and Avideh Zakhor, University of California, Berkeley (United States)

11:40 COIMG-138
In situ width estimation of biofuel plant stems, Arda Sahiner, Franklin Heng, Adith Balamurugan, and Avideh Zakhor, University of California, Berkeley (United States)

12:00 COIMG-139
Vision guided, hyperspectral imaging for standoff trace chemical detection (Invited), Raiyan Ishmam1, Ashish Neupane1, Shuchin Aeron1, Eric Miller1, Mark Witinski2, Christian Pfluegl2, Brandt Pein2, and Romain Blanchard2; 1Tufts University and 2Pendar Technologies (United States)

12:20 COIMG-140
Through the windshield driver recognition (Invited), David Cornett III, Grace Nayola, Diane Montez, Alec Yen, Christi Johnson, Seth Baird, Hector Santos-Villalobos, and David Bolme, Oak Ridge National Laboratory (United States)

12:40 – 2:00 pm Lunch

Tuesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises and the tremendous progress made recently toward the development of head-mounted displays (HMD) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. She will particularly focus on the recent progress, challenges, and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students shared a total of 8 "Best Paper" awards in various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday January 16, 2019

10:00 am – 3:30 pm Industry Exhibition

10:10 – 11:00 am Coffee Break

12:30 – 2:00 pm Lunch

Wednesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award (2010) for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break


Light Field Imaging and Display (Joint Session)
Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-706 Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science (2002) from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets (a digital data-driven visual appearance pipeline); light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision, awarded Dec. 2017, and an IEEE Fellow for contributions to foundations of computer graphics and computer vision, awarded Jan. 2017.

4:10 EISS-707 The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University (2005). Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and light fields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside new display technology, LEIA Inc. is developing Leia Loft™, a whole new canvas.

4:30 EISS-708 Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a distinguished engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University (2004), where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.

4:50 EISS-709 Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA, and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior to that, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Pulli holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, University of Oulu, and MIT.

5:10 EISS-710 Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and chief technical officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Hirsch received his bachelors from Tufts University in computer engineering, and his Masters and Doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an imaging engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Hirsch has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.

Computational Imaging XVII Interactive Posters Session
5:30 – 7:00 pm
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.

COIMG-141 Adaptive loss regression for flexible graph-based semi-supervised embedding, Fadi Dornaika and Youssof El Traboulsi, University of the Basque Country (Spain)

COIMG-142 An efficient motion correction method for frequency-domain images based on Fast Robust Correlation, Yuan Bian1, Stanley Reeves1, and Ronald Beyers2; 1Auburn University and 2Auburn University MRI Research Center (United States)
COIMG-143 Compton camera imaging with spherical movement, Kiwoon Kwon1 and Sungwhan Moon2; 1Dongguk University and 2Kyungpook National University (Republic of Korea)

Thursday January 17, 2019

Medical Imaging - Computational (Joint Session)
8:50 – 10:10 am
Grand Peninsula Ballroom A
This medical imaging session is jointly sponsored by: Computational Imaging XVII, Human Vision and Electronic Imaging 2019, and Imaging and Multimedia Analytics in a Web and Mobile World 2019.

8:50 IMAWM-145 Smart fetal care, Jane You1, Qin Li2, Qiaozhu Chen3, Zhenhua Guo4, and Hongbo Yang5; 1The Hong Kong Polytechnic University (Hong Kong), 2Shenzhen Institute of Information Technology (China), 3Guangzhou Women and Children Medical Center (China), 4Tsinghua University (China), and 5Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences (China)

9:10 COIMG-146 Self-contained, passive, non-contact, photoplethysmography: Real-time extraction of heart rates from live view within a Canon Powershot, Henry Dietz, Chadwick Parrish, and Kevin Donohue, University of Kentucky (United States)

9:30 COIMG-147 Edge-preserving total variation regularization for dual-energy CT images, Sandamali Devadithya and David Castañón, Boston University (United States)

9:50 COIMG-148 Fully automated dental panoramic radiograph by using internal mandible curves of dental volumetric CT, Sanghun Lee1, Seongyoun Woo1, Joonwoo Lee2, Jaejun Seo2, and Chulhee Lee1; 1Yonsei University and 2Dio Implant (Republic of Korea)

10:10 – 11:00 am Coffee Break

The Engineering Reality of Virtual Reality 2019

Conference Chairs and Program Committee: Margaret Dolinsky, Indiana University (United States), and Ian E. McDowall, Intuitive Surgical / Fakespace Labs (United States)

Conference overview
Virtual and augmented reality systems are evolving. In addition to research, the trend toward content building continues and practitioners find that technologies and disciplines must be tailored and integrated for specific visualization and interactive applications. This conference serves as a forum where advances and practical advice toward both creative activity and scientific investigation are presented and discussed. Research results can be presented and applications can be demonstrated.

Highlights
This year ERVR is expanding into joint sessions on Tuesday and Wednesday. On Tuesday ERVR is co-hosting the "Visualization Facilities" joint session with Stereoscopic Displays and Applications XXX. On Wednesday morning, ERVR is co-hosting the "360, 3D, and VR" session with Stereoscopic Displays and Applications XXX. Then on Wednesday afternoon, the ERVR program includes the Light Field Imaging and Display theme day symposium session.

On Thursday the core ERVR conference sessions kick off with sessions “Going Places with VR,” “Recognizing Experiences: Expanding VR,” and “Reaching Beyond: VR in Translation.” Finally, if you are following the EI 2019 medical imaging virtual track, note the final session in that track is the ERVR “3D Medical Imaging VR” session on Thursday afternoon.


THE ENGINEERING REALITY OF VIRTUAL REALITY 2019

Tuesday, January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

10:00 am – 7:00 pm Industry Exhibition

10:10 – 11:00 am Coffee Break

12:30 – 2:00 pm Lunch

Tuesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises and the tremendous progress made recently toward the development of head-mounted displays (HMD) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. She will particularly focus on the recent progress, challenges, and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students shared a total of 8 "Best Paper" awards in various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa, a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

Visualization Facilities (Joint Session)
Session Chairs: Margaret Dolinsky, Indiana University (United States) and Björn Sommer, University of Konstanz (Germany)
3:30 – 5:10 pm
Grand Peninsula Ballroom BC
This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

3:30 SD&A-641 Tiled stereoscopic 3D display wall – Concept, applications and evaluation, Björn Sommer, Alexandra Diehl, Karsten Klein, Philipp Meschenmoser, David Weber, Michael Aichem, Daniel Keim, and Falk Schreiber, University of Konstanz (Germany)

3:50 SD&A-642 The quality of stereo disparity in the polar regions of a stereo panorama, Daniel Sandin1,2, Haoyu Wang3, Alexander Guo1, Ahmad Atra1, Dick Ainsworth4, Maxine Brown3, and Tom DeFanti2; 1Electronic Visualization Lab (EVL), University of Illinois at Chicago, 2California Institute for Telecommunications and Information Technology (Calit2), University of California San Diego, 3University of Illinois at Chicago, and 4Ainsworth & Partners, Inc. (United States)

4:10 SD&A-644 Opening a 3-D museum - A case study of 3-D SPACE, Eric Kurland, 3-D SPACE (United States)

4:30 SD&A-645 State of the art of multi-user virtual reality display systems, Juan Munoz Arango, Dirk Reiners, and Carolina Cruz-Neira, University of Arkansas at Little Rock (United States)

4:50 SD&A-646 StarCAM - A 16K stereo panoramic video camera with a novel parallel interleaved arrangement of sensors, Dominique Meyer1, Daniel Sandin2, Christopher McFarland1, Eric Lo1, Gregory Dawe1, Haoyu Wang2, Ji Dai1, Maxine Brown2, Truong Nguyen1, Harlyn Baker3, Falko Kuester1, and Tom DeFanti1; 1University of California, San Diego, 2University of Illinois at Chicago, and 3EPIImaging, LLC (United States)

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday January 16, 2019

360, 3D, and VR (Joint Session)
Session Chairs: Neil Dodgson, Victoria University of Wellington (New Zealand) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
8:50 – 10:10 am
Grand Peninsula Ballroom BC
This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

8:50 SD&A-647 Enhanced head-mounted eye tracking data analysis using super-resolution, Qianwen Wan1, Aleksandra Kaszowska1, Karen Panetta1, Holly Taylor1, and Sos Agaian2; 1Tufts University and 2CUNY/ The College of Staten Island (United States)

9:10 SD&A-648 Effects of binocular parallax in 360-degree VR images on viewing behavior, Yoshihiro Banchi, Keisuke Yoshikawa, and Takashi Kawai, Waseda University (Japan)

9:30 SD&A-649 Visual quality in VR head mounted device: Lessons learned with StarVR headset, Bernard Mendiburu, Starbreeze (United States)


9:50 SD&A-650 Time course of sickness symptoms with HMD viewing of 360-degree videos (JIST-first), Jukka Häkkinen1, Fumiya Ohta2, and Takashi Kawai2; 1University of Helsinki (Finland) and 2Waseda University (Japan)

10:00 am – 3:30 pm Industry Exhibition

10:10 – 11:00 am Coffee Break

12:30 – 2:00 pm Lunch

Wednesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award (2010) for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break

Light Field Imaging and Display (Joint Session)
Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-706 Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science (2002) from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets (a digital data-driven visual appearance pipeline); light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision, awarded Dec. 2017, and an IEEE Fellow for contributions to foundations of computer graphics and computer vision, awarded Jan. 2017.

4:10 EISS-707 The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University (2005). Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and light fields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside new display technology, LEIA Inc. is developing Leia Loft™, a whole new canvas.

4:30 EISS-708 Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a distinguished engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University (2004), where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.


4:50 EISS-709 Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA, and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior to that, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Pulli holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, University of Oulu, and MIT.

5:10 EISS-710 Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and chief technical officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Hirsch received his bachelors from Tufts University in computer engineering, and his Masters and Doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an imaging engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Hirsch has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.

5:30 – 7:00 pm Symposium Interactive Papers (Poster) Session

Thursday January 17, 2019

Going Places with VR
Session Chair: Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
9:10 – 10:30 am
Grand Peninsula Ballroom BC

9:10 ERVR-175 ARFurniture: Augmented reality indoor decoration style colorization, Qianwen Wan1, Aleksandra Kaszowska1, Karen Panetta1, Holly Taylor1, and Sos Agaian2; 1Tufts University and 2CUNY/ The College of Staten Island (United States)

9:30 ERVR-176 Artificial intelligence agents for crowd simulation in an immersive environment for emergency response, Sharad Sharma1, Phillip Devreaux1, Jock Grynovicki2, David Scribner2, and Peter Grazaitis2; 1Bowie State University and 2Army Research Laboratory (United States)

9:50 ERVR-177 BinocularsVR – A VR experience for the exhibition "From Lake Konstanz to Africa, a long distance travel with ICARUS", Björn Sommer1, Stefan Feyer1, Daniel Klinkhammer1, Karsten Klein1, Jonathan Wieland1, Daniel Fink1, Moritz Skowronski1, Mate Nagy2, Martin Wikelski2, Harald Reiterer1, and Falk Schreiber1; 1University of Konstanz and 2Max Planck Institute for Ornithology (Germany)

10:10 ERVR-178 3D visualization of 2D/360° image and navigation in virtual reality through motion processing via smart phone sensors, Md. Ashraful Alam, Maliha Tasnim Aurini, and Shitab Mushfiq-ul Islam, BRAC University (Bangladesh)

10:30 – 10:50 am Coffee Break

Recognizing Experiences: Expanding VR
Session Chair: Margaret Dolinsky, Indiana University (United States)
10:50 am – 12:30 pm
Grand Peninsula Ballroom BC

10:50 ERVR-179 Overcoming limitations of the HoloLens for use in product assembly, Jack Miller, Melynda Hoover, and Eliot Winer, Iowa State University (United States)

11:10 ERVR-180 Both-hands motion recognition and reproduction characteristics in front/side/rear view, Tatsunosuke Ikeda, Mie University (Japan)

11:30 ERVR-181 Collaborative virtual reality environment for a real-time emergency evacuation of a nightclub disaster, Sharad Sharma1, Isaac Amo-Fempong1, David Scribner2, Jock Grynovicki2, and Peter Grazaitis2; 1Bowie State University and 2Army Research Laboratory (United States)

11:50 ERVR-182 PlayTIME: A tangible approach to designing digital experiences, Daniel Buckstein1, Michael Gharbharan2, and Andrew Hogue2; 1Champlain College (United States) and 2University of Ontario Institute of Technology (Canada)

12:10 ERVR-183 Augmented reality education system for developing countries, Md. Ashraful Alam, Intisar Hasnain Faiyaz, Sheakh Fahim Ahmmed Joy, Mehedi Hasan, and Ashikuzzaman Bhuiyan, BRAC University (Bangladesh)

12:30 – 1:30 pm Lunch

Reaching Beyond: VR in Translation
Session Chair: Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
2:00 – 3:20 pm
Grand Peninsula Ballroom BC

2:00 ERVR-184 Enhancing mobile VR immersion: A multimodal system of neural networks approach to an IMU Gesture Controller, Juan Niño1,2, Jocelyne Kiss1,2, Geoffrey Edwards1,2, Ernesto Morales1,2, Sherezada Ochoa1,2, and Bruno Bernier1; 1Laval University and 2Center for Interdisciplinary Research in Rehabilitation and Social Integration (Canada)

Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary. #EI2019 electronicimaging.org

The Engineering Reality of Virtual Reality 2019

2:20 ERVR-185 Augmented cross-modality: Translating the physiological responses, knowledge and impression to audio-visual information in virtual reality (JIST-first), Yutaro Hirao and Takashi Kawai, Waseda University (Japan)

2:40 ERVR-186 Real-time photo-realistic augmented reality under dynamic ambient lighting conditions, Kamran Alipour and Jürgen Schulze, University of California, San Diego (United States)

3:00 ERVR-187 AR in VR: Simulating augmented reality glasses for image fusion, Fayez Lahoud and Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne (EPFL) (Switzerland)

3:20 – 3:40 pm Coffee Break

3D Medical Imaging VR

Session Chair: Margaret Dolinsky, Indiana University (United States)
3:40 – 4:00 pm
Grand Peninsula Ballroom BC

3:40 ERVR-188 3D medical image segmentation in virtual reality, Shea Yonker, Oleksandr Korshak, Timothy Hedstrom, Alexander Wu, Siddharth Atre, and Jürgen Schulze, University of California, San Diego (United States)

Panel Discussion: The State of VR/AR Today

4:00 – 5:00 pm
Grand Peninsula Ballroom BC

This session is jointly sponsored by the EI Steering Committee.

Panel Moderator: Margaret Dolinsky, Indiana University (United States)

Human Vision and Electronic Imaging 2019

Conference Chairs: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)

Program Committee: Albert Ahumada, NASA Ames Research Center (United States); Kjell Brunnström, Acreo AB (Sweden); Claus-Christian Carbon, University of Bamberg (Germany); Scott Daly, Dolby Laboratories, Inc. (United States); Huib de Ridder, Technische Universiteit Delft (the Netherlands); Ulrich Engelke, Commonwealth Scientific and Industrial Research Organisation (Australia); Elena Fedorovskaya, Rochester Institute of Technology (United States); James Ferwerda, Rochester Institute of Technology (United States); Jennifer Gille, Oculus VR (United States); Sergio Goma, Qualcomm Technologies Inc. (United States); Hari Kalva, Florida Atlantic University (United States); Stanley Klein, University of California, Berkeley (United States); Patrick Le Callet, Université de Nantes (France); Lora Likova, Smith-Kettlewell Eye Research Institute (United States); Mónica López-González, La Petite Noiseuse Productions (United States); Laura McNamara, Sandia National Laboratories (United States); Thrasyvoulos Pappas, Northwestern University (United States); Adar Pelah, University of York (United Kingdom); Eliezer Peli, Schepens Eye Research Institute (United States); Sylvia Pont, Technische Universiteit Delft (the Netherlands); Judith Redi, Exact (the Netherlands); Hawley Rising, Consultant (United States); Bernice Rogowitz, Visual Perspectives (United States); Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne (Switzerland); Christopher Tyler, Smith-Kettlewell Eye Research Institute (United States); Andrew Watson, Apple Inc. (United States); and Michael Webster, University of Nevada, Reno (United States)

Conference overview
The conference on Human Vision and Electronic Imaging explores the role of human perception and cognition in the design, analysis, and use of electronic media systems. It brings together researchers, technologists and artists, from all over the world, for a rich and lively exchange of ideas. We believe that understanding the human observer is fundamental to the advancement of electronic media systems, and that advances in these systems and applications drive new research into the perception and cognition of the human observer. Every year, we introduce new topics through our Special Sessions, centered on areas driving innovation at the intersection of perception and emerging media technologies. The HVEI website (https://jbmulligan.github.io/HVEI/) includes additional information and updates.

Award
Best Paper Award

HVEI Events
Daily End-of-Day Discussions
Wednesday evening HVEI Banquet and Talk
Thursday afternoon sessions honoring Christopher Tyler

Conference Sponsors


HUMAN VISION AND ELECTRONIC IMAGING 2019

Monday, January 14, 2019

Human and Machine Perception of 3D Shapes

10:40 – 11:40 am
Grand Peninsula Ballroom A

HVEI-200 KEYNOTE: Human and machine perception of 3D shape from contour, James Elder, York University (Canada)

James Elder is a professor and York research chair in human and computer vision at York University, Toronto, Canada. He is jointly appointed to the department of psychology and the department of electrical engineering & computer science at York, and is a member of York's Centre for Vision Research (CVR) and Vision: Science to Applications (VISTA) program. He is also director of the NSERC CREATE Training Program in Data Analytics & Visualization (NSERC CREATE DAV) and principal investigator of the Intelligent Systems for Sustainable Urban Mobility (ISSUM) project. His research seeks to improve machine vision systems through a better understanding of visual processing in biological systems. Elder's current research is focused on natural scene statistics, perceptual organization, contour processing, shape perception, single-view 3D reconstruction, attentive vision systems and machine vision systems for dynamic 3D urban awareness.

Medical Imaging - Perception I

11:40 am – 12:00 pm
Grand Peninsula Ballroom A

This is the first of several medical imaging sessions throughout the week.

HVEI-225 Do different radiologists perceive medical images the same way? Some insights from Representational Similarity Analysis (Invited), Jay Hegde, Augusta University (United States)

Monday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning and Mapping. Shashua will describe the challenges and the kind of computer vision and machine learning algorithms involved, but will do that through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in the area of industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title for largest Israeli IPO ever, by raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

12:00 – 2:00 pm Lunch

3:00 – 3:30 pm Coffee Break


Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Material Appearance Perception (Joint Session)

Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States)
9:10 – 10:10 am
Grand Peninsula Ballroom D

This session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Material Appearance 2019.

9:10 MAAP-202 Material appearance: Ordering and clustering, Davit Gigilashvili, Jean-Baptiste Thomas, Marius Pedersen, and Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)

9:30 MAAP-203 A novel translucency classification for computer graphics, Morgane Gerardin1, Lionel Simonot2, Jean-Philippe Farrugia3, Jean-Claude Iehl3, Thierry Fournel4, and Mathieu Hebert4; 1Institut d'Optique Graduate School, 2Université de Poitiers, 3LIRIS, and 4Université Jean Monnet de Saint Etienne (France)

9:50 MAAP-204 Constructing glossiness perception model of computer graphics with sounds, Takumi Nakamura, Keita Hirai, and Takahiko Horiuchi, Chiba University (Japan)

10:00 am – 7:00 pm Industry Exhibition

10:10 – 10:50 am Coffee Break

Vision Potpourri: Eye Movements, Eyeballs & Colors

10:50 am – 12:10 pm
Grand Peninsula Ballroom A

10:50 HVEI-205 Object-based and multi-frame motion information predict human eye movement patterns during video viewing, Zheng Ma1, Jiaxin Wu2, Sheng-hua Zhong2, and Stephen Heinen1; 1Smith-Kettlewell Eye Research Institute (United States) and 2Shenzhen University (China)

11:10 HVEI-206 Discovery of activities via statistical clustering of fixation patterns, Jeffrey Mulligan, NASA Ames Research Center (United States)

11:30 HVEI-207 Investigation of the effect of pupil diameter on visual acuity using a neuro-physiological model of the human eye, Csilla Timár-Fülep and Gábor Erdei, Budapest University of Technology and Economics (Hungary)

11:50 HVEI-208 What is the opposite of blue?: The language of colour wheels (JPI-pending), Neil Dodgson, Victoria University of Wellington (New Zealand)

12:30 – 2:00 pm Lunch

Symmetry in Vision and Image Processing

3:30 – 4:30 pm
Grand Peninsula Ballroom A

HVEI-201 KEYNOTE: The role of symmetry in vision and image processing, Zygmunt Pizlo, University of California, Irvine (United States)

Professor Zygmunt Pizlo holds the Falmagne Endowed Chair in mathematical psychology in the department of cognitive sciences at University of California, Irvine. Pizlo received his MSc in electrical engineering (1978) from Politechnika, Warsaw, Poland, and PhD in electrical engineering (1982) from the Institute of Electron Technology, Warsaw, Poland. He then decided to pursue his interests in, and passion for, natural sciences. Having been already exposed to elements of AI, he became absolutely fascinated with the possibility of studying the human mind. In 1982, he started his research on human vision at the Nencki Institute of Experimental Biology in the Polish Academy of Sciences in Warsaw, delving into visual psychophysics as the most mature branch of experimental psychology. In 1988, he moved to the University of Maryland at College Park, where he received his PhD in psychology (1991); Bob Steinman and Azriel Rosenfeld were his advisers. He was a professor of psychological sciences at Purdue University for 26 years. In 2017, he moved to UC Irvine. Pizlo's research focuses on psychophysics and computational modeling of 3D shape perception. He authored and co-authored two books on shape (MIT Press, 2008 and Oxford University Press, 2014) and co-edited a book on shape perception in human and computer vision (Springer, 2013). His interest in vision research extends to depth, motion, figure-ground, color, eye movement, as well as image and video processing. He has also done work on human problem solving, where he adapted multiresolution/multiscale pyramids used in visual models to solve combinatorial optimization problems such as the Traveling Salesman Problem. Most recently, he has been exploring the role that symmetry and the least-action principle can play in a theoretical formalism that can explain perception and cognition.

End of Day Discussion

4:30 – 5:00 pm
Grand Peninsula Ballroom A

Moderators: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)

Please join us for a lively discussion of today's presentations. Participate in an interactive, moderated discussion, where key topics and questions are discussed from many perspectives, reflecting the diverse HVEI community.

5:00 – 6:00 pm All-Conference Welcome Reception


Tuesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises and the tremendous progress made recently toward the development of head-mounted displays (HMD) for both virtual and augmented reality displays. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. She will particularly focus on the recent progress, challenges and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students shared a total of 8 "Best Paper" awards in various IEEE, SPIE and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa in 2003, was a Beckman Research Fellow at the Beckman Institute of University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

Computational Models for Human Optics (Joint Session)

Session Chair: Jennifer Gille, Oculus VR (United States)
3:30 – 5:30 pm
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-704 Eye model implementation (Invited), Andrew Watson, Apple Inc. (United States)

Dr. Andrew Watson is the chief vision scientist at Apple Inc., where he specializes in vision science, psychophysics, display human factors, visual human factors, computational modeling of vision, and image and video compression. For thirty-four years prior to joining Apple, Dr. Watson was the senior scientist for vision research at NASA. Watson received his PhD in psychology from the University of Pennsylvania (1977) and followed that with post-doc work in vision at the University of Cambridge.

3:50 EISS-700 Wide field-of-view optical model of the human eye (Invited), James Polans, Verily Life Sciences (United States)

Dr. James Polans is an engineer who works on surgical robotics at Verily Life Sciences in South San Francisco. Polans received his PhD in biomedical engineering from Duke University under the mentorship of Joseph Izatt. His doctoral work explored the design and development of wide field-of-view optical coherence tomography systems for retinal imaging. He also has an MS in electrical engineering from the University of Illinois at Urbana-Champaign.

4:10 EISS-702 Evolution of the Arizona Eye Model (Invited), Jim Schwiegerling, University of Arizona (United States)

Prof. Jim Schwiegerling is a professor in the College of Optical Sciences at the University of Arizona. His research interests include the design of ophthalmic systems such as corneal topographers, ocular wavefront sensors and retinal imaging systems. In addition to these systems, Schwiegerling has designed a variety of multifocal intraocular and contact lenses and has expertise in diffractive and extended depth of focus systems.

4:30 EISS-705 Berkeley Eye Model (Invited), Brian Barsky, University of California, Berkeley (United States)

Prof. Brian Barsky is professor of computer science and affiliate professor of optometry and vision science at UC Berkeley. He attended McGill University, Montréal, received a DCS in engineering and a BSc in mathematics and computer science. He studied computer graphics and computer science at Cornell University, Ithaca, where he earned an MS. His PhD is in computer science from the University of Utah, Salt Lake City. He is a fellow of the American Academy of Optometry. His research interests include computer aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation.


4:50 EISS-701 Modeling retinal image formation for light field displays (Invited), Hekun Huang, Mohan Xu, and Hong Hua, University of Arizona (United States)

5:10 EISS-703 Ray-tracing 3D spectral scenes through human optics (Invited), Trisha Lian, Kevin MacKenzie, and Brian Wandell, Stanford University (United States)

Trisha Lian is an electrical engineering PhD student at Stanford University. Before Stanford, she received her bachelor's in biomedical engineering from Duke University. She is currently advised by Professor Brian Wandell and works on interdisciplinary topics that involve image systems simulations. These range from novel camera designs to simulations of the human visual system.

Wednesday January 16, 2019

Models of Perception

9:10 – 10:10 am
Grand Peninsula Ballroom A

9:10 HVEI-209 On the role of edge orientation in stereo vision, Alfredo Restrepo and Julian Quiroga, Pontificia Universidad Javeriana (Colombia)

9:30 HVEI-210 Neurocomputational lightness model explains the perception of real surfaces viewed under Gelb illumination, Michael Rudd, University of Washington (United States)

9:50 HVEI-211 Accelerated cue combination for multi-cue depth perception, Christopher Tyler, Smith-Kettlewell Eye Research Institute (United States)

10:00 am – 3:30 pm Industry Exhibition

10:10 – 10:50 am Coffee Break

Perceived Image Quality

10:50 am – 12:10 pm
Grand Peninsula Ballroom A

10:50 HVEI-212 A visual model for predicting chromatic banding artifacts, Gyorgy Denes, George Ash, and Rafal Mantiuk, University of Cambridge (United Kingdom)

11:10 HVEI-213 NARVAL: A no-reference video quality tool for real-time communications, Augustin Lemesle1,2, Alexis Marion1,3, Ludovic Roux1, and Alexandre Gouaillard1; 1CoSMo Software (Singapore), 2Centrale Supelec (France), and 3Centrale Marseille (France)

11:30 HVEI-214 An improved objective metric to predict image quality using deep neural networks, Pinar Akyazi and Touradj Ebrahimi, EPFL (Switzerland)

11:50 HVEI-215 Analyze and predict the perceptibility of UHD video contents, Steve Göring, Julian Zebelein, Simon Wedel, Dominik Keller, and Alexander Raake, Technische Universität Ilmenau (Germany)

12:30 – 2:00 pm Lunch

End of Day Discussion

5:30 – 6:00 pm
Grand Peninsula Ballroom A

Moderators: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)

Please join us for a lively discussion of today's presentations. Participate in an interactive, moderated discussion, where key topics and questions are discussed from many perspectives, reflecting the diverse HVEI community.

5:30 – 7:00 pm Symposium Demonstration Session


Wednesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights, as have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award (2010) for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break

Immersive QoE (Joint Session)

Session Chair: Stuart Perry, University of Technology Sydney (Australia)
3:30 – 5:10 pm
Grand Peninsula Ballroom A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Image Quality and System Performance XVI.

3:30 HVEI-216 Complexity measurement and characterization of 360-degree content, Francesca De Simone1, Jesús Gutiérrez2, and Patrick Le Callet2; 1CWI (the Netherlands) and 2Université de Nantes (France)

3:50 HVEI-217 Using 360 VR video to improve the learning experience in veterinary medicine university degree, Esther Guervós1, Jaime Jesús Ruiz2, Pablo Perez2, Juan Alberto Muñoz1, César Díaz3, and Narciso Garcia3; 1Universidad Alfonso X El Sabio, 2Nokia Bell Labs, and 3Universidad Politécnica de Madrid (Spain)

4:10 HVEI-218 Quality of Experience of visual-haptic interaction in a virtual reality simulator, Kjell Brunnström1,2, Elijs Dima2, Mattias Andersson2, Mårten Sjöström2, Tahir Qureshi3, and Mathias Johanson4; 1RISE Acreo AB, 2Mid Sweden University, 3HIAB AB, and 4Alkit Communications AB (Sweden)

4:30 HVEI-219 Impacts of internal HMD playback processing on subjective quality perception, Frank Hofmeyer, Stephan Fremerey, Thaden Cohrs, and Alexander Raake, Technische Universität Ilmenau (Germany)

4:50 IQSP-220 Are people pixel-peeping 360° videos?, Stephan Fremerey1, Rachel Huang2, and Alexander Raake1; 1Technische Universität Ilmenau (Germany) and 2Huawei Technologies Co., Ltd. (China)

End of Day Discussion

5:10 – 5:30 pm
Grand Peninsula Ballroom A

Moderators: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)

Please join us for a lively discussion of today's presentations. Participate in an interactive, moderated discussion, where key topics and questions are discussed from many perspectives, reflecting the diverse HVEI community.

5:30 – 7:00 pm Symposium Interactive Papers (Poster) Session


Thursday January 17, 2019

HVEI Banquet and Speaker: Dr. Jacqueline C. Snow

7:00 – 10:00 pm
Offsite - details provided on ticket

Join us for a wonderful evening of conversations, a banquet dinner, and an enlightening speaker. This banquet is associated with the Human Vision and Electronic Imaging Conference (HVEI), but everyone interested in research at the intersection of human perception/cognition, imaging technologies, and art is welcome. We'll convene over a family-style meal at a local Lebanese/Middle Eastern restaurant.

HVEI-221 KEYNOTE: 'WonkaVision' and the need for a paradigm shift in vision research, Jacqueline Snow, University of Nevada, Reno (United States)

Jacqueline Snow joined the cognitive and brain sciences group in the department of psychology at the University of Nevada, Reno in fall 2013. She completed her graduate training in clinical neuropsychology and cognitive neuroscience at the University of Melbourne, Australia, under the supervision of Professor Jason Mattingley. Snow completed two years of post-doctoral research in the United Kingdom working with Professor Glyn Humphreys of University of Birmingham. During this time, she developed a strong interest in functional magnetic resonance imaging (fMRI). She subsequently moved to Canada where she completed a further five years of post-doctoral research in the laboratories of Professors Jody Culham and Melvyn Goodale at the University of Western Ontario. During this time, she developed a range of special fMRI techniques to study how objects are represented in the human brain. Now an assistant professor at the University of Nevada, Reno, Snow teaches undergraduate psychology students about the theory and practice of science, and graduate student seminars in functional magnetic resonance imaging (fMRI) and clinical neuropsychology. She also heads a research laboratory that consists of four doctoral students and a group of Honors Program students and undergraduate trainees. Together, they examine how humans recognize and make decisions about objects. They are particularly interested in studying the behavioral significance of real-world 3-D objects that one can reach out and interact with, such as tools and snack foods, and how neural structures in the brain code and represent action-relevant information. Other research topics include how object information is integrated across sensory modalities, such as vision and touch. They use a range of methodological approaches, including fMRI, psychophysics and the study of neuropsychological patients with brain damage. The lab is supported by a pilot project grant from the Center of Biomedical Research Excellence (COBRE).

Medical Imaging - Computational (Joint Session)

8:50 – 10:10 am
Grand Peninsula Ballroom A

This medical imaging session is jointly sponsored by: Computational Imaging XVII, Human Vision and Electronic Imaging 2019, and Imaging and Multimedia Analytics in a Web and Mobile World 2019.

8:50 IMAWM-145 Smart fetal care, Jane You1, Qin Li2, Qiaozhu Chen3, Zhenhua Guo4, and Hongbo Yang5; 1The Hong Kong Polytechnic University (Hong Kong), 2Shenzhen Institute of Information Technology (China), 3Guangzhou Women and Children Medical Center (China), 4Tsinghua University (China), and 5Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences (China)

9:10 COIMG-146 Self-contained, passive, non-contact, photoplethysmography: Real-time extraction of heart rates from live view within a Canon Powershot, Henry Dietz, Chadwick Parrish, and Kevin Donohue, University of Kentucky (United States)

9:30 COIMG-147 Edge-preserving total variation regularization for dual-energy CT images, Sandamali Devadithya and David Castañón, Boston University (United States)

9:50 COIMG-148 Fully automated dental panoramic radiograph by using internal mandible curves of dental volumetric CT, Sanghun Lee1, Seongyoun Woo1, Joonwoo Lee2, Jaejun Seo2, and Chulhee Lee1; 1Yonsei University and 2Dio Implant (Republic of Korea)

10:10 – 10:50 am Coffee Break

Medical Imaging - Perception II (Joint Session)

Session Chair: Sos Agaian, CUNY/ The College of Staten Island (United States)
10:50 am – 12:10 pm
Grand Peninsula Ballroom A

This medical imaging session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Image Processing: Algorithms and Systems XVII.

10:50 IPAS-222 Specular reflection detection algorithm for endoscopic images, Viacheslav Voronin1, Evgeny Semenishchev1, and Sos Agaian2; 1Don State Technical University (Russian Federation) and 2CUNY/ The College of Staten Island (United States)

11:10 IPAS-223 Feedback alfa-rooting algorithm for medical image enhancement, Viacheslav Voronin1, Evgeny Semenishchev1, and Sos Agaian2; 1Don State Technical University (Russian Federation) and 2CUNY/ The College of Staten Island (United States)

11:30 HVEI-224 Observer classification images and efficiency in 2D and 3D search tasks (Invited), Craig Abbey, Miguel Lago, and Miguel Eckstein, University of California, Santa Barbara (United States)

Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary.

11:50 HVEI-226
Image recognition depends largely on variety (Invited), Tamara Haygood1, Christina Thomas2, Tara Sagebiel2, Diana Palacio2, Myrna Godoy2, and Karla Evans1; 1University of York (United Kingdom) and 2UT M.D. Anderson Cancer Center (United States)

12:10 – 1:30 pm Lunch

The Art of Science: The ‘Magic Eyes’ of Christopher Tyler, Part 1

Session Chairs: Lora Likova, Smith-Kettlewell Eye Research Institute (United States) and Jeffrey Mulligan, NASA Ames Research Center (United States)

1:50 – 3:20 pm
Grand Peninsula Ballroom A

While long-standing HVEI committee member, Christopher Tyler, shows no signs of retiring, his attainment of his 75th year, and his seminal contributions ranging from binocular vision and stereopsis through art and consciousness to brain mechanisms and brain imaging, are certainly deserving of recognition. With this Tylerfest session, we are honoring him, followed by a reception/discussion section (with refreshments), with a self-organized dinner outing afterwards. The focus for this session is on Bay Area collaborators and HVEI colleagues of the honoree, with each participant presenting exciting relevant work.

1:50 HVEI-227
Vision scientist Chris Tyler - An appreciation of his contributions, Gerald Westheimer, University of California, Berkeley (United States)

2:05 HVEI-228
Paradoxical, quasi-ideal, spatial summation in the modelfest data, Stanley Klein, University of California, Berkeley (United States)

2:20 HVEI-229
Modulate this! CWT measures the spatial sensitivity of higher-order quantities, Jeffrey Mulligan, NASA Ames Research Center (United States)

2:35 HVEI-230
“Trust the Psychophysics”. Applying Tyler’s precepts to computer vision, Lauren Barghout, University of California, Berkeley (United States)

2:50 HVEI-231
The notorious CWT: Adventures with Christopher Tyler, Mark McCourt, North Dakota State University (United States)

3:05 HVEI-232
A retrospective of our collaboration, Leonid Kontsevich, entrepreneur (United States)

3:20 – 3:45 pm Coffee Break

The Art of Science: The ‘Magic Eyes’ of Christopher Tyler, Part 2

Session Chairs: Lora Likova, Smith-Kettlewell Eye Research Institute (United States) and Jeffrey Mulligan, NASA Ames Research Center (United States)

3:50 – 5:35 pm
Grand Peninsula Ballroom A

While long-standing HVEI committee member, Christopher Tyler, shows no signs of retiring, his attainment of his 75th year, and his seminal contributions ranging from binocular vision and stereopsis through art and consciousness to brain mechanisms and brain imaging, are certainly deserving of recognition. With this Tylerfest session, we are honoring him, followed by a reception/discussion section (with refreshments), with a self-organized dinner outing afterwards. The focus for this session is on Bay Area collaborators and HVEI colleagues of the honoree, with each participant presenting exciting relevant work.

3:50 HVEI-233
Christopher Tyler through the looking glass...of HVEI, Bernice Rogowitz, Visual Perspectives and Columbia University (United States)

4:05 HVEI-234
The role of rigorous computer-aided image analysis in fine art authentication, David Stork, Rambus Labs (United States)

4:20 HVEI-235
Forty years of human stereopsis, Anthony Norcia, Stanford University (United States)

4:35 HVEI-236
Factors of the visual mind and brain: Normal individual differences in the spatiotemporal sensitivities of adults and infants, David Peterzell, John F. Kennedy University (United States)

4:50 HVEI-237
Quantum jump into the brain, Lora Likova, Smith-Kettlewell Eye Research Institute (United States)

5:05 HVEI-238
Explorations into the light and dark sides of the visual system, Hoover Chan, University of California, San Francisco (United States)

5:20 HVEI-239
Light, quanta and vision: A metaphysical evolution, Christopher Tyler, Smith-Kettlewell Eye Research Institute (United States)

HVEI Conference Wrap-up Discussion

5:45 – 6:30 pm
Grand Peninsula Ballroom A

Moderators: Damon Chandler, Shizuoka University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States)

Please join us for a lively discussion of today’s presentations. Participate in an interactive, moderated discussion, where key topics and questions are discussed from many perspectives, reflecting the diverse HVEI community.

Image Processing: Algorithms and Systems XVII

Conference Chairs: Sos S. Agaian, The University of Texas at San Antonio (United States); Karen O. Egiazarian, Tampere University of Technology (Finland); and Atanas P. Gotchev, Tampere University of Technology (Finland)

Program Committee: Gözde Bozdagi Akar, Middle East Technical University (Turkey); Junior Barrera, Universidad de São Paulo (Brazil); Jenny Benois-Pineau, Bordeaux University (France); Giacomo Boracchi, Politecnico di Milano (Italy); Reiner Creutzburg, Technische Hochschule Brandenburg (Germany); Alessandro Foi, Tampere University of Technology (Finland); Paul D. Gader, University of Florida (United States); John C. Handley, University of Rochester (United States); Vladimir V. Lukin, National Aerospace University (Ukraine); Vladimir Marchuk, Don State Technical University (Russian Federation); Alessandro Neri, Radiolabs (Italy); Marek R. Ogiela, AGH University of Science and Technology (Poland); Ljiljana Platisa, Universiteit Gent (Belgium); Françoise Prêteux, Ecole des Ponts ParisTech (France); Giovanni Ramponi, Università degli Studi di Trieste (Italy); Ivan W. Selesnick, Polytechnic Institute of New York University (United States); and Damir Sersic, University of Zagreb (Croatia)

Conference overview

Image Processing: Algorithms and Systems continues the tradition of the past conference Nonlinear Image Processing and Pattern Analysis in exploring new image processing algorithms. It also reverberates the growing call for integration of the theoretical research on image processing algorithms with the more applied research on image processing systems. Specifically, the conference aims at highlighting the importance of the interaction between linear, nonlinear, and transform-based approaches for creating sophisticated algorithms and building modern imaging systems for new and emerging applications.

Award
Best Paper


IMAGE PROCESSING: ALGORITHMS AND SYSTEMS XVII

Monday, January 14, 2019

Image Restoration I

Session Chairs: Karen Egiazarian, Tampere University of Technology (Finland) and Atanas Gotchev, Tampere University of Technology (Finland)

8:50 – 10:20 am
Regency C

8:50 IPAS-250
Additive spatially correlated noise suppression by robust block matching and adaptive 3D filtering (JIST-first), Oleksii Rubel1, Vladimir Lukin1, and Karen Egiazarian2; 1National Aerospace University (Ukraine) and 2Tampere University of Technology (Finland)

9:10 IPAS-251
A snowfall noise elimination using moving object compositing method adaptable to natural boundary, Yoshihiro Sato, Koya Kokubo, and Yue Bao, Tokyo City University (Japan)

9:30 IPAS-252
Image stitching by creating a virtual depth, Ahmed Eid, Brian Cooper, and Tomasz Cholewo, Lexmark (United States)

9:50 IPAS-253
Leveraging training data in computational image reconstruction (Invited), Davis Gilton1, Greg Ongie2, and Rebecca Willett2; 1University of Wisconsin, Madison and 2University of Chicago (United States)

10:20 – 10:40 am Coffee Break

Image Restoration II

Session Chairs: Sos Agaian, CUNY/ The College of Staten Island (United States) and Atanas Gotchev, Tampere University of Technology (Finland)

10:40 am – 12:10 pm
Regency C

10:40 IPAS-254
General Adaptive Neighborhood Image Processing (GANIP) (Invited), Johan Debayle, Ecole Nationale Supérieure des Mines (France)

11:10 IPAS-255
Gradient management and algebraic reconstruction for single image super resolution, Leandro Delfin1, Raul Pinto Elias1, and Humberto de Jesus Ochoa Dominguez2; 1CENIDET and 2Universidad Autónoma de Ciudad Juarez (Mexico)

11:30 IPAS-256
Patch-based image despeckling using low-rank Hankel matrix approach with speckle level estimation, Hansol Kim, Paul Oh, Sangyoon Lee, and Moon Gi Kang, Yonsei University (Republic of Korea)

11:50 IPAS-257
Enhanced guided image filter using trilateral kernel for disparity error correction, Yong-Jun Chang and Yo-Sung Ho, Gwangju Institute of Science and Technology (Republic of Korea)

Phase Imaging

Session Chairs: Sos Agaian, CUNY/ The College of Staten Island (United States) and Karen Egiazarian, Tampere University of Technology (Finland)

12:10 – 12:50 pm
Regency C

12:10 IPAS-258 Phase masks optimization for broadband diffractive imaging, Nikolay Ponomarenko, Vladimir Katkovnik, and Karen Egiazarian, Tampere University of Technology (Finland)

12:30 IPAS-259 Phase extraction from interferogram using machine learning, Daichi Kando and Satoshi Tomioka, Hokkaido University (Japan)

12:50 – 2:00 pm Lunch


Monday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer driven cars involve the latest cutting edge artificial intelligence algorithms along three major thrusts: Sensing, Planning and Mapping. Shashua will describe the challenges and the kind of computer vision and machine learning algorithms involved, but will do that through the perspective of Mobileye’s activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in the area of industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title for largest Israeli IPO ever, by raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

3:00 – 3:30 pm Coffee Break

Panel: Sensing and Perceiving for Autonomous Driving Joint Session

3:30 – 5:30 pm
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

Moderator: Dr. Wende Zhang, technical fellow, General Motors

Panelists:
Dr. Amnon Shashua, professor of computer science, Hebrew University; president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation
Dr. Boyd Fowler, CTO, OmniVision Technologies
Dr. Christoph Schroeder, head of autonomous driving N.A., Mercedes-Benz R&D Development North America, Inc.
Dr. Jun Pei, CEO and co-founder, Cepton Technologies Inc.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar and lidar. The panelists will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

5:00 – 6:00 pm All-Conference Welcome Reception

Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Image Quality

Session Chairs: Marco Carli, Università degli Studi Roma TRE (Italy) and Karen Egiazarian, Tampere University of Technology (Finland)

8:50 – 10:10 am
Regency C

8:50 IPAS-260
Combined no-reference IQA metric and its performance analysis (Invited), Oleg Ieremeiev1, Vladimir Lukin1, Nikolay Ponomarenko1,2, and Karen Egiazarian2; 1National Aerospace University (Ukraine) and 2Tampere University of Technology (Finland)

9:10 IPAS-261 Evaluating the effectiveness of image quality metrics in a light field scenario, Giuliano Arru, Marco Carli, and Federica Battisti, Università degli Studi Roma TRE (Italy)

9:30 IPAS-262 Parameter optimization in H.265 rate-distortion by single frame semantic scene analysis, Ahmed Hamza1, Abdelrahman Abdelazim2, and Djamel Ait-Boudaoud1; 1University of Portsmouth and 2Blackpool and the Fylde College (United Kingdom)

9:50 IPAS-263 Additional lossless compression of JPEG images based on BPG, Nikolay Ponomarenko1, Oleksandr Miroshnichenko2, Vladimir Lukin2, and Karen Egiazarian1; 1Tampere University of Technology (Finland) and 2National Aerospace University (Ukraine)

10:00 am – 7:00 pm Industry Exhibition

10:10 – 10:50 am Coffee Break

Object Recognition I

Session Chairs: Sos Agaian, CUNY/ The College of Staten Island (United States) and Atanas Gotchev, Tampere University of Technology (Finland)

10:50 am – 12:30 pm
Regency C

10:50 IPAS-264
Uncertainty quantification for semi-supervised multilabel classification in image processing and egomotion analysis from body worn cameras, Yiling Qiao, Chang Shi, Chenjian Wang, Hao Li, Matthew Haberland, Andrew Stuart, and Andrea Bertozzi; UCLA, Cal Poly San Luis Obispo, and California Institute of Technology (United States)

11:10 IPAS-265
On-street parked vehicle detection via view-normalized classifier, Wencheng Wu, University of Rochester (United States)

11:30 IPAS-266
Multi-class detection and orientation recognition of vessels in maritime surveillance, Amir Ghahremani, Yitian Kong, Egor Bondarev, and Peter de With, Eindhoven University of Technology (the Netherlands)

11:50 IPAS-267
Construction of facial emotion database through subjective experiments and its application to deep learning-based facial image processing, Tomoyuki Takanashi, Keita Hirai, and Takahiko Horiuchi, Chiba University (Japan)

12:10 IPAS-268
Improving person re-identification performance by customized dataset and better person detection, Herman Groot, Egor Bondarev, and Peter de With, Eindhoven University of Technology (the Netherlands)

12:30 – 2:00 pm Lunch

Tuesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises and the tremendous progress made recently toward the development of head-mounted displays (HMD) for both virtual and augmented reality displays. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. She will particularly focus on the recent progress, challenges and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging engineering in general. Hua’s current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF Career Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students shared a total of 8 “Best Paper” awards in various IEEE, SPIE and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa, was a Beckman Research Fellow at the Beckman Institute of University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break


Computational Models for Human Optics Joint Session

Session Chair: Jennifer Gille, Oculus VR (United States)

3:30 – 5:30 pm
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-704
Eye model implementation (Invited), Andrew Watson, Apple Inc. (United States)

Dr. Andrew Watson is the chief vision scientist at Apple Inc., where he specializes in vision science, psychophysics, display human factors, visual human factors, computational modeling of vision, and image and video compression. For thirty-four years prior to joining Apple, Dr. Watson was the senior scientist for vision research at NASA. Watson received his PhD in psychology from the University of Pennsylvania (1977) and followed that with post doc work in vision at the University of Cambridge.

3:50 EISS-700
Wide field-of-view optical model of the human eye (Invited), James Polans, Verily Life Sciences (United States)

Dr. James Polans is an engineer who works on surgical robotics at Verily Life Sciences in South San Francisco. Polans received his PhD in biomedical engineering from Duke University under the mentorship of Joseph Izatt. His doctoral work explored the design and development of wide field-of-view optical coherence tomography systems for retinal imaging. He also has a MS in electrical engineering from the University of Illinois at Urbana-Champaign.

4:10 EISS-702
Evolution of the Arizona Eye Model (Invited), Jim Schwiegerling, University of Arizona (United States)

Prof. Jim Schwiegerling is a professor in the College of Optical Sciences at the University of Arizona. His research interests include the design of ophthalmic systems such as corneal topographers, ocular wavefront sensors and retinal imaging systems. In addition to these systems, Schwiegerling has designed a variety of multifocal intraocular and contact lenses and has expertise in diffractive and extended depth of focus systems.

4:30 EISS-705
Berkeley Eye Model (Invited), Brian Barsky, University of California, Berkeley (United States)

Prof. Brian Barsky is professor of computer science and affiliate professor of optometry and vision science at UC Berkeley. He attended McGill University, Montréal, received a DCS in engineering and a BSc in mathematics and computer science. He studied computer graphics and computer science at Cornell University, Ithaca, where he earned an MS. His PhD is in computer science from the University of Utah, Salt Lake City. He is a fellow of the American Academy of Optometry. His research interests include computer aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation.

4:50 EISS-701
Modeling retinal image formation for light field displays (Invited), Hekun Huang, Mohan Xu, and Hong Hua, University of Arizona (United States)

Prof. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging and engineering in general. Hua’s current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF Career Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students shared a total of 8 “Best Paper” awards in various IEEE, SPIE and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa, was a Beckman research fellow at the Beckman Institute of University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

5:10 EISS-703
Ray-tracing 3D spectral scenes through human optics (Invited), Trisha Lian, Kevin MacKenzie, and Brian Wandell, Stanford University (United States)

Trisha Lian is an electrical engineering PhD student at Stanford University. Before Stanford, she received her bachelor’s in biomedical engineering from Duke University. She is currently advised by Professor Brian Wandell and works on interdisciplinary topics that involve image systems simulations. These range from novel camera designs to simulations of the human visual system.

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday January 16, 2019

10:00 am – 3:30 pm Industry Exhibition

10:10 – 11:00 am Coffee Break

12:30 – 2:00 pm Lunch

Wednesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind “Welcome to Light Fields”, the first downloadable virtual reality experience based on light field capture techniques which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems which use geodesic spheres of inward-pointing LED lights as have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of the Google VR Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec’s computer graphics research has been recognized with ACM SIGGRAPH’s first Significant New Researcher Award (2001) for “Creative and Innovative Work in the Field of Image-Based Modeling and Rendering”, a Scientific and Engineering Academy Award (2010) for “the design and engineering of the Light Stage capture devices and image-based facial rendering system developed for character relighting in motion pictures” with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine’s “Pixel Perfect: The Scientist Behind the Digital Cloning of Actors” article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break

Light Field Imaging and Display Joint Session

Session Chair: Gordon Wetzstein, Stanford University (United States)

3:30 – 5:30 pm
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-706
Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science (2002) from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision, awarded Dec. 2017, and an IEEE Fellow for contributions to foundations of computer graphics and vision, awarded Jan. 2017.

4:10 EISS-707
The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University (2005). Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and lightfields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside new display technology LEIA Inc. is developing Leia Loft™ — a whole new canvas.

4:30 EISS-708
Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a distinguished engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University (2004), where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro’s CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.


4:50 EISS-709
Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Pulli holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, University of Oulu, and MIT.

5:10 EISS-710
Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and chief technical officer of Lumii. He worked with Henry Holtzman’s Information Ecology Group and Ramesh Raskar’s Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Hirsch received his bachelors from Tufts University in computer engineering, and his Masters and Doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an imaging engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Hirsch has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.

Image Processing: Algorithms and Systems XVII Interactive Posters Session

Session Chairs: Federica Battisti, Università degli Studi di Roma Tre (Italy) and Viacheslav Voronin, Don State Technical University (Russian Federation)

5:30 – 7:00 pm
The Grove

IPAS-273
Illumination invariant NIR face recognition using directional visibility, Srijith Rajeev1, Shreyas Kamath1, Qianwen Wan1, Karen Panetta1, and Sos Agaian2; 1Tufts University and 2CUNY/ The College of Staten Island (United States)

IPAS-274
Microscope image matching in scope of multi-resolution observation system, Evan Eka Putranto1, Usuki Shin2, and Kenjiro Miura1; 1Shizuoka University and 2Research Institute of Electronics (Japan)

IPAS-275
Multi-frame super-resolution utilizing spatially adaptive regularization for ToF camera, Haegeun Lee, Jonghyun Kim, Jaeduk Han, and Moon Gi Kang, Yonsei University (Republic of Korea)

IPAS-276
Pixelwise JPEG compression detection and quality factor estimation based on convolutional neural network, Kazutaka Uchida1, Masayuki Tanaka2,1, and Masatoshi Okutomi1; 1Tokyo Institute of Technology and 2National Institute of Advanced Industrial Science and Technology (Japan)

IPAS-277
The quaternion-based anisotropic gradient for the color images, Viacheslav Voronin1, Vladimir Frants2, and Sos Agaian3; 1Don State Technical University (Russian Federation), 2Moscow State University of Technology “STANKIN” (Russian Federation), and 3CUNY/ The College of Staten Island (United States)

Thursday January 17, 2019

Imaging Systems Joint Session

Session Chairs: Atanas Gotchev, Tampere University of Technology (Finland) and Michael Kriss, MAK Consultants (United States)

8:50 – 10:10 am
Regency B

This session is jointly sponsored by: Image Processing: Algorithms and Systems XVII, and Photography, Mobile, and Immersive Imaging 2019.

8:50 PMII-278
EDICT: Embedded and distributed intelligent capture technology (Invited), Scott Campbell, Timothy Macmillan, and Katsuri Rangam, Area4 Professional Design Services (United States)
IPAS and Doctorate from the MIT Media Lab. Between degrees, he worked at Thursday January 17, 2019 Analogic Corp. as an imaging engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. Imaging Systems Joint Session His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Hirsch has also taught Session Chairs: Atanas Gotchev, Tampere University of Technology courses at SIGGRAPH on a range of subjects in computational imaging (Finland) and Michael Kriss, MAK Consultants (United States) and display, with a focus on DIY. 8:50 – 10:10 am Regency B This session is jointly sponsored by: Image Processing: Algorithms and Image Processing: Algorithms and Systems XVII Interactive Posters Session Systems XVII, and Photography, Mobile, and Immersive Imaging 2019. Session Chairs: Federica Battisti, Università degli Studi di Roma Tre (Italy) and Viacheslav Voronin, Don State Technical University (Russian Federation) 8:50 PMII-278 EDICT: Embedded and distributed intelligent capture technology 5:30 – 7:00 pm (Invited), Scott Campbell, Timothy Macmillan, and Katsuri Rangam, Area4 The Grove Professional Design Services (United States)

The following works will be presented at the EI 2019 Symposium 9:10 IPAS-279 Interactive Papers Session. Modeling lens optics and rendering virtual views from fisheye imagery, IPAS-269 Filipe Gama, Mihail Georgiev, and Atanas Gotchev, Tampere University of Background subtraction using Multi-Channel Fused Lasso, Xin Liu and Technology (Finland) Guoying Zhao, University of Oulu (Finland) 9:30 PMII-280 IPAS-270 Digital distortion correction to measure spatial resolution from cameras 1 2 1 Depth from stacked light field images using generative adversarial with wide-angle lenses, Brian Rodricks and Yi Zhang ; SensorSpace, 2 network, Ji-Hun Mun and Yo-Sung Ho, Gwangju Institute of Science and LLC and Facebook Inc. (United States) Technology (GIST) (Republic of Korea) 9:50 IPAS-281 IPAS-271 LiDAR assisted large-scale privacy protection in street view cycloramas, 1 2 1 1 Depth-based saliency estimation for omnidirectional images, Federica Clint Sebastian , Bas Boom , Egor Bondarev , and Peter de With ; 1 2 Battisti and Marco Carli, Università degli Studi Roma TRE (Italy) Eindhoven University of Technology and CycloMedia Technology B.V. (the Netherlands) IPAS-272 Driver drowsiness detection in facial images, Fadi Dornaika, Jorge Reta, 10:10 – 10:50 am Coffee Break Ignacio Arganda-Carreras, and Abdelmalik Moujahid, University of the Basque Country (Spain)

Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary.

Medical Imaging - Perception II Joint Session

Session Chair: Sos Agaian, CUNY/ The College of Staten Island (United States)
10:50 am – 12:10 pm
Grand Peninsula Ballroom A

This medical imaging session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Image Processing: Algorithms and Systems XVII.

10:50 IPAS-222
Specular reflection detection algorithm for endoscopic images, Viacheslav Voronin1, Evgeny Semenishchev1, and Sos Agaian2; 1Don State Technical University (Russian Federation) and 2CUNY/ The College of Staten Island (United States)

11:10 IPAS-223
Feedback alfa-rooting algorithm for medical image enhancement, Viacheslav Voronin1, Evgeny Semenishchev1, and Sos Agaian2; 1Don State Technical University (Russian Federation) and 2CUNY/ The College of Staten Island (United States)

11:30 HVEI-226
Observer classification images and efficiency in 2D and 3D search tasks (Invited), Craig Abbey, Miguel Lago, and Miguel Eckstein, University of California, Santa Barbara (United States)

11:50 HVEI-224
Image recognition depends largely on variety (Invited), Tamara Haygood1, Christina Thomas1, Tara Sagebiel1, Diana Palacio1, Myrna Godoy1, and Karla Evans2; 1UT M.D. Anderson Cancer Center (United States) and 2University of York (United Kingdom)

Image Quality and System Performance XVI

Conference Chairs: Nicolas Bonnier, Apple Inc. (United States); and Stuart Perry, University of Technology Sydney (Australia)

Program Committee: Alan Bovik, University of Texas at Austin (United States); Peter Burns, Burns Digital Imaging (United States); Brian Cooper, Lexmark International, Inc. (United States); Luke Cui, Amazon (United States); Mylène Farias, University of Brasilia (Brazil); Susan Farnand, Rochester Institute of Technology (United States); Frans Gaykema, Océ Technologies B.V. (the Netherlands); Jukka Häkkinen, University of Helsinki (Finland); Dirk Hertel, E Ink Corporation (United States); Robin Jenkin, NVIDIA Corporation (United States); Elaine Jin, NVIDIA Corporation (United States); Mohamed-Chaker Larabi, University of Poitiers (France); Göte Nyman, University of Helsinki (Finland); Jonathan Phillips, Google Inc. (United States); Sophie Triantaphillidou, University of Westminster (United Kingdom); and Clément Viard, DxOMark Image Labs (United States)

Conference overview

We live in a visual world. The perceived quality of images is of crucial importance in industrial, medical, and entertainment application environments. Developments in camera sensors, image processing, 3D imaging, display technology, and digital printing are enabling new or enhanced possibilities for creating and conveying visual content that informs or entertains. Wireless networks and mobile devices expand the ways to share imagery, and autonomous vehicles bring image processing into new aspects of society.

The power of imaging rests directly on the visual quality of the images and the performance of the systems that produce them. As the images are generally intended to be viewed by humans, a deep understanding of human visual perception is key to the effective assessment of image quality.

This conference brings together engineers and scientists from industry and academia who strive to understand what makes a high-quality image, and how to specify the requirements and assess the performance of modern imaging systems. It focuses on objective and subjective methods for evaluating the perceptual quality of images, and includes applications throughout the imaging chain from image capture, through processing, to output, printed or displayed, video or still, 2D or 3D, virtual, mixed or augmented reality, LDR or HDR.

Awards: Best Paper; Best Student Paper
Conference Sponsors


IMAGE QUALITY AND SYSTEM PERFORMANCE XVI

Monday, January 14, 2019

Automotive Image Quality Joint Session

Session Chairs: Patrick Denny, Valeo (Ireland); Stuart Perry, University of Technology Sydney (Australia); and Peter van Beek, Intel Corporation (United States)
8:50 – 10:10 am
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, and Image Quality and System Performance XVI.

8:50 AVM-026
Updates on the progress of IEEE P2020 Automotive Imaging Standards Working Group, Robin Jenkin, NVIDIA Corporation (United States)

9:10 AVM-027
Signal detection theory and automotive imaging, Paul Kane, ON Semiconductor (United States)

9:30 AVM-029
Digital camera characterisation for autonomous vehicles applications, Paola Iacomussi and Giuseppe Rossi, INRIM (Italy)

9:50 AVM-030
Contrast detection probability - Implementation and use cases, Uwe Artmann1, Marc Geese2, and Max Gäde1; 1Image Engineering GmbH & Co KG and 2Robert Bosch GmbH (Germany)

10:10 – 11:00 am Coffee Break
12:30 – 2:00 pm Lunch

Monday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning, and Mapping. Shashua will describe the challenges and the kind of computer vision and machine learning algorithms involved, but will do that through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in the area of industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title for largest Israeli IPO ever, by raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

3:00 – 3:30 pm Coffee Break

Image Quality and System Performance XVI

Printing System Performance

Session Chair: Mylène Farias, University of Brasilia (Brazil)
3:30 – 4:50 pm
Grand Peninsula Ballroom E

3:30 IQSP-300
Detection of streaks on printed pages, Runzhe Zhang1, Eric Maggard2, Renee Jessome2, Yousun Bang2, Minki Cho2, and Jan Allebach1; 1Purdue University and 2HP, Inc. (United States)

3:50 IQSP-301
Segmentation-based detection of local defects on printed pages, Qiulin Chen1, Eric Maggard2, Renee Jessome2, Yousun Bang3, Minki Cho3, and Jan Allebach1; 1Purdue University (United States), 2HP, Inc. (United States), and 3HP-Korea (Republic of Korea)

4:10 IQSP-302
Banding estimation for print quality, Wan-Eih Huang1, Eric Maggard2, Renee Jessome2, Yousun Bang2, Minki Cho2, and Jan Allebach1; 1Purdue University and 2HP, Inc. (United States)

4:30 IQSP-303
Blockwise detection of local defects on printed pages, Xiaoyu Xiang1, Eric Maggard2, Renee Jessome2, Yousun Bang2, Minki Cho2, and Jan Allebach1; 1Purdue University and 2HP, Inc. (United States)

5:00 – 6:00 pm All-Conference Welcome Reception

Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Image Quality Modeling I

Session Chair: Stuart Perry, University of Technology Sydney (Australia)
8:50 – 9:30 am
Grand Peninsula Ballroom E

8:50 IQSP-304
A referenceless image quality assessment based on BSIF, CLBP, LPQ, and LCP texture descriptors, Pedro Garcia Freitas, Luisa Eira, Samuel Santos, and Mylène Farias, University of Brasilia (Brazil)

9:10 IQSP-305
Compensating MTF measurements for chart quality limitations, Norman Koren, Imatest LLC (United States)

Image Quality Modeling II

Session Chair: Stuart Perry, University of Technology Sydney (Australia)
9:30 – 10:10 am
Grand Peninsula Ballroom E

IQSP-306
KEYNOTE: Conscious of streaming (Quality), Alan Bovik, The University of Texas at Austin (United States)

Alan Bovik is the Cockrell Family Regents Endowed Chair professor at The University of Texas at Austin. He has received many major international awards, including the 2019 IEEE Fourier Award, the 2017 Edwin H. Land Medal from IS&T/OSA, the 2015 Primetime Emmy Award for Outstanding Achievement in Engineering Development from the Academy of Television Arts and Sciences, and the ‘Society’ and ‘Sustained Impact’ Awards of the IEEE Signal Processing Society. He is a Fellow of IEEE, OSA, and SPIE. His books include The Handbook of Image and Video Processing, Modern Image Quality Assessment, and The Essential Guides to Image and Video Processing. Bovik co-founded and was the longest-serving editor-in-chief of the IEEE Transactions on Image Processing and created the IEEE International Conference on Image Processing in Austin, Texas, in November 1994.

10:00 am – 7:00 pm Industry Exhibition

10:10 – 10:50 am Coffee Break

Display Performance

Session Chair: Nicolas Bonnier, Apple Inc. (United States)
10:50 am – 12:30 pm
Grand Peninsula Ballroom E

10:50 IQSP-307
Combining quality metrics using machine learning for improved and robust HDR image quality assessment, Anustup Choudhury and Scott Daly, Dolby Laboratories, Inc. (United States)

11:10 IQSP-308
Subjective evaluations on perceptual image brightness in high dynamic range television, Yoshitaka Ikeda and Yuichi Kusakabe, NHK (Japan Broadcasting Corporation) (Japan)

11:30 IQSP-309
Image quality evaluation on an HDR OLED display, Dalin Tian, Lihao Xu, and Ming Ronnier Luo, Zhejiang University (China)

11:50 IQSP-310
A comprehensive framework for visual quality assessment of light field tensor displays, Irene Viola1, Keita Takahashi2, Toshiaki Fujii2, and Touradj Ebrahimi1; 1École Polytechnique Fédérale de Lausanne (EPFL) (Switzerland) and 2Nagoya University (Japan)

12:10 IQSP-311
Semantic label bias in subjective video quality evaluation: A standardization perspective, Mihai Mitrea1, Rania Bensaied1, and Patrick Le Callet2; 1Institut Mines-Telecom and 2Université de Nantes (France)

12:30 – 2:00 pm Lunch

Tuesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises and the tremendous progress made recently toward the development of head-mounted displays (HMD) for both virtual and augmented reality displays. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. She will particularly focus on the recent progress, challenges and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF Career Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Hua and her students shared a total of 8 “Best Paper” awards in various IEEE, SPIE and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa in 2003, was a Beckman Research Fellow at the Beckman Institute of University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

Special Session on Image Quality in Standardization

Session Chair: Jonathan Phillips, Google Inc. (United States)
3:30 – 4:50 pm
Grand Peninsula Ballroom E

3:30 IQSP-312
Study of subjective and objective quality evaluation of 3D point cloud data by the JPEG Committee, Stuart Perry1, Antonio Pinheiro2, Emil Dumic3, and Luis Cruz4; 1University of Technology Sydney (Australia), 2University of Beira Interior (Portugal), 3University North (Croatia), and 4University of Coimbra (Portugal)

3:50 IQSP-313
Reducing the cross-lab variation of image quality metrics, Henry Koren1 and Benjamin Tseng2; 1Imatest LLC and 2Apkudo (United States)

4:10 IQSP-314
Adaptive video streaming with current codecs and formats: Extensions to parametric video quality model ITU-T P.1203, Rakesh Rao Ramachandra Rao1, Steve Göring1, Patrick Vogel1, Nicolas Pachatz1, Juan Jose Villamar Villarreal1, Werner Robitza1, Peter List2, Bernhard Feiten2, and Alexander Raake1; 1TU Ilmenau and 2Deutsche Telekom (Germany)

4:30 IQSP-315
Visual noise revision for ISO 15739, Dietmar Wueller1, Akira Matsui2, and Naoyah Kato2; 1Image Engineering GmbH & Co. KG (Germany) and 2Sony (Japan)

Panel: IQSP of the Future

Panel Moderator: Stuart Perry, University of Technology Sydney (Australia)
Panelists: Nicolas Bonnier, Apple Inc. (United States); Peter Burns, Burns Digital Imaging (United States); Mylène Farias, University of Brasilia (Brazil); and Elaine Jin, NVIDIA Corporation (United States)
4:50 – 5:30 pm
Grand Peninsula Ballroom E

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday January 16, 2019

Camera Image Quality I

Session Chair: Peter Burns, Burns Digital Imaging (United States)
8:50 – 9:30 am
Grand Peninsula Ballroom E

8:50 IQSP-316
Multivariate statistical modeling for image quality prediction, Praful Gupta1, Christos Bampis2, Jack Glover3, Nicholas Paulter3, and Alan Bovik1; 1The University of Texas at Austin, 2Netflix Inc., and 3National Institute of Standards and Technology (United States)

9:10 IQSP-317
Image quality assessment using computer vision, Zhi Li1, Palghat Ramesh2, and Chu-heng Liu3; 1Purdue University, 2Palo Alto Research Center, and 3Xerox Corporation (United States)

Image Quality and System Performance XVI

Camera Image Quality II

Session Chair: Peter Burns, Burns Digital Imaging (United States)
9:30 – 10:10 am
Grand Peninsula Ballroom E

IQSP-318
KEYNOTE: Benchmarking image quality for billions of images, Jonathan Phillips, Google Inc. (United States)

Jonathan Phillips is co-author of Camera Image Quality Benchmarking, a 2018 addition to the Wiley-IS&T Series in Imaging Science and Technology collection. His experience in the imaging industry spans nearly 30 years, having worked at Kodak in both chemical and electronic photography for more than 20 years, followed by image scientist positions with NVIDIA and Google. Currently, he is managing a color science team at Google responsible for the display color of the Pixel phone product line. He was awarded the International Imaging Industry Association (I3A) Achievement Award for his groundbreaking work on modeling consumer-facing camera phone image quality, which is now incorporated into the IEEE Standard for Camera Phone Image Quality. Phillips has been project lead for numerous photography standards published by I3A, IEEE, and ISO. His graduate studies were in color science at Rochester Institute of Technology and his undergraduate studies were in chemistry and music at Wheaton College (IL).

10:00 am – 3:30 pm Industry Exhibition

10:10 – 10:40 am Coffee Break

Video Quality

Session Chair: Elaine Jin, NVIDIA Corporation (United States)
10:40 am – 12:40 pm
Grand Peninsula Ballroom E

10:40 IQSP-319
Best practices for imaging system MTF measurement, David Haefner, Stephen Burks, Josh Doe, and Bradley Preece, NVESD (United States)

11:00 IQSP-320
Quantify aliasing – A new approach to make resolution measurement more robust, Uwe Artmann, Image Engineering GmbH & Co. KG (Germany)

11:20 IQSP-321
Subjective analysis of an end-to-end streaming system, Christos Bampis1, Zhi Li1, Ioannis Katsavounidis2, Te-Yuan Huang1, Chaitanya Ekanadham1, and Alan Bovik3; 1Netflix Inc., 2Facebook, Inc., and 3The University of Texas at Austin (United States)

11:40 IQSP-322
Saliency-based perceptual quantization method for HDR video quality enhancement, Naty Sidaty, Wassim Hamidouche, Yi Liu, and Olivier Deforges, IETR/INSA (France)

12:00 IQSP-323
Subjective and objective quality assessment for volumetric video compression, Emin Zerman, Pan Gao, Cagri Ozcinar, and Aljosa Smolic, Trinity College Dublin (Ireland)

12:20 IQSP-324
Analyzing the influence of cross-modal IP-based degradations on the perceived audio-visual quality, Helard Becerra and Mylène Farias, University of Brasilia (Brazil)

12:40 – 2:00 pm Lunch

Wednesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind “Welcome to Light Fields”, the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for “Creative and Innovative Work in the Field of Image-Based Modeling and Rendering”, a Scientific and Engineering Academy Award (2010) for “the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures” with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's “Pixel Perfect: The Scientist Behind the Digital Cloning of Actors” article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break

Immersive QoE Joint Session

Session Chair: Stuart Perry, University of Technology Sydney (Australia)
3:30 – 5:10 pm
Grand Peninsula Ballroom A

This session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Image Quality and System Performance XVI.

3:30 HVEI-216
Complexity measurement and characterization of 360-degree content, Francesca De Simone1, Jesús Gutiérrez2, and Patrick Le Callet2; 1CWI (the Netherlands) and 2Université de Nantes (France)

3:50 HVEI-217
Using 360 VR video to improve the learning experience in veterinary medicine university degree, Esther Guervós1, Juan Alberto Muñoz2, César Díaz3, Jaime Jesús Ruiz2, Pablo Perez2, and Narciso Garcia3; 1Universidad Alfonso X El Sabio, 2Nokia Bell Labs, and 3Universidad Politécnica de Madrid (Spain)

4:10 HVEI-218
Quality of Experience of visual-haptic interaction in a virtual reality simulator, Kjell Brunnström1,2, Elijs Dima2, Mattias Andersson2, Mårten Sjöström2, Tahir Qureshi3, and Mathias Johanson4; 1RISE Acreo AB, 2Mid Sweden University, 3HIAB AB, and 4Alkit Communications AB (Sweden)

4:30 HVEI-219
Impacts of internal HMD playback processing on subjective quality perception, Frank Hofmeyer, Stephan Fremerey, Thaden Cohrs, and Alexander Raake, Technische Universität Ilmenau (Germany)

4:50 IQSP-220
Are people pixel-peeping 360° videos?, Stephan Fremerey1, Rachel Huang2, and Alexander Raake1; 1Technische Universität Ilmenau (Germany) and 2Huawei Technologies Co., Ltd. (China)

Image Quality and System Performance XVI Interactive Posters Session

5:30 – 7:00 pm
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.

IQSP-325
An examination of the effects of noise level on methods to determine curvature in range images, Jacob Hauenstein and Timothy Newman, The University of Alabama in Huntsville (United States)

IQSP-326
The characterization of an HDR OLED display, Dalin Tian, Lihao Xu, and Ming Ronnier Luo, Zhejiang University (China)

IQSP-327
Understanding fashion aesthetics: Training a neural network based predictor using likes and dislikes, Rachel Bilbo, Kendal Norman, Zhi Li, and Jan Allebach, Purdue University (United States)


Image Sensors and Imaging Systems 2019

Conference Chairs: Arnaud Darmont, APHESA SPRL (Belgium); Arnaud Peizerat, Commissariat à l'Énergie Atomique (France); and Ralf Widenhorn, Portland State University (United States)

Program Committee: Nick Bulitka, Lumenera Corp. (Canada); Calvin Chao, Taiwan Semiconductor Manufacturing Company (Taiwan); Glenn H. Chapman, Simon Fraser University (Canada); Tobi Delbrück, Institute of Neuroinformatics, University of Zurich and ETH Zurich (Switzerland); James A. DiBella, Imperx (United States); Antoine Dupret, Commissariat à l'Énergie Atomique (France); Boyd A. Fowler, OmniVision Technologies, Inc. (United States); Eiichi Funatsu, OmniVision Technologies, Inc. (United States); Rihito Kuroda, Tohoku University (Japan); Kevin J. Matherson, Microsoft Corp. (United States); Min-Woong Seo, Samsung Electronics, Semiconductor R&D Center (Republic of Korea); Gilles Sicard, Commissariat à l'Énergie Atomique (France); Nobukazu Teranishi, University of Hyogo (Japan); Jean-Michel Tualle, University Paris 13 (France); Orly Yadid-Pecht, University of Calgary (Canada); and Xinyang Wang, GPIXEL (China)

Conference overview

Solid state optical sensors and solid state cameras have established themselves as the imaging systems of choice for many demanding professional applications, such as scientific and industrial applications. The advantages of low power, low noise, high resolution, high geometric fidelity, broad spectral sensitivity, and extremely high quantum efficiency have led to a number of revolutionary uses.

This conference aims to be a place of exchange and to give the opportunity for quick publication of new work in the areas of solid state detectors, solid state cameras, new optical concepts, and novel applications. To encourage young talent, a best student paper contest is organized.

Award (jointly with the PMII conference)
Arnaud Darmont Best Paper Award*

*The Arnaud Darmont Best Paper Award is given in recognition of IMSE Conference Chair Arnaud Darmont, who passed away unexpectedly in September 2018. Arnaud dedicated his professional life to the computer vision industry. After completing his degree in electronic engineering from the University of Liège in Belgium (2002), he launched his career in the field of CMOS image sensors and high dynamic range imaging, founding APHESA in 2008. He was fiercely dedicated to disseminating knowledge about sensors, computer vision, and custom electronics design of imaging devices, as witnessed by his years of teaching courses at the Electronic Imaging Symposium and Photonics West Conference, as well as his authorship of several publications. At the time of his death, Arnaud was in the final stages of revising the second edition of “High Dynamic Range Imaging – Sensors and Architectures”, first published in 2013. An active member of the EMVA 1288 standardization group, he was also the standards manager for the organization, where he oversaw the development of EMVA standards and fostered cooperation with other imaging associations worldwide on the development and the dissemination of vision standards. His dedication, knowledge, and boundless energy will be missed by the IS&T and Electronic Imaging communities.

Conference Sponsor


IMAGE SENSORS AND IMAGING SYSTEMS 2019

Monday, January 14, 2019 Panel: Sensing and Perceiving for Autonomous Driving Joint Session 10:10 – 11:00 am Coffee Break 3:30 – 5:30 pm 12:30 – 2:00 pm Lunch Grand Peninsula Ballroom D This session is jointly sponsored by the EI Steering Committee.

Monday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning and Mapping. Shashua will describe the challenges and the kind of computer vision and machine learning algorithms involved, but will do that through the perspective of Mobileye’s activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in the area of industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title for largest Israeli IPO ever, by raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

3:00 – 3:30 pm Coffee Break

Panel: Sensing and Perceiving for Autonomous Driving
Moderator: Dr. Wende Zhang, technical fellow, General Motors
Panelists:
Dr. Amnon Shashua, professor of computer science, Hebrew University; president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation
Dr. Boyd Fowler, CTO, OmniVision Technologies
Dr. Christoph Schroeder, head of autonomous driving N.A., Mercedes-Benz R&D Development North America, Inc.
Dr. Jun Pei, CEO and co-founder, Cepton Technologies Inc.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar and lidar. The panelists will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

5:00 – 6:00 pm All-Conference Welcome Reception

Wednesday January 16, 2019

Medical Imaging - Camera Systems (Joint Session)
Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Ralf Widenhorn, Portland State University (United States)
8:50 – 10:30 am
Grand Peninsula Ballroom D

This medical imaging session is jointly sponsored by: Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

8:50 PMII-350 Plenoptic medical cameras (Invited), Liang Gao, University of Illinois Urbana-Champaign (United States)

9:10 PMII-351 Simulating a multispectral imaging system for oral cancer screening (Invited), Joyce Farrell, Stanford University (United States)

9:30 PMII-352 Imaging the body with miniature cameras, towards portable healthcare (Invited), Ofer Levi, University of Toronto (Canada)

9:50 PMII-353 Self-calibrated surface acquisition for integrated positioning verification in medical applications, Sven Jörissen1, Michael Bleier2, and Andreas Nüchter1; 1University of Wuerzburg and 2Zentrum für Telematik e.V. (Germany)

electronicimaging.org #EI2019 Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary. 83 Image Sensors and Imaging Systems 2019

10:10 IMSE-354 Measurement and suppression of multipath effect in time-of-flight depth imaging for endoscopic applications, Ryota Miyagi1, Yuta Murakami1, Keiichiro Kagawa1, Hajime Nagahara2, Kenji Kawashima3, Keita Yasutomi1, and Shoji Kawahito1; 1Shizuoka University, 2Osaka University, and 3Tokyo Medical and Dental University (Japan)

10:00 am – 3:30 pm Industry Exhibition

10:10 – 10:50 am Coffee Break

Automotive Image Sensing II (Joint Session)
Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
12:10 – 12:50 pm
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

Automotive Image Sensing I (Joint Session)
Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
10:50 am – 12:10 pm
Grand Peninsula Ballroom D

This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

10:50 IMSE-050 KEYNOTE: Recent trends in the image sensing technologies, Vladimir Koifman, Analog Value Ltd. (Israel)

Vladimir Koifman is a founder and CTO of Analog Value Ltd. Prior to that, he was co-founder of Advasense Inc., acquired by Pixim/Sony Image Sensor Division. Prior to co-founding Advasense, Koifman co-established the AMCC analog design center in Israel and led the analog design group for three years. Before AMCC, Koifman worked for 10 years in Motorola Semiconductor Israel (Freescale) managing an analog design group. He has more than 20 years of experience in the VLSI industry and has technical leadership in analog chip design, mixed signal chip/system architecture and electro-optic device development. Koifman has more than 80 granted patents and several papers. Koifman also maintains the Image Sensors World blog.

12:10 PMII-052 Driving, the future – The automotive imaging revolution (Invited), Patrick Denny, Valeo (Ireland)

12:30 AVM-053 A system for generating complex physically accurate sensor images for automotive applications, Zhenyi Liu1,2, Minghao Shen1, Jiaqi Zhang3, Shuangting Liu3, Henryk Blasinski2, Trisha Lian2, and Brian Wandell2; 1Jilin University (China), 2Stanford University (United States), and 3Beihang University (China)

12:50 – 2:00 pm Lunch

11:30 AVM-051 KEYNOTE: Solid-state LiDAR sensors: The future of autonomous vehicles, Louay Eldada, Quanergy Systems, Inc. (United States)

Louay Eldada is CEO and co-founder of Quanergy Systems, Inc. Eldada is a serial entrepreneur, having founded and sold three businesses to Fortune 100 companies. Quanergy is his fourth start-up. Eldada is a technical business leader with a proven track record at both small and large companies and, with 71 patents, is a recognized expert in quantum optics, nanotechnology, photonic integrated circuits, advanced optoelectronics, sensors and robotics. Prior to Quanergy, he was CSO of SunEdison, after serving as CTO of HelioVolt, which was acquired by SK Energy. Eldada was earlier CTO of DuPont Photonic Technologies, formed by the acquisition of Telephotonics where he was founding CTO. His first job was at Honeywell, where he started the Telecom Photonics business and sold it to Corning. He studied business administration at Harvard, MIT and Stanford, and holds a PhD in optical engineering from Columbia University.


Wednesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind “Welcome to Light Fields”, the first downloadable virtual reality experience based on light field capture techniques which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light fields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights, as have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR’s Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec’s computer graphics research has been recognized with ACM SIGGRAPH’s first Significant New Researcher Award (2001) for “Creative and Innovative Work in the Field of Image-Based Modeling and Rendering”, a Scientific and Engineering Academy Award (2010) for “the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures” with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine’s “Pixel Perfect: The Scientist Behind the Digital Cloning of Actors” article by Margaret Talbot.

Depth Sensing
Session Chair: Min-Woong Seo, Samsung Electronics (Republic of Korea)
3:30 – 4:50 pm
Regency C

3:30 IMSE-355 Measurement of disparity for depth extraction in monochrome CMOS image sensor with offset pixel apertures, Jimin Lee1, Byoung-Soo Choi1, Seunghyuk Chang2, JongHo Park2, Sang-Jin Lee2, and Jang-Kyoo Shin1; 1Kyungpook National University and 2Center for Integrated Smart Sensors (Republic of Korea)

3:50 IMSE-356 A range-shifting multi-zone time-of-flight measurement technique using a 4-tap lock-in-pixel CMOS range image sensor based on a built-in drift field photodiode, Keita Kondo1, Keita Yasutomi1, Kohei Yamada1, Akito Komazawa1, Yukitaro Handa1, Yushi Okura1, Tomoya Michiba1, Satoshi Aoyama2, and Shoji Kawahito1,2; 1Shizuoka University and 2Brookman Technology Inc. (Japan)

4:10 IMSE-357 A range-gated CMOS SPAD array for real-time 3D range imaging, Henna Ruokamo, Lauri Hallman, and Juha Kostamovaara, University of Oulu (Finland)

4:30 IMSE-358 3D scanning measurement using a time-of-flight range imager with improved range resolution, Yushi Okura, Keita Yasutomi, Taishi Takasawa, Keiichiro Kagawa, and Shoji Kawahito, Shizuoka University (Japan)

Image Sensors and Imaging Systems 2019 Interactive Posters Session
5:30 – 7:00 pm
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.

IMSE-359 How hot pixel defect rate growth from pixel size shrinkage creates image degradation, Glenn Chapman1, Rohan Thomas1, Klinsmann Meneses1, Israel Koren2, and Zahava Koren2; 1Simon Fraser University (Canada) and 2University of Massachusetts Amherst (United States)

IMSE-360 Hybrid image-based defect detection for railroad maintenance, Gaurang Gavai, PARC (United States)

3:00 – 3:30 pm Coffee Break

IMSE-361 Real time enhancement of low light images for low cost embedded platforms, Navinprashath R R, Radhesh Bhat, Narendra Kumar Chepuri, Tom Korah Manalody, and Dipanjan Ghosh, PathPartner Technology Pvt. Ltd. (India)

IMSE-362 Spline-based colour correction for monotonic nonlinear CMOS image sensors, Syed Hussain and Dileepan Joseph, University of Alberta (Canada)

IMSE-363 System-on-Chip design flow for the image signal processor of a nonlinear CMOS imaging system, Maikon Nascimento and Dileepan Joseph, University of Alberta (Canada)


Thursday January 17, 2019

Technology and Sensor Design I
Session Chair: Arnaud Peizerat, CEA (France)
8:50 – 9:30 am
Regency C

IMSE-364 KEYNOTE: How CIS pixels moved from standard CMOS process to semiconductor process flavors even more dedicated than CCD ever was, Martin Waeny, TechnologiesMW (Switzerland)

Martin Waeny graduated in microelectronics from IMT Neuchâtel (1997). In 1998 he worked on CMOS image sensors at IMEC. In 1999 he joined the CSEM, as a PhD student in the field of digital CMOS image sensors. In 2000 he won the Vision prize for the invention of the LINLOG Technology and in 2001 the Photonics circle of excellence award of SPIE. In 2001 he co-founded the Photonfocus AG. In 2004 he founded AWAIBA Lda, a design-house and supplier for specialty area and linescan image sensors and miniature wafer level camera modules for medical endoscopy. AWAIBA merged 2014 into CMOSIS (www.cmosis.com) and 2015 into AMS (www.ams.com). At AMS, Waeny served as member of the CIS technology office and acted as director of marketing for the micro camera modules. Since 2017 he has been CEO of TechnologiesMW, an independent consulting company. Waeny was a member of the founding board of EMVA, the European machine vision association, and of the 1288 vision standard working group. His research interests are in miniaturized optoelectronic modules and application systems of such modules, 2D and 3D imaging and image sensors, and use of computer vision in emerging application areas.

Technology and Sensor Design II
Session Chair: Arnaud Peizerat, CEA (France)
9:30 – 10:10 am
Regency C

9:30 IMSE-365 On the implementation of asynchronous sun sensors, Juan A. Leñero-Bardallo1, Ricardo Carmona-Galán2, and Angel Rodríguez-Vázquez3,4; 1University of Oslo (Norway), 2Seville Institute of Microelectronics (Spain), 3University of Seville (Spain), and 4AnaFocus-e2v (Spain)

Image Sensor Noise
Session Chair: Ralf Widenhorn, Portland State University (United States)
10:40 – 11:40 am
Regency C

10:40 IMSE-367 Noise suppression effect of folding-integration applied to a column-parallel 3-stage pipeline ADC in a 2.1μm 33-megapixel CMOS image sensor, Kohei Tomioka1, Toshio Yasue1, Ryohei Funatsu1, Tomoki Matsubara1, Tomohiko Kosugi2, Sung-Wook Jun2, Takashi Watanabe2,3, Masanori Nagase2, Toshiaki Kitajima2, Satoshi Aoyama2, and Shoji Kawahito2,3; 1Japan Broadcasting Corporation (NHK), 2Brookman Technology, and 3Shizuoka University (Japan)

11:00 IMSE-368 Correlated Multiple Sampling impact analysis on 1/f noise for image sensors, Arnaud Peizerat, CEA (France)

11:20 IMSE-369 A comparison between noise reduction & analysis techniques for RTS pixels, Benjamin Hendrickson, Ralf Widenhorn, Morley Blouke, and Erik Bodegom, Portland State University (United States)

Color and Spectral Imaging
Session Chair: Ralf Widenhorn, Portland State University (United States)
11:40 am – 12:20 pm
Regency C

IMSE-370 KEYNOTE: The new effort for hyperspectral standardization - IEEE P4001, Christopher Durell, Labsphere, Inc. (United States)

Christopher Durell holds a BSEE and an MBA and has worked for Labsphere, Inc. in many executive capacities. He is currently leading Business Development for Remote Sensing Technology. He has led product development efforts in optical systems, light measurement and remote sensing systems for more than two decades. He is a member of SPIE, IEEE, IES, ASTM, CIE, CORM, and ICDM and is a participant in CEOS/IVOS, QA4EO and other remote sensing groups. As of early 2018, Durell accepted the chair position on the new IEEE P4001 Hyperspectral Standards Working Group.
9:50 IMSE-366 A low-noise nondestructive-readout pixel for computational imaging, Takuya Nabeshima1, Keita Yasutomi1, Keiichiro Kagawa1, Hajime Nagahara2, Taishi Takasawa1, and Shoji Kawahito1; 1Shizuoka University and 2Osaka University (Japan)

10:10 – 10:40 am Coffee Break

Color and Image Sensing
Session Chair: Ralf Widenhorn, Portland State University (United States)
12:20 – 12:40 pm
Regency C

IMSE-371 Method for the optimal approximation of the spectral response of multicomponent image, Pierre Gouton, Jacques Matanga, and Eric Bourillot, Université de Bourgogne (France)

12:40 – 2:10 pm Lunch


Embedded Image Signal Processing

Session Chair: Nick Bulitka, Lumenera Corporation (Canada) 2:10 – 2:50 pm Regency C

2:10 IMSE-372 Digital circuit methods to correct and filter noise of nonlinear CMOS image sensors (JIST-first), Maikon Nascimento, Jing Li, and Dileepan Joseph, University of Alberta (Canada)

2:30 IMSE-373 Auto white balance stabilization in digital video, Niloufar Pourian and Rastislav Lukac, Intel Corporation (United States)

Novel Vision Techniques and Applications

Session Chair: Nick Bulitka, Lumenera Corporation (Canada) 2:50 – 3:30 pm Regency C

2:50 IMSE-374 Fish-eye camera calibration using horizontal and vertical laser planes projected from a laser level, Tai Yen-Chou, Yu-Hsiang Chiu, Jen-Hui Chuang, Yi-Yu Hsieh, and Yong-Sheng Chen, National Chiao Tung University (Taiwan)

3:10 IMSE-375 Focused light field camera for depth reconstruction model, Piotr Osinski, Robert Sitnik, and Marcin Malesa, Warsaw University of Technology (Poland)


Imaging and Multimedia Analytics in a Web and Mobile World 2019

Conference Chairs: Jan P. Allebach, Purdue University (United States); Zhigang Fan, Apple Inc. (United States); and Qian Lin, HP Inc. (United States)

Program Committee: Vijayan Asari, University of Dayton (United States); Raja Bala, PARC (United States); Reiner Fageth, CEWE Stiftung & Co. KGaA (Germany); Michael Gormish, Ricoh Innovations, Inc. (United States); Yandong Guo, XMotors (United States); Ramakrishna Kakarala, Picartio Inc. (United States); Yang Lei, HP Labs (United States); Xiaofan Lin, A9.COM, Inc. (United States); Changsong Liu, Tsinghua University (China); Yucheng Liu, Facebook Inc. (United States); Yung-Hsiang Lu, Purdue University (United States); Binu Nair, United Technologies Research Center (United States); Mu Qiao, Shutterfly, Inc. (United States); Alastair Reed, Digimarc Corporation (United States); Andreas Savakis, Rochester Institute of Technology (United States); Bin Shen, Google Inc. (United States); Wiley Wang (United States); Jane You, The Hong Kong Polytechnic University (Hong Kong, China); and Tianli Yu, Morpx Inc. (China)

Conference overview
The recent progress in web, social networks, and mobile capture and presentation technologies has created a new wave of interest in imaging and multimedia topics, from multimedia analytics to content creation and repurposing, from engineering challenges to aesthetics and legal issues, from content sharing on social networks to content access from smart phones with cloud-based content repositories and services. Compared to many subjects in traditional imaging, these topics are more multi-disciplinary in nature. This conference provides a forum for researchers and engineers from various related areas, both academic and industrial, to exchange ideas and share research results in this rapidly evolving field.

Award
Best Paper Award

Conference Sponsor


IMAGING AND MULTIMEDIA ANALYTICS IN A WEB AND MOBILE WORLD 2019

Wednesday, January 16, 2019

Deep Learning for Face Recognition
Session Chair: Qian Lin, HP Labs, HP Inc. (United States)
8:50 – 10:30 am
Harbour AB

8:50 IMAWM-400 Face set recognition, Tongyang Liu1, Xiaoyu Xiang1, Qian Lin2, and Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States)

9:10 IMAWM-401 Dense prediction for micro-expression spotting based on deep sequence model, Khanh Tran, Xiaopeng Hong, Quang-Nhat Vo, and Guoying Zhao, University of Oulu (Finland)

9:30 IMAWM-402 Real time facial expression recognition using deep learning, Shaoyuan Xu1, Qian Lin2, and Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States)

9:50 IMAWM-403 Face alignment via 3D-assisted features, Song Guo1, Fei Li1, Hajime Nada2, Hidetsugu Uchida2, Tomoaki Matsunami2, and Narishige Abe2; 1Fujitsu Research & Development Center Co., Ltd. (China) and 2Fujitsu Laboratories Ltd. (Japan)

10:10 IMAWM-404 Face recognition by the construction of matching cliques of points, Frederick Stentiford, UCL (United Kingdom)

10:00 am – 3:30 pm Industry Exhibition

10:10 – 10:50 am Coffee Break

Deep Learning I
Session Chair: Qian Lin, HP Labs, HP Inc. (United States)
10:50 – 11:50 am
Harbour AB

IMAWM-405 KEYNOTE: Deep learning in the VIPER Laboratory, Edward Delp, Purdue University (United States)

Prof. Edward Delp is the Charles William Harrison distinguished professor of electrical and computer engineering, professor of biomedical engineering, and professor of psychological sciences (Courtesy) at Purdue University. Delp was born in Cincinnati, Ohio. He received his BSEE (cum laude) and MS from the University of Cincinnati, and his PhD from Purdue University. In May 2002 he received an Honorary Doctor of Technology from the Tampere University of Technology in Tampere, Finland. In 2014 Delp received the Morrill Award from Purdue University. This award honors a faculty member’s outstanding career achievements and is Purdue’s highest career achievement recognition for a faculty member. The Office of the Provost gives the Morrill Award to faculty members who have excelled as teachers, researchers and scholars, and in engagement missions. The award is named for Justin Smith Morrill, the Vermont congressman who sponsored the 1862 legislation that bears his name and allowed for the creation of land-grant colleges and universities in the United States. In 2015 Delp was named Electronic Imaging Scientist of the Year by IS&T and SPIE. The Scientist of the Year award is given annually to a member of the electronic imaging community who has demonstrated excellence and commanded the respect of his/her peers by making significant and substantial contributions to the field of electronic imaging via research, publications and service. He was cited for his contributions to multimedia security and image and video compression. Delp is a fellow of IEEE, SPIE, IS&T, and the American Institute of Medical and Biological Engineering.

Deep Learning II

Session Chair: Wiley Wang, Ditto.com (United States) 11:50 am – 12:10 pm Harbour AB

IMAWM-406 Comparison of texture retrieval techniques using deep convolutional features, Otavio Gomes1, Augusto Valente1, Guilherme Megeto1, Fábio Perez1, Marcos Cascone1, and Qian Lin2; 1Eldorado Research Institute (Brazil) and 2HP Labs, HP Inc. (United States)

12:30 – 2:00 pm Lunch


Computer Vision and Artificial Intelligence for Health & Beauty Applications
Session Chair: Raja Bala, PARC (United States)
3:30 – 5:10 pm
Harbour AB

3:30 IMAWM-407 Diagnostic and personalized skin care via artificial intelligence (Invited), Ankur Purwar1 and Matthew Shreve2; 1Procter & Gamble (Singapore) and 2Palo Alto Research Center (United States)

4:00 IMAWM-408 Computer vision in imaging diagnostics, Andre Esteva, Stanford University (United States)

4:20 IMAWM-409 A new model to reliably predict human facial appearance, Paul Matts1 and Brian D’Alessandro2; 1Procter & Gamble (United Kingdom) and 2Canfield Scientific (United States)

4:40 IMAWM-410 The intersection of artificial intelligence and augmented reality (Invited), Parham Arabi, University of Toronto (Canada)

Wednesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

3:00 – 3:30 pm Coffee Break

5:30 – 7:00 pm Symposium Interactive Papers (Poster) Session

Thursday January 17, 2019

Medical Imaging - Computational (Joint Session)
8:50 – 10:10 am
Grand Peninsula Ballroom A

This medical imaging session is jointly sponsored by: Computational Imaging XVII, Human Vision and Electronic Imaging 2019, and Imaging and Multimedia Analytics in a Web and Mobile World 2019.

8:50 IMAWM-145 Smart fetal care, Jane You1, Qin Li2, Qiaozhu Chen3, Zhenhua Guo4, and Hongbo Yang5; 1The Hong Kong Polytechnic University (Hong Kong), 2Shenzhen Institute of Information Technology (China), 3Guangzhou Women and Children Medical Center (China), 4Tsinghua University (China), and 5Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences (China)

9:10 COIMG-146 Self-contained, passive, non-contact, photoplethysmography: Real-time extraction of heart rates from live view within a Canon Powershot, Henry Dietz, Chadwick Parrish, and Kevin Donohue, University of Kentucky (United States)

9:30 COIMG-147 Edge-preserving total variation regularization for dual-energy CT images, Sandamali Devadithya and David Castañón, Boston University (United States)

9:50 COIMG-148

Fully automated dental panoramic radiograph by using internal mandible curves of dental volumetric CT, Sanghun Lee1, Seongyoun Woo1, Joonwoo Lee2, Jaejun Seo2, and Chulhee Lee1; 1Yonsei University and 2Dio Implant (Republic of Korea)

10:10 – 10:40 am Coffee Break

Deep Learning for Detection & Segmentation I
Session Chair: Zhigang Fan, Apple Inc. (United States)
10:40 am – 12:30 pm
Harbour A

10:40 IMAWM-411 Similarity and difference in object detection architectures (Invited), David Eigen, Clarifai (United States)

11:10 IMAWM-412 A heuristic approach for detecting frames in online fashion images, Litao Hu1, Gautam Golwala2, Sathya Sundaram2, Perry Lee2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

11:30 IMAWM-413 Detecting and decoding barcode in on-line fashion image, Qingyu Yang1, Gautam Golwala2, Sathya Sundaram2, Perry Lee2, Zhi Li2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

11:50 IMAWM-414 Edge/region fusion network for scene labeling in infrared imagery, Brad Sorg, Theus Aspiras, and Vijayan Asari, University of Dayton (United States)

12:10 IMAWM-415 Detecting non-native content in on-line fashion images, Zhenxun Yuan1, Gautam Golwala2, Sathya Sundaram2, Perry Lee2, Zhi Li2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

12:30 – 2:00 pm Lunch

Multimedia Analytics in Online & Mobile Systems
Session Chair: Yandong Guo, XMotors (United States)
2:00 – 3:20 pm
Harbour A

2:00 IMAWM-416 Smart cooking for camera-enabled multifunction oven, Wiley Wang, June Life, Inc. (United States)

2:20 IMAWM-417 Paint code identification using mobile color detector, Xunyu Pan and Johnathan Tripp, Frostburg State University (United States)

2:40 IMAWM-418 New results for natural language processing applied to an on-line fashion marketplace, Kendal Norman1, Alexander Gokan1, Gautam Golwala2, Sathya Sundaram2, Perry Lee2, and Jan Allebach1; 1Purdue University and 2Poshmark Inc. (United States)

3:00 IMAWM-419 British Waterways boattr - towpath as social commons, Adnan Hadzi, University of Malta (Malta)

3:20 – 3:40 pm Coffee Break

Deep Learning for Detection & Segmentation II
Session Chair: Jan Allebach, Purdue University (United States)
3:40 – 4:40 pm
Harbour A

3:40 IMAWM-420 Vision-based driving experience improvement (Invited), Yandong Guo, XMotors (United States)

4:20 IMAWM-421 A simple but efficient method of fusion and sifting for RGB-D semantic segmentation, Cheng Zhang, Jichao Jiao, and Zhongliang Deng, Beijing University of Posts and Telecommunications (China)

IMAWM Conference Chairs: Henry Y.T. Ngan, ENPS Intelligent Robotics and Industrial Applications Hong Kong (China); Kurt Niel, Upper Austria University of Applied Sciences (Austria); and using Computer Vision 2019 Juha Röning, University of Oulu (Finland) Conference overview Program Committee: Philip Bingham, Oak This conference brings together real-world practitioners and researchers in intelligent robots Ridge National Laboratory (United States); and computer vision to share recent applications and developments. Topics of interest in- Ewald Fauster, Montan Universitat Leoben clude the integration of imaging sensors supporting hardware, computers, and algorithms (Austria); Steven Floeder, 3M Company (United for intelligent robots, manufacturing inspection, characterization, and/or control. States); David Fofi, University de Bourgogne (France); Shaun Gleason, Oak Ridge National The decreased cost of computational power and vision sensors has motivated the rapid Lab (United States); B. Keith Jenkins, The proliferation of machine vision technology in a variety of industries, including aluminum, University of Southern California (United States); Olivier Laligant, automotive, forest products, textiles, glass, steel, metal casting, aircraft, chemicals, food, University de Bourgogne (France); Edmund Lam, The University of fishing, agriculture, archaeological products, medical products, artistic products, etc. Other Hong Kong (Hong Kong, China); Dah-Jye industries, such as semiconductor and electronics manufacturing, have been employing ma- Lee, Brigham Young University (United States); chine vision technology for several decades. Machine vision supporting handling robots is Junning Li, Keck School of Medicine, University another main topic. 
With respect to intelligent robotics another approach is sensor fusion – of Southern California (United States); Wei Liu, combining multi-modal sensors in audio, location, image and video data for signal process- The University of Sheffield (United Kingdom); ing, machine learning and computer vision, and additionally other 3D capturing devices. Charles McPherson, Draper Laboratory (United States); Fabrice Meriaudeau, University de There is a need of accurate, fast, and robust detection of objects and their position in space. Bourgogne (France); Yoshihiko Nomura, Mie Lucas Paletta, Their surface, the background and illumination is uncontrolled; in most cases the objects University (Japan); JOANNEUM Research Forschungsgesellschaft mbH (Austria); of interest are within a bulk of many others. For both new and existing industrial users of Vincent Paquit, Oak Ridge National Laboratory machine vision, there are numerous innovative methods to improve productivity, quality, (United States); Daniel Raviv, Florida Atlantic and compliance with product standards. There are several broad problem areas that have University (United States); Hamed Sari-Sarraf, received significant attention in recent years. For example, some industries are collecting Texas Tech University (United States); Ralph enormous amounts of image data from product monitoring systems. New and efficient Seulin, University de Bourgogne (France); methods are required to extract insight and to perform process diagnostics based on this Christophe Stolz, University de Bourgogne historical record. Regarding the physical scale of the measurements, microscopy techniques (France); Svorad Štolc, AIT Austrian Institute are nearing resolution limits in fields such as semiconductors, biology, and other nano-scale of Technology GmbH (Austria); Bernard technologies. Techniques such as resolution enhancement, model-based methods, and sta- Theisen, U.S. 
Army Tank Automotive Research, tistical imaging may provide the means to extend these systems beyond current capabilities. Development and Engineering Center (United States); Seung-Chul Yoon, United States Furthermore, obtaining real-time and robust measurements in-line or at-line in harsh industrial Department of Agriculture Agricultural Research environments is a challenge for machine vision researchers, especially when the manufac- Service (United States); Gerald Zauner, FH OÖ– turer cannot make significant changes to their facility or process. Forschungs & Entwicklungs GmbH (Austria); and Dili Zhang, Monotype Imaging (United States) Awards Best Paper Best Student Paper

92 #EI2019 electronicimaging.org

INTELLIGENT ROBOTICS AND INDUSTRIAL APPLICATIONS USING COMPUTER VISION 2019 (IRIACV)

Wednesday January 16, 2019

Robotics and Inspection
Session Chair: Juha Röning, University of Oulu (Finland)
8:50 – 10:10 am
Regency B

8:50 IRIACV-450 Laser quadrat and photogrammetry based autonomous coral reef mapping ocean robot, Sidhant Gupta, Thanh Bui, King Lui, and Edmund Lam, The University of Hong Kong (Hong Kong)

9:10 IRIACV-451 Multimodal localization for autonomous agents, Robert Relyea, Darshan Ramesh Bhanushali, Abhishek Vashist, Amlan Ganguly, Andres Kwasinski, Michael Kuhl, and Ray Ptucha, Rochester Institute of Technology (United States)

9:30 IRIACV-452 Automatic estimation of the position and orientation of the drill to be grasped and manipulated by the disaster response robot based on analyzing depth information, Keishi Nishikawa, Waseda University (Japan)

9:50 IRIACV-453 Automated optical inspection for abnormal-shaped packages, Wei Lin, Chang-Tao Hsu, Chi Chang, and Jen-Hui Chuang, National Chiao Tung University (Taiwan)

10:00 am – 3:30 pm Industry Exhibition
10:10 – 10:40 am Coffee Break

Machine Vision and Learning
Session Chair: Juha Röning, University of Oulu (Finland)
10:40 am – 12:20 pm
Regency B

10:40 IRIACV-454 Foreground-aware statistical models for background estimation, Edgar Bernal1, Qun Li2, and Wencheng Wu1; 1University of Rochester and 2Microsoft Corporation (United States)

11:00 IRIACV-455 Change detection in Cadastral 3D models and point clouds and its use for improved texturing, Sander Klomp1, Bas Boom2, Thijs van Lankveld2, and Peter de With1; 1Eindhoven University of Technology and 2CycloMedia Technology B.V. (the Netherlands)

11:20 IRIACV-456 Study on selection of construction waste using sensor fusion, Masaya Nyumura and Yue Bao, Tokyo City University (Japan)

11:40 IRIACV-457 Exploring variants of fully convolutional networks with local and global contexts in semantic segmentation problem, Dong-won Shin, Jun-Yong Park, Chan-Young Sohn, and Yo-Sung Ho, Gwangju Institute of Science and Technology (GIST) (Republic of Korea)

12:00 IRIACV-458 ECDNet: Efficient Siamese convolutional network for real-time small object change detection from ground vehicles, Sander Klomp1, Dennis van de Wouw1,2, and Peter de With1; 1Eindhoven University of Technology and 2ViNotion B.V. (the Netherlands)

12:30 – 2:00 pm Lunch

Wednesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. He will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. The team has also recently used its full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award (2010) for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break

Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary.

Intelligent Robotics and Industrial Applications using Computer Vision 2019 Interactive Posters Session
5:30 – 7:00 pm
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.

IRIACV-465 Improved 3D scene modeling for image registration in change detection, Sjors van Riel, Dennis van de Wouw, and Peter de With, Eindhoven University of Technology (the Netherlands)

IRIACV-466 Single Shot Appearance Model (SSAM) for multi-target tracking, Mohib Ullah and Faouzi Alaya Cheikh, Norwegian University of Science and Technology (Norway)

Machine Vision Applications
Session Chair: Kurt Niel, University of Applied Sciences Upper Austria (Austria)
3:30 – 5:30 pm
Regency B

3:30 IRIACV-459 People recognition and position measurement in workplace by fisheye camera, Haike Guan and Makoto Shinnishi, Ricoh Company, Ltd. (Japan)

3:50 IRIACV-460 Optical system of industrial camera that achieves both short minimum focusing distance and high resolution, Yoshifumi Sudoh, Ricoh Company, Ltd. (Japan)

4:10 IRIACV-461 Investigating camera calibration methods for naturalistic driving studies, Jeffrey Paone1, Thomas Karnowski2, Deniz Aykac2, Regina Ferrell2, Jim Goddard2, and Austin Albright2; 1Colorado School of Mines and 2Oak Ridge National Laboratory (United States)

4:30 IRIACV-462 Application of semantic segmentation for an autonomous rail tamping assistance system, Gerald Zauner1, Tobias Mueller2, Andreas Theiss2, Martin Buerger2, and Florian Auer2; 1University of Applied Sciences Upper Austria and 2Plasser & Theurer GmbH (Austria)

4:50 IRIACV-463 Hazmat label recognition and localization for rescue robots in disaster scenarios, Raimund Edlinger, Gerald Zauner, Ralph Slabihoud, and Michael Zauner, University of Applied Sciences Upper Austria (Austria)

5:10 IRIACV-464 Industrial computer vision in academic education - Is there a need besides so many professional business models supporting ready to go solutions?, Kurt Niel, University of Applied Sciences Upper Austria (Austria)

Material Appearance 2019

Conference Chairs: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France); Lionel Simonot, Université de Poitiers (France); and Ingeborg Tastl, HP Labs, HP Inc. (United States)

Program Committee: Marc Ellens, X-Rite, Inc. (United States); Susan P. Farnand, Rochester Institute of Technology (United States); Roland Fleming, Justus-Liebig-Universität Giessen (Germany); Jon Yngve Hardeberg, Norwegian University of Science and Technology (Norway); Francisco H. Imai, Apple Inc. (United States); Susanne Klein, University of the West of England (United Kingdom); Gael Obein, Conservatoire National des Arts et Metiers (France); Maria Ortiz Segovia, Océ Print Logic Technologies (France); Carinna Parraman, University of the West of England (United Kingdom); Holly Rushmeier, Yale University (United States); Takuroh Sone, Ricoh Japan (Japan); Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne (Switzerland); Shoji Tominaga, Chiba University (Japan); and Philipp Urban, Fraunhofer Institute for Computer Graphics Research IGD (Germany)

Conference overview
The rapid and continuous development of rendering simulators and devices such as displays and printers offers interesting challenges related to how the appearance of materials is understood. Over the years, researchers from different disciplines, including metrology, optical modeling, and digital simulation, have studied the interaction of incident light with the texture and surface geometry of a given object, as well as the optical properties of distinct materials. Thanks to those efforts, we have been able to propose methods for characterizing the optical and visual properties of many materials, propose affordable measurement methods, predict optical properties or appearance attributes, and render 2.5D and 3D objects and scenes with high accuracy.

This conference offers the possibility to share research results and establish new collaborations between academic and industrial researchers from these related fields.

Award
Best Paper Award

Conference Sponsors



MATERIAL APPEARANCE 2019

Monday January 14, 2019

Measurement and Evaluation of Appearance I
Session Chairs: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France), and Takuroh Sone, Ricoh Company, Ltd. (Japan)
8:50 – 9:30 am
Cypress A

MAAP-475 KEYNOTE: On the acquisition and reproduction of material appearance, Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)

Jon Yngve Hardeberg (1971) is a professor in the department of computer science at NTNU in Gjøvik. He has an MSc in signal processing from NTNU, and a PhD in signal and image processing from the Ecole Nationale Supérieure des Télécommunications in Paris, France. Hardeberg is a member of the Norwegian Colour and Visual Computing Laboratory, where he teaches, supervises graduate students, and manages international study programs and research projects. He has co-authored more than 200 publications. His research interests include multispectral colour imaging, print and image quality, colorimetric device characterization, colour management, cultural heritage imaging, and medical imaging.

Measurement and Evaluation of Appearance II
Session Chair: Takuroh Sone, Ricoh Company, Ltd. (Japan)
9:30 – 10:10 am
Cypress A

9:30 MAAP-476 Evaluation of sparkle impression considering observation distance, Shuhei Watanabe and Takuroh Sone, Ricoh Company, Ltd. (Japan)

9:50 MAAP-477 Comparative analysis of transmittance measurement geometries and apparatus, Marjan Shahpaski1, Luis Sapaico2, and Sabine Süsstrunk1; 1École Polytechnique Fédérale de Lausanne (EPFL) (Switzerland) and 2Océ Print Logic Technologies S.A. (France)

10:10 – 10:50 am Coffee Break

Appearance Design and 3D Printing I
Session Chair: Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)
10:50 – 11:30 am
Cypress A

MAAP-478 KEYNOTE: Beyond printing: How to expand 3D applications through postprocessing, Isabel Sanz, HP Inc. (Spain)

Isabel Sanz received an MSc in mechanical engineering from the Technical University of Valencia (Spain) and from RWTH Aachen (Germany). Her current position is 3D Printing Advanced Technical Consultant at HP Inc. She complemented her studies with a master in project management from La Salle, in Barcelona (Spain). Her career at HP started as an R&D mechanical engineer in the HP Large Format Printing business. After that experience, she moved into the 3D Printing business, where Sanz started the benchmark printing process for Multi Jet Fusion customers. Nowadays, she is technically developing new applications and helping customers to introduce and grow 3D printing opportunities in their products and processes. She holds 9 patents and 1 publication, and she keeps looking for new and innovative ways of doing things, evangelizing the movement to additive manufacturing.

Appearance Design and 3D Printing II
Session Chair: Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)
11:30 am – 12:30 pm
Cypress A

11:30 MAAP-479 A soft-proofing workflow for color 3D printing - Addressing needs for the future, Ingeborg Tastl1, Miguel A. Lopez-Alvarez2, Alexandra Ju2, Morgan Schramm2, Jordi Roca2, and Matthew Shepherd2; 1HP Labs, HP Inc. and 2HP Inc. (United States)

11:50 MAAP-480 Improving aesthetics through post-processing for 3D printed parts, Alexandra Ju, Andrew Fitzhugh, Jiwon Jun, and Mary Baker, HP Inc. (United States)

12:10 MAAP-481 Refractive index of inks and colored gloss (Invited), Lionel Simonot1, Oussama Sari2, and Mathieu Hebert2; 1Université de Poitiers and 2Université Jean Monnet de Saint Etienne (France)

12:30 – 2:00 pm Lunch


Monday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning, and Mapping. Shashua will describe the challenges and the kind of computer vision and machine learning algorithms involved, but will do that through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in the area of industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

3:00 – 3:30 pm Coffee Break

Color Rendering of Materials I (Joint Session)
Session Chair: Lionel Simonot, Université de Poitiers (France)
3:30 – 4:10 pm
Cypress A

This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.

MAAP-075 KEYNOTE: Capturing appearance in text: The Material Definition Language (MDL), Andy Kopra, NVIDIA Advanced Rendering Center (Germany)

Andy Kopra is a technical writer at the NVIDIA Advanced Rendering Center in Berlin, Germany. With more than 35 years of professional computer graphics experience, he writes and edits documentation for NVIDIA customers on a wide variety of topics. He also designs, programs, and maintains the software systems used in the production of the documentation websites and printed materials.

Color Rendering of Materials II (Joint Session)
Session Chair: Lionel Simonot, Université de Poitiers (France)
4:10 – 4:50 pm
Cypress A

This session is jointly sponsored by: Color Imaging XXIV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2019.

4:10 COLOR-076 Real-time accurate rendering of color and texture of car coatings, Eric Kirchner1, Ivo Lans1, Pim Koeckhoven1, Khalil Huraibat2, Francisco Martinez-Verdu2, Esther Perales2, Alejandro Ferrero3, and Joaquin Campos3; 1AkzoNobel (the Netherlands), 2University of Alicante (Spain), and 3CSIC (Spain)

4:30 COLOR-077 Recreating Van Gogh's original colors on museum displays, Eric Kirchner1, Muriel Geldof2, Ella Hendriks3, Art Ness Proano Gaibor2, Koen Janssens4, John Delaney5, Ivo Lans1, Frank Ligterink2, Luc Megens2, Teio Meedendorp6, and Kathrin Pilz6; 1AkzoNobel (the Netherlands), 2RCE (the Netherlands), 3University of Amsterdam (the Netherlands), 4University of Antwerp (Belgium), 5National Gallery (United States), and 6Van Gogh Museum (the Netherlands)

5:00 – 6:00 pm All-Conference Welcome Reception


Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Material Appearance Perception (Joint Session)
Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States)
9:10 – 10:10 am
Grand Peninsula Ballroom D

This session is jointly sponsored by: Human Vision and Electronic Imaging 2019, and Material Appearance 2019.

9:10 MAAP-202 Material appearance: Ordering and clustering, Davit Gigilashvili, Jean-Baptiste Thomas, Marius Pedersen, and Jon Yngve Hardeberg, Norwegian University of Science and Technology (NTNU) (Norway)

9:30 MAAP-203 A novel translucency classification for computer graphics, Morgane Gerardin1, Lionel Simonot2, Jean-Philippe Farrugia3, Jean-Claude Iehl3, Thierry Fournel4, and Mathieu Hebert4; 1Institut d'Optique Graduate School, 2Université de Poitiers, 3LIRIS, and 4Université Jean Monnet de Saint Etienne (France)

9:50 MAAP-204 Constructing glossiness perception model of computer graphics with sounds, Takumi Nakamura, Keita Hirai, and Takahiko Horiuchi, Chiba University (Japan)

10:00 am – 7:00 pm Industry Exhibition
10:10 – 10:50 am Coffee Break

Appearance Design and Computation
Session Chair: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France)
10:50 am – 12:30 pm
Cypress A

10:50 MAAP-482 Image-based BRDF design, Ezra Davis1, Weiqi Shi1, Hongzhi Wu2, Julie Dorsey1, and Holly Rushmeier1; 1Yale University (United States) and 2Zhejiang University (China)

11:10 MAAP-483 Hair tone estimation at roots via imaging device with embedded deep learning, Panagiotis-Alexandros Bokaris, Emmanuel Malherbe, Thierry Wasserman, Michaël Haddad, and Matthieu Perrot, L'Oreal Research & Innovation (France)

11:30 MAAP-484 CNN based parameter optimization for texture synthesis, Jiangpeng He1, Kyle Ziga1, Judy Bagchi2, and Fengqing Zhu1; 1Purdue University and 2Dzine Steps (United States)

11:50 MAAP-485 Appearance reconstruction of mutual illumination effect between plane and curved fluorescent objects, Shoji Tominaga1,2, Keita Hirai3, and Takahiko Horiuchi3; 1Norwegian University of Science and Technology (NTNU) (Norway), 2Nagano University (Japan), and 3Chiba University (Japan)

12:10 MAAP-486 Accurate physico-realistic ray tracing simulation of displays, Pierre Boher1, Thierry Leroux1, Thomas Muller2, and Philippe Porral2; 1ELDIM and 2United Visual Researchers (France)

12:30 – 2:00 pm Lunch

Tuesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises of, and the tremendous progress made recently toward, the development of head-mounted displays (HMD) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. She will particularly focus on the recent progress, challenges, and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF CAREER Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students have shared a total of 8 "Best Paper" awards in various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa in 2003, was a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break


Describing Material Appearance
Session Chair: Lionel Simonot, Université de Poitiers (France)
3:30 – 4:00 pm
Cypress A

MAAP-487 Enough (data) already!, Marc Ellens, X-Rite, Inc. (United States)

Discussion: Rarely Asked Questions on Material Appearance
4:00 – 4:20 pm
Cypress A
Moderators: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France); Lionel Simonot, Université de Poitiers (France); and Ingeborg Tastl, HP Labs, HP Inc. (United States)

Can perception models for gloss and color be combined to assess the appearance of metallic objects or lusterware? Can translucency be assessed by simple optical measurement? Do commercially available color measurement devices determine the degree of light scattering that occurs in translucent materials, and do the numbers correspond with the visual perception? Appearance is an area where questions are much more numerous than answers, and where questions expressed today can become guidelines for scientific research in the future. Everyone is welcome to share their own questions arising from their own professional experience! This is meant to be a truly interactive session.

Demo Session on Material Appearance
4:20 – 5:20 pm
Cypress A

Talking about material appearance is good. Looking at materials and speaking about them is even better! Every innovative or traditional object, material, image, or simulation is welcome to be presented in the final session of the MAAP conference. Collectively they will serve as an inspiration for future innovative objects, materials, images, or simulations.

5:30 – 7:00 pm Symposium Demonstration Session


Media Watermarking, Security, and Forensics 2019

Conference Chairs: Adnan M. Alattar, Digimarc Corporation (United States); Nasir D. Memon, Tandon School of Engineering, New York University (United States); and Gaurav Sharma, University of Rochester (United States)

Program Committee: Mauro Barni, Università degli Studi di Siena (Italy); Sebastiano Battiato, Università degli Studi di Catania (Italy); Marc Chaumont, Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (France); Scott A. Craver, Binghamton University (United States); Edward J. Delp, Purdue University (United States); Jana Dittmann, Otto-von-Guericke-University Magdeburg (Germany); Gwenaël Doërr, ContentArmor SAS (France); Maha El Choubassi, Intel Corporation (United States); Jessica Fridrich, Binghamton University (United States); Anthony T. S. Ho, University of Surrey (United Kingdom); Jiwu Huang, Sun Yat-Sen University (China); Andrew D. Ker, University of Oxford (United Kingdom); Matthias Kirchner, Binghamton University (United States); Alex C. Kot, Nanyang Technological University (Singapore); Chang-Tsun Li, The University of Warwick (United Kingdom); William Puech, Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (France); Husrev Taha Sencar, TOBB University of Economics and Technology (Turkey); Yun-Qing Shi, New Jersey Institute of Technology (United States); Ashwin Swaminathan, Magic Leap, Inc. (United States); Robert Ulichney, HP Labs, HP Inc. (United States); Claus Vielhauer, Otto-von-Guericke-University Magdeburg (Germany); Svyatoslav V. Voloshynovskiy, Université de Genève (Switzerland); and Chang Dong Yoo, Korea Advanced Institute of Science and Technology (Republic of Korea)

Conference overview
The ease of capturing, manipulating, distributing, and consuming digital media (e.g., images, audio, video, graphics, and text) has enabled new applications and brought a number of important security challenges to the forefront. These challenges have prompted significant research and development in the areas of digital watermarking, steganography, data hiding, forensics, media identification, biometrics, and encryption to protect owners' rights, establish provenance and veracity of content, and preserve privacy. Research results in these areas have been translated into new paradigms and applications for monetizing media while maintaining ownership rights, new biometric and forensic identification techniques, and novel methods for ensuring privacy.

The Media Watermarking, Security, and Forensics conference is a premier destination for disseminating high-quality, cutting-edge research in these areas. The conference provides an excellent venue for researchers and practitioners to present their innovative work as well as to keep abreast of the latest developments in watermarking, security, and forensics. Early results and fresh ideas are particularly encouraged and supported by the conference review format: only a structured abstract describing the work in progress and preliminary results is initially required, and the full paper is requested just before the conference. A strong focus on how research results are applied in practice by industry also gives the conference its unique flavor.


MEDIA WATERMARKING, SECURITY, AND FORENSICS 2019

Monday January 14, 2019

Capture to Publication: Authenticating Digital Imagery
Session Chair: Nasir Memon, New York University (United States)
9:00 – 10:00 am
Cypress C

MWSF-525 KEYNOTE: From capture to publication: Authenticating digital imagery, its context, and its chain of custody, Matt Robben and Daniel DeMattia, Truepic (United States)

Matt Robben is the VP of engineering for Truepic, responsible for leading new technology development across the Truepic authenticity platform and building a world-class pool of engineering talent. Prior to Truepic, Robben helped technology groups and teams at One Medical, Dropbox, Sold. (acquired by Dropbox), and Microsoft deliver mission-critical software products to market across a variety of verticals. Robben holds a BS in computer engineering from Northwestern University.

Daniel DeMattia is the VP of security for Truepic. He is responsible for ensuring the security and integrity of Truepic, its systems, technology, and data. He brings more than 20 years of security experience in high-risk environments that he applies to every aspect of Truepic operations. Prior to Truepic, DeMattia was head of security at SpaceX as well as Virgin Orbit, where he helped build mission-critical security and communication systems that operate both on the ground and in space. In his early days, he acted as an independent penetration tester and advised on vulnerability assessment and incident response.

10:10 – 10:30 am Coffee Break

Watermark & Biometric
Session Chair: Husrev Taha Sencar, TOBB University (Turkey)
10:30 am – 12:10 pm
Cypress C

10:30 MWSF-526 Printed image watermarking with synchronization using direct binary search, Yujian Xu and Jan Allebach, Purdue University (United States)

10:55 MWSF-527 Hiding in plain sight: Enabling the vision of signal rich art, Ajith Kamath1 and Harish Palani1,2; 1Digimarc and 2UC Berkeley (United States)

11:20 MWSF-528 How re-training process affect the performance of no-reference image quality metric for face images, Xinwei Liu1,2, Christophe Charrier3, Marius Pedersen2, and Patrick Bours2; 1University of Caen (France), 2Norwegian University of Science and Technology (Norway), and 3Normandie University (France)

11:45 MWSF-529 Forensic reconstruction of severely degraded license plates, Benedikt Lorch1, Shruti Agarwal2, and Hany Farid2; 1Friedrich-Alexander-University (Germany) and 2Dartmouth College (United States)

12:30 – 2:00 pm Lunch

Monday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

3:00 – 3:30 pm Coffee Break

Image Forgery Detection
Session Chair: Robert Ulichney, HP Labs, HP Inc. (United States)
3:30 – 4:45 pm
Cypress C

3:30 MWSF-530 Deep learning methods for event verification and image repurposing detection, Arjuna Flenner1, Michael Goebel2, Lakshmanan Nataraj2, and B.S. Manjunath2; 1NAVAIR and 2Mayachitra Inc. (United States)

3:55 MWSF-531 Dictionary learning and sparse coding for digital image forgery detection, Mohammed Aloraini, Lingdao Sha, Mehdi Sharifzadeh, and Dan Schonfeld, University of Illinois at Chicago (United States)

12:30 – 2:00 pm Lunch

electronicimaging.org #EI2019 Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary. 101 M edia WaTERMARKING, Security, and Forensics 2019

4:20 MWSF-532
Detecting GAN generated fake images using co-occurrence matrices, Lakshmanan Nataraj1, Tajuddin Manhar Mohammed1, B.S. Manjunath1, Shivkumar Chandrasekaran1, Arjuna Flenner2, Jawadul Bappy3, and Amit Roy-Chowdhury4; 1Mayachitra, Inc., 2NAVAIR, 3JD.com, and 4University of California, Riverside (United States)

5:00 – 6:00 pm All-Conference Welcome Reception

Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Production and Deployment I: Blockchain to Transform Industries

9:00 – 10:00 am
Cypress C

MWSF-533
KEYNOTE: Blockchain and smart contract to transform industries – Challenges and opportunities, Sachiko Yoshihama, IBM Research (Japan)

Dr. Sachiko Yoshihama is a senior technical staff member and senior manager at IBM Research – Tokyo. She leads a team that focuses on financial and blockchain solutions. Her research interest is to bring advanced concepts and technologies to practice and address real-world problems to transform industries. She served as a technical leader and advisor in a number of blockchain projects with clients in Japan and Asia. She joined the IBM T.J. Watson Research Center in 2001, moved to IBM Research – Tokyo in 2003, and worked on research in information security technologies, including trusted computing, information flow control, and Web security. She served as a technology innovation leader at IBM Research Global Labs HQ in Shanghai in 2012, where she helped define research strategies for developing countries. She received her PhD from Yokohama National University (2010). She is a member of ACM, a senior member of the Information Processing Society of Japan, and a member of the IBM Academy of Technology.

10:00 am – 7:00 pm Industry Exhibition

10:10 – 10:30 am Coffee Break

Steganalysis

Session Chair: Jessica Fridrich, Binghamton University (United States)
10:30 am – 12:10 pm
Cypress C

10:30 MWSF-534
Detection of diversified stego sources with CNNs, Jan Butora and Jessica Fridrich, Binghamton University (United States)

10:55 MWSF-535
Algorithm mismatch in spatial steganalysis, Stephanie Reinders1, Jennifer Newman1, Li Lin1, Yong Guan1, and Min Wu2; 1Iowa State University and 2University of Maryland (United States)

11:20 MWSF-536
StegoAppDB: A steganography apps forensics image database, Jennifer Newman1, Li Lin1, Wenhao Chen1, Stephanie Reinders1, Yangxiao Wang1, Yong Guan1, and Min Wu2; 1Iowa State University and 2University of Maryland, College Park (United States)

11:45 MWSF-537
Are we there yet?, Mehdi Boroumand1, Remi Cogranne2, and Jessica Fridrich1; 1Binghamton University (United States) and 2Troyes University of Technology (France)

12:30 – 2:00 pm Lunch

Tuesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises of and the tremendous progress made recently toward the development of head-mounted displays (HMD) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. She will particularly focus on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of an NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Hua and her students have shared a total of 8 "Best Paper" awards at various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa in 2003, was a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

Taking Blockchain Beyond Crypto-currency

3:30 – 5:00 pm
Cypress C

Panel Moderator: Gaurav Sharma, University of Rochester (United States)
Panelists:
Nasir Memon, New York University (United States)
Marc Mercuri, Microsoft Corporation (United States)
Hilarie Orman, Cryptic Labs (United States)
Sachiko Yoshihama, IBM Research (Japan)

Nasir Memon is a professor in the department of computer science and engineering at NYU Polytechnic School of Engineering and director of the Information Systems and Internet Security (ISIS) laboratory. He is one of the founding members of the Center for Interdisciplinary Studies in Security and Privacy (CRISSP), a collaborative initiative of multiple schools within NYU including NYU-Steinhardt, NYU-Wagner, NYU-Stern, and NYU-Courant. His research interests include digital forensics, biometrics, data compression, network security, and security and human behavior. Memon earned a Bachelor of Engineering in chemical engineering and a Master of Science in mathematics from Birla Institute of Technology and Science (BITS) in Pilani, India. He received a Master of Science in computer science and a PhD in computer science from the University of Nebraska. Memon has published more than 250 articles in journals and conference proceedings and holds a dozen patents in image compression and security. He has won several awards including the Jacobs Excellence in Education award and several best paper awards. He has been on the editorial boards of several journals and was the editor-in-chief of the IEEE Transactions on Information Forensics and Security. He is an IEEE Fellow and a distinguished lecturer of the IEEE Signal Processing Society. Memon is the co-founder of Digital Assembly and Vivic Networks, two early-stage start-ups in NYU-Poly's business incubators.

Marc Mercuri is a director in Microsoft's Applied Innovation team, where he focuses on scaling emerging business and technology scenarios for Microsoft. His current focus areas include blockchain and smart buildings. Mercuri's career has included architecture, consulting, engineering, evangelism, and strategy leadership roles at startups, enterprises, ISVs, and CSVs. He has worked in Europe, Latin America, Asia, and the United States, and his work has been featured in a number of mainstream media outlets including ABC, Advertising Age, AdWeek, TechCrunch, Ars Technica, the BBC, CNET, the Telegraph, FastCompany, Mashable, Wired, and ZDNet. He has 12 issued patents and 15 patents pending in the areas of cloud, mobile, and social. Mercuri is the author of four books on services and identity. He is also the technical editor of a book on microservices with Docker on Microsoft Azure.

Hilarie Orman's expertise centers on the design, development, and analysis of software and systems that protect data and communications; applied cryptography is the principal technology for those protections. She has designed well-regarded protocols and cryptographic documents for the IETF. Orman's educational software for demonstrating malware and how to respond to it is part of the educational archive at USC's Information Sciences Institute. She is one of the founders and co-organizers of the GREPSEC workshop for under-represented groups in computer security research. Orman is the "Practical Security" columnist for IEEE Internet Computing Magazine. Recent articles have covered online voting, the secrets of email headers, and the Internet of Things. She is the archivist for the IACR and a constant advocate for open-source publishing. Orman has a BS in mathematics from the Massachusetts Institute of Technology. She is a former chair of the IEEE Computer Society's Technical Committee on Security and Privacy. She has a strong interest in blockchain, particularly in the area of smart contracts.

Dr. Sachiko Yoshihama is a senior technical staff member and senior manager at IBM Research – Tokyo. She leads a team that focuses on financial and blockchain solutions. Her research interest is to bring advanced concepts and technologies to practice and address real-world problems to transform industries. She served as a technical leader and advisor in a number of blockchain projects with clients in Japan and Asia. She joined the IBM T.J. Watson Research Center in 2001, moved to IBM Research – Tokyo in 2003, and worked on research in information security technologies, including trusted computing, information flow control, and Web security. She served as a technology innovation leader at IBM Research Global Labs HQ in Shanghai in 2012, where she helped define research strategies for developing countries. She received her PhD from Yokohama National University (2010). She is a member of ACM, a senior member of the Information Processing Society of Japan, and a member of the IBM Academy of Technology.

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday January 16, 2019

Solutions to Foreign Propaganda

Session Chair: Nasir Memon, New York University (United States)
9:00 – 10:00 am
Cypress C

MWSF-538
KEYNOTE: Technology in context: Solutions to foreign propaganda and disinformation, Justin Maddox and Patricia Watts, Global Engagement Center, US State Department (United States)

Justin Maddox is an adjunct professor in the department of information sciences and technology at George Mason University. Maddox is a counterterrorism expert with specialization in emerging technology applications. He is the CEO of Inventive Insights LLC, a research and analysis consultancy. He recently served as the deputy coordinator of the interagency Global Engagement Center, where he implemented cutting-edge technologies to counter terrorist propaganda. He has led counterterrorism activities at the CIA, the State Department, DHS, and NNSA, and has been a special operations team leader in the US Army. Since 2011, Maddox has taught National Security Challenges, a graduate-level course requiring students to devise realistic solutions to key strategic threats. Maddox holds an MA from Georgetown University's National Security Studies Program and a BA in liberal arts from St. John's College, the "great books" school. He has lived and worked in Iraq, India, and Germany, and can order a drink in Russian, Urdu, and German.

Patricia Watts is currently acting chief, Science and Technology/Cyber, in the US State Department. Watts is a skilled senior intelligence professional with extensive research experience, and brings a solid understanding of foreign operations, weaponry, and worldwide terrorism. Over a diverse career, Watts has managed the Joint Intelligence Directorate, supervising and overseeing operations of personnel in Afghanistan supporting the Global War on Terrorism; supervised combat maneuver training operations; aided and assisted the tactical training of more than 40,000 maneuver brigade soldiers at the US Army National Training Center; and supplied multi-national support to British, French, and US forces in an Allied Command in Berlin, Germany.

10:00 am – 3:30 pm Industry Exhibition

10:10 – 10:30 am Coffee Break

Steganography

Session Chair: Marc Chaumont, LIRMM Montpellier (France)
10:30 am – 12:10 pm
Cypress C

10:30 MWSF-539
New graph-theoretic approach to social steganography, Hanzhou Wu, Wei Wang, and Jing Dong, Chinese Academy of Sciences (China)

10:55 MWSF-540
Reducing coding loss with irregular syndrome trellis codes, Christy Kin-Cleaves and Andrew Ker, University of Oxford (United Kingdom)

11:20 MWSF-541
Nondestructive ciphertext injection in document files, Scott Craver1, Jugal Shah2, and Enshirah Altarawneh1; 1Binghamton University (United States) and 2Nirma University (India)

11:45 MWSF-542
A natural steganography embedding scheme dedicated to color sensors in the JPEG domain, Patrick Bas1, Théo Taburet1, Wadih Sawaya2, and Jessica Fridrich3; 1CNRS (France), 2IMT Lille-Douai (France), and 3Binghamton University (United States)

12:30 – 2:00 pm Lunch

Wednesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights and have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award (2010) for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break

Forensics

Session Chair: Scott Craver, Binghamton University (United States)
3:30 – 4:50 pm
Cypress C

3:30 MWSF-543 Statistical sequential analysis for object-based video forgery detection, Mohammed Aloraini, Mehdi Sharifzadeh, Chirag Agarwal, and Dan Schonfeld, University of Illinois at Chicago (United States)

3:55 MWSF-544
Explaining and improving a machine learning based printer identification system, Karthick Shankar, Alexander Gokan, Zhi Li, and Jan Allebach, Purdue University (United States)

4:20 MWSF-545
Tackling in-camera downsizing for reliable camera ID verification, Erkam Tandogan, Enes Altinisik, Salim Sarimurat, and Husrev Taha Sencar, TOBB University of Economics and Technology (Turkey)

4:45 Conference Closing Remarks

Media Watermarking, Security, and Forensics 2019 Interactive Posters Session

5:30 – 7:00 pm
The Grove

The following work will be presented at the EI 2019 Symposium Interactive Papers Session.

MWSF-546 Hybrid G-PRNU: A novel scale-invariant approach for asymmetric PRNU matching, associating videos to source smartphones, Reepjyoti Deka, Chiara Galdi, and Jean-Luc Dugelay, Eurecom (France)

electronicimaging.org #EI2019 Titles and authors as of 13 January 2019. 105

Photography, Mobile, and Immersive Imaging 2019

Conference Chairs: Jon S. McElvain, Dolby Labs, Inc. (United States); and Nitin Sampat, Edmund Optics, Inc. (United States)

Program Committee: Ajit Bopardikar, Samsung R&D Institute India Bangalore Pvt. Ltd. (India); Peter Catrysse, Stanford University (United States); Henry Dietz, University of Kentucky (United States); Joyce E. Farrell, Stanford University (United States); Boyd Fowler, OmniVision Technologies, Inc. (United States); Orazio Gallo, NVIDIA Research (United States); Sergio Goma, Qualcomm Technologies, Inc. (United States); Zhen He, Intuitive Surgical, Inc. (United States); Francisco Imai, Apple Inc. (United States); Michael Kriss, MAK Consultants (United States); Jiangtao (Willy) Kuang, Facebook, Inc. (United States); Feng Li, Intuitive Surgical, Inc. (United States); Kevin Matherson, Microsoft Corporation (United States); David Morgan-Mar, Canon Information Systems Research Australia Pty Ltd (CISRA) (Australia); Bo Mu, Quanergy Inc. (United States); Oscar Nestares, Intel Corporation (United States); Jackson Roland, Apple Inc. (United States); Radka Tezaur, Intel Corporation (United States); Gordon Wetzstein, Stanford University (United States); and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany)

Conference overview
Photography, Mobile, and Immersive Imaging, previously Digital Photography and Mobile Imaging, expands its scope in 2019 to cover the areas of automotive and medical imaging, machine vision/learning, and topics pertaining to virtual reality, augmented reality, and mixed reality. It serves to bring together researchers, scientists, and engineers working in the fields of mobile and automotive imaging, medical imaging, computational photography, and VR/AR/MR to discuss recent progress and advances in these fields. The technical scope includes novel input hardware and system architecture designs, high dynamic range imaging, sensor architectures, image/video artifact corrections, enhancement, rendering, and imaging pipelines. This conference includes paper presentations and presentation-only talks, as well as joint sessions with other Electronic Imaging conferences with overlapping interests. In Electronic Imaging 2020, PMII will merge with the IMSE conference.

Award (jointly with the IMSE conference)
Arnaud Darmont Memorial Best Paper Award

Conference Sponsor

PHOTOGRAPHY, MOBILE, AND IMMERSIVE IMAGING 2019

Monday January 14, 2019

Machine Learning Applications in Imaging

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Radka Tezaur, Intel Corporation (United States)
10:30 am – 12:00 pm
Regency AB

10:30 PMII-575
Expanding the impact of deep learning (Invited), Ray Ptucha, Rochester Institute of Technology (United States)

11:00 PMII-576
Towards combining domain knowledge and deep learning for computational imaging (Invited), Orazio Gallo, NVIDIA Research (United States)

11:20 PMII-577
Autofocus by deep reinforcement learning of phase data, Chin-Cheng Chan and Homer Chen, National Taiwan University (Taiwan)

11:40 PMII-578
Face skin tone adaptive automatic exposure control, Noha El-Yamany, Jarno Nikkanen, and Jihyeon Yi, Intel Corporation (Finland)

12:30 – 2:00 pm Lunch

Monday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: sensing, planning, and mapping. Shashua will describe the challenges and the kinds of computer vision and machine learning algorithms involved, but will do so through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness, and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest-ever Israeli acquisition deal of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

3:00 – 3:30 pm Coffee Break

Panel: Sensing and Perceiving for Autonomous Driving (Joint Session)

3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

Moderator: Dr. Wende Zhang, technical fellow, General Motors
Panelists:
Dr. Amnon Shashua, professor of computer science, Hebrew University; president and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation
Dr. Boyd Fowler, CTO, OmniVision Technologies
Dr. Christoph Schroeder, head of autonomous driving N.A., Mercedes-Benz R&D Development North America, Inc.
Dr. Jun Pei, CEO and co-founder, Cepton Technologies Inc.

Driver assistance and autonomous driving rely on perceptual systems that combine data from many different sensors, including camera, ultrasound, radar, and lidar. The panelists will discuss the strengths and limitations of different types of sensors and how the data from these sensors can be effectively combined to enable autonomous driving.

5:00 – 6:00 pm All-Conference Welcome Reception

Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

High Dynamic Range Imaging I

Session Chairs: Michael Kriss, MAK Consultants (United States) and Jackson Roland, Apple Inc. (United States)
8:50 – 9:30 am
Regency AB

PMII-579
KEYNOTE: High dynamic range imaging: History, challenges, and opportunities, Greg Ward, Dolby Laboratories, Inc. (United States)

Greg Ward is a pioneer in the HDR space, having developed the first widely used high dynamic range image file format in 1986 as part of the RADIANCE lighting simulation system. Since then, he has developed the LogLuv TIFF HDR and JPEG-HDR image formats, and created Photosphere, an HDR image builder and browser. He has been involved with BrightSide Technology and Dolby's HDR display developments. He is currently a senior member of technical staff for research at Dolby Laboratories. He also consults for the Lawrence Berkeley National Lab on RADIANCE development, and for IRYStec, Inc. on OS-level mobile display software.

High Dynamic Range Imaging II

Session Chairs: Michael Kriss, MAK Consultants (United States) and Jackson Roland, Apple Inc. (United States)
9:30 – 10:10 am
Regency AB

9:30 PMII-580
High dynamic range imaging for high performance applications (Invited), Boyd Fowler, OmniVision Technologies (United States)

9:50 PMII-581
Improved image selection for stack-based HDR imaging, Peter van Beek, University of Waterloo (Canada)

10:00 am – 7:00 pm Industry Exhibition

10:10 – 10:40 am Coffee Break

Camera Pipelines and Processing I

Session Chairs: Boyd Fowler, OmniVision Technologies (United States) and Francisco Imai, Apple Inc. (United States)
10:40 – 11:20 am
Regency AB

PMII-582
KEYNOTE: Unifying principles of camera processing pipeline in the rapidly changing imaging landscape, Keigo Hirakawa, University of Dayton (United States)

Keigo Hirakawa is an associate professor at the University of Dayton. Prior to UD, he was with Harvard University as a research associate in the department of statistics. He simultaneously earned his PhD in electrical and computer engineering from Cornell University and his MM in jazz performance from the New England Conservatory of Music. Hirakawa received his MS in electrical and computer engineering from Cornell University and his BS in electrical engineering from Princeton University. He is an associate editor for IEEE Transactions on Image Processing and for the SPIE/IS&T Journal of Electronic Imaging, and served on the technical committee of IEEE SPS IVMSP as well as the organization committees of IEEE ICIP 2012 and IEEE ICASSP 2017. He has received a number of recognitions, including a paper award at IEEE ICIP 2007 and keynote speeches at IS&T CGIV, PCSJ-IMPS, CSAJ, and IAPR CCIW.

Camera Pipelines and Processing II

Session Chairs: Boyd Fowler, OmniVision Technologies (United States) and Francisco Imai, Apple Inc. (United States)
11:20 am – 12:40 pm
Regency AB

11:20 PMII-583
Rearchitecting and tuning ISP pipelines (Invited), Kari Pulli, stealth startup (United States)

11:40 PMII-584
Image sensor oversampling (Invited), Scott Campbell, Area4 Professional Design Services (United States)

12:00 PMII-585
Credible repair of Sony main-sensor PDAF striping artifacts, Henry Dietz, University of Kentucky (United States)

12:20 PMII-586
Issues reproducing handshake on mobile phone cameras, Francois-Xavier Bucher, Jae Young Park, Ari Partinen, and Paul Hubel, Apple Inc. (United States)

12:40 – 2:00 pm Lunch

Tuesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises of and the tremendous progress made recently toward the development of head-mounted displays (HMD) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. She will particularly focus on the recent progress, challenges, and opportunities in developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized throughout academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of an NSF CAREER Award in 2006 and was honored as UA Researchers @ Lead Edge in 2010. Hua and her students have shared a total of 8 "Best Paper" awards at various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa in 2003, was a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

Computational Models for Human Optics (Joint Session)

Session Chair: Jennifer Gille, Oculus VR (United States)
3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-704
Eye model implementation (Invited), Andrew Watson, Apple Inc. (United States)

Dr. Andrew Watson is the chief vision scientist at Apple Inc., where he specializes in vision science, psychophysics, display human factors, visual human factors, computational modeling of vision, and image and video compression. For thirty-four years prior to joining Apple, Watson was the senior scientist for vision research at NASA. Watson received his PhD in psychology from the University of Pennsylvania (1977) and followed that with postdoctoral work in vision at the University of Cambridge.

3:50 EISS-700
Wide field-of-view optical model of the human eye (Invited), James Polans, Verily Life Sciences (United States)

Dr. James Polans is an engineer who works on surgical robotics at Verily Life Sciences in South San Francisco. Polans received his PhD in biomedical engineering from Duke University under the mentorship of Joseph Izatt. His doctoral work explored the design and development of wide field-of-view optical coherence tomography systems for retinal imaging. He also has an MS in electrical engineering from the University of Illinois at Urbana-Champaign.

4:10 EISS-702
Evolution of the Arizona Eye Model (Invited), Jim Schwiegerling, University of Arizona (United States)

Prof. Jim Schwiegerling is a professor in the College of Optical Sciences at the University of Arizona. His research interests include the design of ophthalmic systems such as corneal topographers, ocular wavefront sensors, and retinal imaging systems. In addition to these systems, Schwiegerling has designed a variety of multifocal intraocular and contact lenses and has expertise in diffractive and extended depth of focus systems.

4:30 EISS-705
Berkeley Eye Model (Invited), Brian Barsky, University of California, Berkeley (United States)

Prof. Brian Barsky is professor of computer science and affiliate professor of optometry and vision science at UC Berkeley. He attended McGill University, Montréal, where he received a DCS in engineering and a BSc in mathematics and computer science. He studied computer graphics and computer science at Cornell University, Ithaca, where he earned an MS. His PhD is in computer science from the University of Utah, Salt Lake City. He is a fellow of the American Academy of Optometry. His research interests include computer-aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer-aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation.

electronicimaging.org #EI2019 Paper titles and authors as of 1 Jan 2019; final proceedings manuscript may vary. 109 Photography, Mobile, and Immersive Imaging 2019

4:50 EISS-701 Modeling retinal image formation for light field displays (Invited), Hekun Huang, Mohan Xu, and Hong Hua, University of Arizona (United States)

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized through academia and industry as an expert in wearable display technologies and optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers and filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of the NSF Career Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students shared a total of 8 "Best Paper" awards in various IEEE, SPIE and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa in 2003, was a Beckman Research Fellow at the Beckman Institute of University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

5:10 EISS-703 Ray-tracing 3D spectral scenes through human optics (Invited), Trisha Lian, Kevin MacKenzie, and Brian Wandell, Stanford University (United States)

Trisha Lian is an electrical engineering PhD student at Stanford University. Before Stanford, she received her bachelor's in biomedical engineering from Duke University. She is currently advised by Professor Brian Wandell and works on interdisciplinary topics that involve image systems simulations. These range from novel camera designs to simulations of the human visual system.

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday January 16, 2019

Medical Imaging - Camera Systems Joint Session
Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Ralf Widenhorn, Portland State University (United States)
8:50 – 10:30 am
Grand Peninsula Ballroom D
This medical imaging session is jointly sponsored by: Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

8:50 PMII-350 Plenoptic medical cameras (Invited), Liang Gao, University of Illinois Urbana-Champaign (United States)

9:10 PMII-351 Simulating a multispectral imaging system for oral cancer screening (Invited), Joyce Farrell, Stanford University (United States)

9:30 PMII-352 Imaging the body with miniature cameras, towards portable healthcare (Invited), Ofer Levi, University of Toronto (Canada)

9:50 PMII-353 Self-calibrated surface acquisition for integrated positioning verification in medical applications, Sven Jörissen1, Michael Bleier2, and Andreas Nüchter1; 1University of Wuerzburg and 2Zentrum für Telematik e.V. (Germany)

10:10 IMSE-354 Measurement and suppression of multipath effect in time-of-flight depth imaging for endoscopic applications, Ryota Miyagi1, Yuta Murakami1, Keiichiro Kagawa1, Hajime Nagahara2, Kenji Kawashima3, Keita Yasutomi1, and Shoji Kawahito1; 1Shizuoka University, 2Osaka University, and 3Tokyo Medical and Dental University (Japan)

10:00 am – 3:30 pm Industry Exhibition

10:10 – 10:50 am Coffee Break


Automotive Image Sensing I Joint Session
Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
10:50 am – 12:10 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

10:50 IMSE-050 KEYNOTE: Recent trends in the image sensing technologies, Vladimir Koifman, Analog Value Ltd. (Israel)

Vladimir Koifman is a founder and CTO of Analog Value Ltd. Prior to that, he was co-founder of Advasense Inc., acquired by Pixim/Sony Image Sensor Division. Prior to co-founding Advasense, Koifman co-established the AMCC analog design center in Israel and led the analog design group for three years. Before AMCC, Koifman worked for 10 years in Motorola Semiconductor Israel (Freescale), managing an analog design group. He has more than 20 years of experience in the VLSI industry and has technical leadership in analog chip design, mixed-signal chip/system architecture, and electro-optic device development. Koifman has more than 80 granted patents and several papers. Koifman also maintains the Image Sensors World blog.

11:30 AVM-051 KEYNOTE: Solid-state LiDAR sensors: The future of autonomous vehicles, Louay Eldada, Quanergy Systems, Inc. (United States)

Louay Eldada is CEO and co-founder of Quanergy Systems, Inc. Eldada is a serial entrepreneur, having founded and sold three businesses to Fortune 100 companies. Quanergy is his fourth start-up. Eldada is a technical business leader with a proven track record at both small and large companies and, with 71 patents, is a recognized expert in quantum optics, nanotechnology, photonic integrated circuits, advanced optoelectronics, sensors and robotics. Prior to Quanergy, he was CSO of SunEdison, after serving as CTO of HelioVolt, which was acquired by SK Energy. Eldada was earlier CTO of DuPont Photonic Technologies, formed by the acquisition of Telephotonics where he was founding CTO. His first job was at Honeywell, where he started the Telecom Photonics business and sold it to Corning. He studied business administration at Harvard, MIT and Stanford, and holds a PhD in optical engineering from Columbia University.

Automotive Image Sensing II Joint Session
Session Chairs: Kevin Matherson, Microsoft Corporation (United States); Arnaud Peizerat, CEA (France); and Peter van Beek, Intel Corporation (United States)
12:10 – 12:50 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by: Autonomous Vehicles and Machines 2019, Image Sensors and Imaging Systems 2019, and Photography, Mobile, and Immersive Imaging 2019.

12:10 PMII-052 Driving, the future – The automotive imaging revolution (Invited), Patrick Denny, Valeo (Ireland)

12:30 AVM-053 A system for generating complex physically accurate sensor images for automotive applications, Zhenyi Liu1,2, Minghao Shen1, Jiaqi Zhang3, Shuangting Liu3, Henryk Blasinski2, Trisha Lian2, and Brian Wandell2; 1Jilin University (China), 2Stanford University (United States), and 3Beihang University (China)

12:50 – 2:00 pm Lunch

Wednesday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The lightfields technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems which use geodesic spheres of inward-pointing LED lights, as have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award (2010) for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.


3:00 – 3:30 pm Coffee Break

Light Field Imaging and Display Joint Session
Session Chair: Gordon Wetzstein, Stanford University (United States)
3:30 – 5:30 pm
Grand Peninsula Ballroom D
This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-706 Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science (2002) from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision, awarded Dec. 2017, and an IEEE Fellow for contributions to foundations of computer graphics and computer vision, awarded Jan. 2017.

4:10 EISS-707 The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University (2005). Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and lightfields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside new display technology, LEIA Inc. is developing Leia Loft™ — a whole new canvas.

4:30 EISS-708 Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a distinguished engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University (2004), where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.

4:50 EISS-709 Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Pulli holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, University of Oulu, and MIT.

5:10 EISS-710 Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and chief technical officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Hirsch received his bachelors from Tufts University in computer engineering, and his Masters and Doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an imaging engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Hirsch has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.

Photography, Mobile, and Immersive Imaging 2019 Interactive Posters Session
5:30 – 7:00 pm
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.

PMII-587 A new methodology in optimizing the auto-flash quality of mobile cameras, Abtin Ghelmansaraei, Quarry Lane High School (United States)

PMII-588 Deep video super-resolution network for flickering artifact reduction, Il Jun Ahn, Jae-yeon Park, Yongsup Park, and Tammy Lee, Samsung Electronics (Republic of Korea)

PMII-589 Fast restoring of high dynamic range image appearance for multi-partial reset sensor, Ziad Youssfi and Firas Hassan, Ohio Northern University (United States)

PMII-590 Shuttering methods and the artifacts they produce, Henry Dietz and Paul Eberhart, University of Kentucky (United States)


Thursday January 17, 2019

Imaging Systems Joint Session
Session Chairs: Atanas Gotchev, Tampere University of Technology (Finland) and Michael Kriss, MAK Consultants (United States)
8:50 – 10:10 am
Regency B
This session is jointly sponsored by: Image Processing: Algorithms and Systems XVII, and Photography, Mobile, and Immersive Imaging 2019.

8:50 PMII-278 EDICT: Embedded and distributed intelligent capture technology (Invited), Scott Campbell, Timothy Macmillan, and Katsuri Rangam, Area4 Professional Design Services (United States)

9:10 IPAS-279 Modeling lens optics and rendering virtual views from fisheye imagery, Filipe Gama, Mihail Georgiev, and Atanas Gotchev, Tampere University of Technology (Finland)

9:30 PMII-280 Digital distortion correction to measure spatial resolution from cameras with wide-angle lenses, Brian Rodricks1 and Yi Zhang2; 1SensorSpace, LLC and 2Facebook Inc. (United States)

9:50 IPAS-281 LiDAR assisted large-scale privacy protection in street view cycloramas, Clint Sebastian1, Bas Boom2, Egor Bondarev1, and Peter De With1; 1Eindhoven University of Technology and 2CycloMedia Technology B.V. (the Netherlands)

10:10 – 11:00 am Coffee Break

Stereoscopic Displays and Applications XXX

Conference overview

The World's Premier Conference for 3D Innovation

The Stereoscopic Displays and Applications conference (SD&A) focuses on developments covering the entire stereoscopic 3D imaging pipeline from capture, processing, and display to perception. The conference brings together practitioners and researchers from industry and academia to facilitate an exchange of current information on stereoscopic imaging topics. The highly popular conference demonstration session provides authors with a perfect additional opportunity to showcase their work. Large-screen stereoscopic projection is available, and presenters are encouraged to make full use of these facilities during their presentations. Publishing your work at SD&A offers excellent exposure—across all publication outlets, SD&A has the highest proportion of papers in the top 100 cited papers in the stereoscopic imaging field (Google Scholar, May 2013).

Awards
Best use of stereoscopy in a presentation
Best film (animation)
Best film (live action)

Events
Monday evening 3D Theatre

Conference Chairs: Andrew J. Woods, Curtin University (Australia); Gregg E. Favalora, Draper (United States); Nicolas S. Holliman, Newcastle University (United Kingdom); and Takashi Kawai, Waseda University (Japan)

Program Committee: Neil A. Dodgson, Victoria University of Wellington (New Zealand); Davide Gadia, University degli Studi di Milano (Italy); Hideki Kakeya, University of Tsukuba (Japan); Stephan R. Keith, SRK Graphics Research (United States); Michael Klug, Magic Leap, Inc. (United States); Björn Sommer, University of Konstanz (Germany); John D. Stern, Intuitive Surgical, Inc. (Retired) (United States); and Chris Ward, Lightspeed Design, Inc. (United States)

Founding Chair: John O. Merritt, The Merritt Group (United States)

Conference Sponsors: Projection System

3D Theatre Partners


STEREOSCOPIC DISPLAYS AND APPLICATIONS XXX

Monday, January 14, 2019

30th SD&A Special Session
Session Chair: Takashi Kawai, Waseda University (Japan)
8:50 – 10:20 am
Grand Peninsula Ballroom BC

8:50 SD&A-625 3D image processing - From capture to display (Invited), Toshiaki Fujii, Nagoya University (Japan)

9:10 SD&A-626 3D TV based on spatial imaging (Invited), Masahiro Kawakita, Hisayuki Sasaki, Naoto Okaichi, Masanori Kano, Hayato Watanabe, Takuya Oomura, and Tomoyuki Mishina, NHK Science and Technology Research Laboratories (Japan)

9:30 SD&A-627 Stereoscopic capture and viewing parameters: Geometry and perception (Invited), Robert Allison and Laurie Wilcox, York University (Canada)

9:50 30 Years of SD&A - Milestones and statistics, Andrew Woods, Curtin University (Australia)

10:10 Conference Opening Remarks

10:20 – 10:50 am Coffee Break

Autostereoscopic Displays I
Session Chair: Gregg Favalora, Draper (United States)
10:50 am – 12:30 pm
Grand Peninsula Ballroom BC

10:50 SD&A-628 A Full-HD super-multiview display with a deep viewing zone, Hideki Kakeya and Yuta Watanabe, University of Tsukuba (Japan)

11:10 SD&A-629 A 360-degrees holographic true 3D display unit using a Fresnel phase plate, Levent Onural, Bilkent University (Turkey)

11:30 SD&A-630 Electro-holographic light field projector modules: Progress in SAW AOMs, illumination, and packaging, Gregg Favalora, Michael Moebius, Valerie Bloomfield, John LeBlanc, and Sean O'Connor, Draper (United States)

11:50 SD&A-631 Thin form-factor super multiview head-up display system, Ugur Akpinar, Erdem Sahin, Olli Suominen, and Atanas Gotchev, Tampere University of Technology (Finland)

12:10 SD&A-632 Dynamic multi-view autostereoscopy, Yuzhong Jiao, Man Chi Chan, and Mark P. C. Mok, ASTRI (Hong Kong)

12:30 – 2:00 pm Lunch

Monday Plenary
2:00 – 3:00 pm
Grand Peninsula Ballroom D

Autonomous Driving Technology and the OrCam MyEye, Amnon Shashua, President and CEO, Mobileye, an Intel Company, and senior vice president, Intel Corporation (United States)

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning and Mapping. Shashua will describe the challenges and the kind of computer vision and machine learning algorithms involved, but will do that through the perspective of Mobileye's activity in this domain. He will then describe how OrCam leverages computer vision, situation awareness and language processing to enable the blind and visually impaired to interact with the world through a miniature wearable device.

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has founded three startups in the computer vision and machine learning fields. In 1995 he founded CogniTens, which specializes in the area of industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he cofounded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title for largest Israeli IPO ever, by raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever of $15.3B. Today, Shashua is the president and CEO of Mobileye and a senior vice president of Intel Corporation. In 2010 Shashua co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.

3:00 – 3:30 pm Coffee Break


Autostereoscopic Displays II
Session Chair: John O. Merritt, The Merritt Group (United States)
3:30 – 3:50 pm
Grand Peninsula Ballroom BC

SD&A-633 Spirolactam rhodamines for multiple color volumetric 3D digital light photoactivatable dye displays, Maha Aljowni, Uroob Haris, Bo Li, Cecilia O'Brien, and Alexander Lippert, Southern Methodist University (United States)

SD&A Keynote I
Session Chair: Andrew Woods, Curtin University (Australia)
3:50 – 4:50 pm
Grand Peninsula Ballroom BC

SD&A-658 KEYNOTE: From set to theater: Reporting on the 3D cinema business and technology roadmaps, Tony Davis, RealD Inc. (United States)

Tony Davis is the VP of technology at RealD, where he works with an outstanding team to perfect the cinema experience from set to screen. Davis has a Masters in electrical engineering from Texas Tech University, specializing in advanced signal acquisition and processing. After several years working as a technical staff member for Los Alamos National Laboratory, Davis was director of engineering for a highly successful line of medical and industrial X-ray computed tomography systems at 3M. Later, he was the founder of Tessive, a company dedicated to improvement of temporal representation in motion picture cameras.

5:00 – 6:00 pm All-Conference Welcome Reception

SD&A Conference 3D Theatre
Session Chairs: John Stern, Intuitive Surgical, Inc. (United States) and Andrew Woods, Curtin University (Australia)
6:00 – 7:30 pm
Grand Peninsula Ballroom BC

This ever-popular session of each year's Stereoscopic Displays and Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened in the 3D Theatre Session is shown in high-quality polarized 3D on a large screen. The final program will be announced at the conference and 3D glasses will be provided.

SD&A Conference Annual Dinner
7:50 – 10:00 pm
Offsite - details provided on ticket

The annual informal dinner for SD&A attendees. An opportunity to meet with colleagues and discuss the latest advances. There is no host for the dinner. Information on venue and cost will be provided on the day at the conference.

Tuesday January 15, 2019

7:15 – 8:45 am Women in Electronic Imaging Breakfast

Light Field Imaging and Displays
Session Chair: Hideki Kakeya, University of Tsukuba (Japan)
8:50 – 10:10 am
Grand Peninsula Ballroom BC

8:50 SD&A-634 Light-field display architecture and the heterogeneous display ecosystem, Thomas Burnett, FoVI3D (United States)

9:10 SD&A-635 Understanding ability of 3D integral displays to provide accurate out-of-focus retinal blur with experiments and diffraction simulations, Ginni Grover, Oscar Nestares, and Ronald Azuma, Intel Corporation (United States)

9:30 SD&A-636 EPIModules on a geodesic: Toward 360-degree light-field imaging, Harlyn Baker, EPIImaging, LLC (United States)

9:50 SD&A-637 A photographing method of Integral Photography with high angle reproducibility of light rays, Shotaro Mori, Yue Bao, and Norigi Oishi, Tokyo City University Graduate School (Japan)

10:00 am – 7:00 pm Industry Exhibition

10:10 – 10:50 am Coffee Break

Stereoscopic Vision Testing
Session Chair: John Stern, Intuitive Surgical, Inc. (United States)
10:50 – 11:30 am
Grand Peninsula Ballroom BC

10:50 SD&A-638 Operational based vision assessment: Stereo acuity testing research and development, Marc Winterbottom1, Eleanor O'Keefe2, Maria Gavrilescu3, Mackenzie Glaholt4, Asao Kobayashi5, Yukiko Tsujimoto5, Amanda Douglass6, Elizabeth Shoda2, Peter Gibbs3, Charles Lloyd7, James Gaska1, and Steven Hadley1; 1US Air Force School of Aerospace Medicine, 2KBRwyle, 3Defence, Science & Technology, 4Defence Research and Development Canada (Canada), 5Aeromedical Laboratory, Japan Air Self Defense Force (Japan), 6Deakin University (Australia), and 7Visual Performance, LLC (United States)

11:10 SD&A-639 Operational based vision assessment: Evaluating the effect of stereoscopic display crosstalk on simulated remote vision system depth discrimination, Eleanor O'Keefe1, Charles Lloyd2, Tommy Bullock3, Alexander Van Atta1, and Marc Winterbottom3; 1KBRwyle, 2Visual Performance, and 3US Air Force School of Aerospace Medicine (United States)


SD&A Keynote 2

Session Chair: Nicolas Holliman, University of Newcastle (United Kingdom)
11:30 am – 12:30 pm
Grand Peninsula Ballroom BC

SD&A-640
KEYNOTE: What good is imperfect 3D?, Miriam Ross, Victoria University of Wellington (New Zealand)

Dr. Miriam Ross is senior lecturer in the Film Programme at Victoria University of Wellington. She works with new technologies to combine creative methodologies and traditional academic analysis. She is the author of South American Cinematic Culture: Policy, Production, Distribution and Exhibition (2010) and 3D Cinema: Optical Illusions and Tactile Experiences (2015), as well as publications and creative works relating to film industries, mobile media, virtual reality, stereoscopic media, and film festivals.

12:30 – 2:00 pm Lunch

Tuesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

The Quest for Vision Comfort: Head-Mounted Light Field Displays for Virtual and Augmented Reality, Hong Hua, professor of optical sciences, University of Arizona (United States)

Hong Hua will discuss the high promises of, and the tremendous progress made recently toward, the development of head-mounted displays (HMD) for both virtual and augmented reality. Developing HMDs that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. She will particularly focus on the recent progress, challenges, and opportunities for developing head-mounted light field displays (LF-HMD), which are capable of rendering true 3D synthetic scenes with proper focus cues to stimulate natural eye accommodation responses and address the well-known vergence-accommodation conflict in conventional stereoscopic displays.

Dr. Hong Hua is a professor of optical sciences at the University of Arizona. With more than 25 years of experience, Hua is widely recognized in academia and industry as an expert in wearable display technologies and in optical imaging and engineering in general. Hua's current research focuses on optical technologies enabling advanced 3D displays, especially head-mounted display technologies for virtual reality and augmented reality applications, and microscopic and endoscopic imaging systems for medicine. Hua has published more than 200 technical papers, filed a total of 23 patent applications in her specialty fields, and delivered numerous keynote addresses and invited talks at major conferences and events worldwide. She is an SPIE Fellow and OSA senior member. She was a recipient of an NSF CAREER Award in 2006 and honored as UA Researchers @ Lead Edge in 2010. Hua and her students have shared a total of 8 "Best Paper" awards in various IEEE, SPIE, and SID conferences. Hua received her PhD in optical engineering from the Beijing Institute of Technology in China (1999). Prior to joining the UA faculty in 2003, Hua was an assistant professor with the University of Hawaii at Manoa, was a Beckman Research Fellow at the Beckman Institute of the University of Illinois at Urbana-Champaign between 1999 and 2002, and was a post-doc at the University of Central Florida in 1999.

3:00 – 3:30 pm Coffee Break

Visualization Facilities Joint Session

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Björn Sommer, University of Konstanz (Germany)
3:30 – 5:10 pm
Grand Peninsula Ballroom BC

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

3:30 SD&A-641
Tiled stereoscopic 3D display wall – Concept, applications and evaluation, Björn Sommer, Alexandra Diehl, Karsten Klein, Philipp Meschenmoser, David Weber, Michael Aichem, Daniel Keim, and Falk Schreiber, University of Konstanz (Germany)

3:50 SD&A-642
The quality of stereo disparity in the polar regions of a stereo panorama, Daniel Sandin1,2, Haoyu Wang3, Alexander Guo1, Ahmad Atra1, Dick Ainsworth4, Maxine Brown3, and Tom DeFanti2; 1Electronic Visualization Lab (EVL), University of Illinois at Chicago, 2California Institute for Telecommunications and Information Technology (Calit2), University of California San Diego, 3University of Illinois at Chicago, and 4Ainsworth & Partners, Inc. (United States)

4:10 SD&A-644
Opening a 3-D museum - A case study of 3-D SPACE, Eric Kurland, 3-D SPACE (United States)

4:30 SD&A-645
State of the art of multi-user virtual reality display systems, Juan Munoz Arango, Dirk Reiners, and Carolina Cruz-Neira, University of Arkansas at Little Rock (United States)

4:50 SD&A-646
StarCAM - A 16K stereo panoramic video camera with a novel parallel interleaved arrangement of sensors, Dominique Meyer1, Daniel Sandin2, Christopher McFarland1, Eric Lo1, Gregory Dawe1, Haoyu Wang2, Ji Dai1, Maxine Brown2, Truong Nguyen1, Harlyn Baker3, Falko Kuester1, and Tom DeFanti1; 1University of California, San Diego, 2University of Illinois at Chicago, and 3EPIImaging, LLC (United States)

5:30 – 7:00 pm Symposium Demonstration Session

Wednesday January 16, 2019

360, 3D, and VR Joint Session

Session Chairs: Neil Dodgson, Victoria University of Wellington (New Zealand) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)
8:50 – 10:10 am
Grand Peninsula Ballroom BC

This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2019, and Stereoscopic Displays and Applications XXX.

8:50 SD&A-647
Enhanced head-mounted eye tracking data analysis using super-resolution, Qianwen Wan1, Aleksandra Kaszowska1, Karen Panetta1, Holly Taylor1, and Sos Agaian2; 1Tufts University and 2CUNY/ The College of Staten Island (United States)


9:10 SD&A-648
Effects of binocular parallax in 360-degree VR images on viewing behavior, Yoshihiro Banchi, Keisuke Yoshikawa, and Takashi Kawai, Waseda University (Japan)

9:30 SD&A-649
Visual quality in VR head mounted device: Lessons learned with StarVR headset, Bernard Mendiburu, Starbreeze (United States)

9:50 SD&A-650
Time course of sickness symptoms with HMD viewing of 360-degree videos (JIST-first), Jukka Häkkinen1, Fumiya Ohta2, and Takashi Kawai2; 1University of Helsinki (Finland) and 2Waseda University (Japan)

10:00 am – 3:30 pm Industry Exhibition
10:10 – 10:50 am Coffee Break

Autostereoscopic Displays III

Session Chair: Chris Ward, Lightspeed Design, Inc. (United States)
10:50 – 11:30 am
Grand Peninsula Ballroom BC

10:50 SD&A-651
Head-tracked patterned-backlight autostereoscopic (virtual reality) display system, Jean-Etienne Gaudreau, PolarScreens Inc. (Canada)

11:10 SD&A-652
The looking glass: A new type of superstereoscopic display, Shawn Frayne, Looking Glass Factory, Inc. (United States)

SD&A Keynote 3

Session Chair: Andrew Woods, Curtin University (Australia)
11:30 am – 12:40 pm
Grand Peninsula Ballroom BC

SD&A-653
KEYNOTE: Beads of reality drip from pinpricks in space, Mark Bolas, Microsoft Corporation (United States)

Mark Bolas loves perceiving and creating synthesized experiences: to feel, hear, and touch experiences impossible in reality, and yet grounded as designs that bring pleasure, meaning, and a state of flow. His work with Ian McDowall, Eric Lorimer, and David Eggleston at Fakespace Labs; Scott Fisher and Perry Hoberman at USC's School of Cinematic Arts; the team at USC's Institute for Creative Technologies; Niko Bolas at SonicBox; and Frank Wyatt, Dick Moore, and Marc Dolson at UCSD informed results that led to his receipt of both the IEEE Virtual Reality Technical Achievement and Career Awards. See more at https://en.wikipedia.org/wiki/Mark_Bolas

Conference Closing Remarks

12:40 – 2:00 pm Lunch

Wednesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

Paul Debevec will discuss the technology and production processes behind "Welcome to Light Fields", the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modelling and photogrammetry. Debevec will discuss the theory and application of the technique. Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights, have been used to create digital actor effects in movies such as Avatar, Benjamin Button, and Gravity, and have recently been used to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner: 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities to enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments. They have also recently used their full-body light stage in conjunction with natural language processing and automultiscopic video projection to record and project interactive conversations with survivors of the World War II Holocaust.

Paul Debevec is a senior scientist at Google VR, a member of Google VR's Daydream team, and adjunct research professor of computer science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award (2001) for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award (2010) for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures" with Tim Hawkins, John Monos, and Mark Sagar, and the SMPTE Progress Medal (2017) in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine's "Pixel Perfect: The Scientist Behind the Digital Cloning of Actors" article by Margaret Talbot.

3:00 – 3:30 pm Coffee Break


Light Field Imaging and Display Joint Session

Session Chair: Gordon Wetzstein, Stanford University (United States)
Grand Peninsula Ballroom D

This session is jointly sponsored by the EI Steering Committee.

3:30 EISS-706
Light fields - From shape recovery to sparse reconstruction (Invited), Ravi Ramamoorthi, University of California, San Diego (United States)

Prof. Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science, and Director of the Center for Visual Computing, at the University of California, San Diego. Ramamoorthi received his PhD in computer science (2002) from Stanford University. Prior to joining UC San Diego, Ramamoorthi was associate professor of EECS at the University of California, Berkeley, where he developed the complete graphics curricula. His research centers on the theoretical foundations, mathematical representations, and computational algorithms for understanding and rendering the visual appearance of objects, exploring topics in frequency analysis and sparse sampling and reconstruction of visual appearance datasets; a digital data-driven visual appearance pipeline; light-field cameras and 3D photography; and physics-based computer vision. Ramamoorthi is an ACM Fellow for contributions to computer graphics rendering and physics-based computer vision, awarded Dec. 2017, and an IEEE Fellow for contributions to foundations of computer graphics and computer vision, awarded Jan. 2017.

4:10 EISS-707
The beauty of light fields (Invited), David Fattal, LEIA Inc. (United States)

Dr. David Fattal is co-founder and CEO at LEIA Inc., where he is in charge of bringing their mobile holographic display technology to market. Fattal received his PhD in physics from Stanford University (2005). Prior to founding LEIA Inc., Fattal was a research scientist with HP Labs, HP Inc. At LEIA Inc., the focus is on immersive mobile, with screens that come alive in richer, deeper, more beautiful ways. Flipping seamlessly between 2D and light fields, mobile experiences become truly immersive: no glasses, no tracking, no fuss. Alongside new display technology, LEIA Inc. is developing Leia Loft™, a whole new canvas.

4:30 EISS-708
Light field insights from my time at Lytro (Invited), Kurt Akeley, Google Inc. (United States)

Dr. Kurt Akeley is a distinguished engineer at Google Inc. Akeley received his PhD in stereoscopic display technology from Stanford University (2004), where he implemented and evaluated a stereoscopic display that passively (e.g., without eye tracking) produces nearly correct focus cues. After Stanford, Akeley worked with OpenGL at NVIDIA Incorporated, was a principal researcher at Microsoft Corporation, and a consulting professor at Stanford University. In 2010, he joined Lytro Inc. as CTO. During his seven-year tenure as Lytro's CTO, he guided and directly contributed to the development of two consumer light-field cameras and their related display systems, and also to a cinematic capture and processing service that supported immersive, six-degree-of-freedom virtual reality playback.

4:50 EISS-709
Quest for immersion (Invited), Kari Pulli, Stealth Startup (United States)

Dr. Kari Pulli has spent two decades in computer imaging and AR at companies such as Intel, NVIDIA, and Nokia. Before joining a stealth startup, he was the CTO of Meta, an augmented reality company in San Mateo, heading up computer vision, software, displays, and hardware, as well as the overall architecture of the system. Before joining Meta, he worked as the CTO of the Imaging and Camera Technologies Group at Intel, influencing the architecture of future IPUs in hardware and software. Prior to that, he was vice president of computational imaging at Light, where he developed algorithms for combining images from a heterogeneous camera array into a single high-quality image. He previously led research teams as a senior director at NVIDIA Research and as a Nokia Fellow at Nokia Research, where he focused on computational photography, computer vision, and AR. Pulli holds computer science degrees from the University of Minnesota (BSc), University of Oulu (MSc, Lic. Tech), and University of Washington (PhD), as well as an MBA from the University of Oulu. He has taught and worked as a researcher at Stanford, University of Oulu, and MIT.

5:10 EISS-710
Industrial scale light field printing (Invited), Matthew Hirsch, Lumii Inc. (United States)

Dr. Matthew Hirsch is a co-founder and chief technical officer of Lumii. He worked with Henry Holtzman's Information Ecology Group and Ramesh Raskar's Camera Culture Group at the MIT Media Lab, making the next generation of interactive and glasses-free 3D displays. Hirsch received his bachelors from Tufts University in computer engineering, and his masters and doctorate from the MIT Media Lab. Between degrees, he worked at Analogic Corp. as an imaging engineer, where he advanced algorithms for image reconstruction and understanding in volumetric x-ray scanners. His work has been funded by the NSF and the Media Lab consortia, and has appeared in SIGGRAPH, CHI, and ICCP. Hirsch has also taught courses at SIGGRAPH on a range of subjects in computational imaging and display, with a focus on DIY.

Stereoscopic Displays and Applications XXX Interactive Posters Session

5:30 – 7:00 pm
The Grove

The following works will be presented at the EI 2019 Symposium Interactive Papers Session.

SD&A-654
A comprehensive head-mounted eye tracking review: Software solutions, applications, and challenges, Qianwen Wan1, Aleksandra Kaszowska1, Karen Panetta1, Holly Taylor1, and Sos Agaian2; 1Tufts University and 2CUNY/ The College of Staten Island (United States)

SD&A-655
A study on 3D projector with four parallaxes, Shohei Yamaguchi and Yue Bao, Tokyo City University (Japan)

SD&A-656
Saliency map based multi-view rendering for autostereoscopic displays, Yuzhong Jiao, Man Chi Chan, and Mark P. C. Mok, ASTRI (Hong Kong)

SD&A-657
Semi-automatic post-processing of multi-view 2D-plus-depth video, Braulio Sespede1, Florian Seitner2, and Margrit Gelautz1; 1TU Wien and 2Emotion3D (Austria)

Visualization and Data Analysis 2019

Conference overview
The Conference on Visualization and Data Analysis (VDA) 2019 covers all research and development and application aspects of data visualization and visual analytics. Since the first VDA conference was held in 1994, the annual event has served as a major venue for visualization researchers and practitioners from around the world to present their work and share their experiences.

Award
Kostas Pantazos Memorial Award for Outstanding Paper

Conference Sponsors

Conference Chairs: Thomas Wischgoll, Wright State University (United States); Song Zhang, Mississippi State University (United States); David Kao, NASA Ames Research Center (United States); and Yi-Jen Chiang, New York University (United States)

Program Committee: Madjid Allili, Bishop's University (Canada); Wes Bethel, Lawrence Berkeley National Laboratory (United States); Abon Chaudhuri, Intel Corporation (United States); Guoning Chen, University of Houston (United States); Joseph Cottam, Pacific Northwest National Laboratory (United States); Sussan Einakian, The University of Alabama in Huntsville (United States); Ulrich Engelke, CSIRO (Australia); John Gerth, Stanford University (United States); Matti Gröhn, Finnish Institute of Occupational Health (Finland); Christopher G. Healey, North Carolina State University (United States); Halldór Janetzko, University of Konstanz (Germany); Ming Jiang, Lawrence Livermore National Laboratory (United States); Andreas Kerren, Linnaeus University (Sweden); Harinarayan Krishnan, Lawrence Livermore National Laboratory (United States); Robert Lewis, Washington State University (United States); Peter Lindstrom, Lawrence Livermore National Laboratory (United States); Zhanping Liu, Kentucky State University (United States); Aidong Lu, The University of North Carolina at Charlotte (United States); G. Elisabeta Marai, University of Illinois at Chicago (United States); Richard May, Pacific Northwest National Laboratory (United States); Theresa-Marie Rhyne, Computer Graphics and E-Learning (United States); René Rosenbaum, meeCoda (Germany); Jibonananda Sanyal, Oak Ridge National Laboratory (United States); Pinaki Sarder, University of Buffalo (United States); Graig Sauer, Towson University (United States); Jürgen Schulze, University of California, San Diego (United States); Chad Steed, Oak Ridge National Laboratory (United States); Kalpathi Subramanian, The University of North Carolina at Charlotte (United States); Shigeo Takahashi, University of Aizu (Japan); Chaoli Wang, University of Notre Dame (United States); and Leishi Zhang, Middlesex University London (United Kingdom)


VISUALIZATION AND DATA ANALYSIS 2019

Wednesday January 16, 2019

Wednesday Plenary

2:00 – 3:00 pm
Grand Peninsula Ballroom D

Light Fields and Light Stages for Photoreal Movies, Games, and Virtual Reality, Paul Debevec, senior scientist, Google (United States)

3:00 – 3:30 pm Coffee Break

Visualization and Data Analysis 2019 Interactive Posters Session

5:30 – 7:00 pm
The Grove

The VDA program includes works to be presented at the EI 2019 Symposium Interactive Papers Session. Refer to the Visualization and Data Analysis 2019 Interactive Papers Overview session on Thursday morning for the list of entries.

Thursday January 17, 2019

Data Visualization and Displays

Session Chair: David Kao, NASA Ames Research Center (United States)
8:50 – 9:30 am
Harbour B

VDA-675
KEYNOTE: Data visualization using large-format display systems, Thomas Wischgoll, Wright State University (United States)

Professor Thomas Wischgoll is the director of visualization research and professor in the computer science & engineering department at Wright State University. Wischgoll received his PhD in computer science from the University of Kaiserslautern (2002), and was a post-doctoral researcher at the University of California, Irvine from 2003 through 2005. The Advanced Visual Data Analysis (AViDA) group at Wright State is devoted to research and support of the community in the areas of scientific visualization, medical imaging and visualization, virtual environments, information visualization and analysis, big data analysis, and data science. The AViDA group runs and supports the Appenzeller Visualization Laboratory, a state-of-the-art visualization facility that supports large-scale visualization and fully immersive virtual reality equipment. The Appenzeller Visualization Laboratory provides access to cutting-edge visualization technology and equipment, including a traditional CAVE-type setup as well as other fully immersive display environments.

Visualization and Data Analysis 2019 Interactive Papers Overview

Session Chair: Yi-Jen Chiang, New York University (United States)
9:30 – 10:00 am
Harbour B

In this session, interactive poster authors will each provide a brief oral overview of their poster presentation, presented interactively in the Visualization and Data Analysis 2019 Interactive Papers Session at 5:30 pm on Wednesday.

9:30 VDA-676
Visual analytic process to familiarize the average person with ways to apply machine learning, Andrew Tran, Yamini Dasu, and Anna Baynes, California State University, Sacramento (United States)

9:40 VDA-677
Visualization of carbon monoxide particles released from firearms, Sadan Suneesh Menon and Thomas Wischgoll, Wright State University (United States)


9:50 VDA-678
Visualizing tweets from confirmed fake Russian accounts, Stephen Hsu, David Kes, and Alark Joshi, University of San Francisco (United States)

10:10 – 10:50 am Coffee Break

Data Analysis and Visual Analytics

Session Chair: Thomas Wischgoll, Wright State University (United States)
10:50 am – 12:10 pm
Harbour B

10:50 VDA-679
Chemometric data analysis with autoencoder neural network, Muhammad Bilal1 and Mohib Ullah2; 1University of Trento (Italy) and 2Norwegian University of Science and Technology (NTNU) (Norway)

11:10 VDA-680
Dynamic color mapping with a multi-scale histogram: A design study with physical scientists, Junghoon Chae, Chad Steed, John Goodall, and Steven Hahn, Oak Ridge National Laboratory (United States)

11:30 VDA-681
CCVis: Visual analytics of student online learning behaviors using course clickstream data, Maggie Goulden1, Eric Gronda2, Yurou Yang3, Zihang Zhang3, Jun Tao4, Chaoli Wang4, Xiaojing Duan4, G. Alex Ambrose4, Kevin Abbott4, and Patrick Miller4; 1Trinity College Dublin (Ireland), 2University of Maryland, Baltimore County (United States), 3Zhejiang University (China), and 4University of Notre Dame (United States)

11:50 VDA-682
Correlation visualisation for sleep data analytics in SWAPP (Sleep Wake Application), Amal Vincent, Simon Fraser University (Canada)

12:10 – 1:30 pm Lunch

Scientific Visualization

Session Chair: David Kao, NASA Ames Research Center (United States)
2:00 – 2:40 pm
Harbour B

2:00 VDA-683
Visualizing mathematical knot equivalence, Juan Lin and Hui Zhang, University of Louisville (United States)

2:20 VDA-684
Visualization and data analysis of quantum computations in high energy, nuclear and condensed matter physics, Michael McGuigan, Raffaele Miceli, Charles Kocher, Tri Duong, Christopher Kane, and Brandon Ortega, Brookhaven National Laboratory (United States)

Information Visualization

Session Chair: Thomas Wischgoll, Wright State University (United States)
2:40 – 3:20 pm
Harbour B

2:40 VDA-685
VideoSwarm: Analyzing video ensembles, Shawn Martin1, Milosz Sielicki2, Jaxon Gittinger1, Matthew Letter1, Warren Hunt1, and Patricia Crossno1; 1Sandia National Laboratories and 2Foster Milo (United States)

3:00 VDA-686
M-QuBE3: Querying big multilayer graph by evolutive extraction and exploration, Antoine Laumond1, Mohammad Ghoniem2, Bruno Pinaud1, and Guy Melancon1; 1Bordeaux University - LaBRI (France) and 2Luxembourg Institute of Science and Technology (Luxembourg)


EI 2019 SHORT COURSE DESCRIPTIONS

Sunday, January 13, 2019

SC01: Stereoscopic Display Application Issues
8:00 am – 5:45 pm
Course Length: 8 hours
Course Level: Intermediate
Instructors: John O. Merritt, The Merritt Group, and Andrew J. Woods, Curtin University
Fee*: Member: $510 / Non-member: $560 / Student: $195
*after December 18, 2018, member/non-member prices increase by $50; the student price increases by $20

When correctly implemented, stereoscopic 3D video displays can provide significant benefits in many areas, including endoscopy and other medical imaging, remote-control vehicles and telemanipulators, stereo 3D CAD, molecular modeling, 3D computer graphics, 3D visualization, and video-based training. This course conveys a concrete understanding of basic principles and pitfalls that should be considered in transitioning from 2D to 3D displays, and in testing for performance improvements. In addition to the traditional lecture sessions, there is a "workshop" session to demonstrate stereoscopic hardware and 3D imaging/display principles, emphasizing the key issues in an ortho-stereoscopic video display setup, and showing video from a wide variety of applied stereoscopic imaging systems.

Learning Outcomes
• List critical human factors guidelines for stereoscopic display configuration and implementation.
• Calculate optimal camera focal length, separation, display size, and viewing distance to achieve a desired level of depth acuity.
• Calculate comfort limits for focus/fixation mismatch and on-screen parallax values, as a function of focal length, separation, convergence, display size, and viewing distance factors.
• Set up a large-screen stereo display system using AV equipment readily available at most conference sites for slides and for full-motion video.
• Evaluate the trade-offs among currently available stereoscopic display technologies for your proposed applications.
• List the often-overlooked side-benefits of stereoscopic displays that should be included in a cost/benefit analysis for proposed 3D applications.
• Avoid common pitfalls in designing tests to compare 2D vs. 3D displays.
• Calculate and demonstrate the distortions in perceived 3D space due to camera and display parameters.
• Design and set up an orthostereoscopic 3D imaging/display system.
• Understand the projective geometry involved in stereo modeling.
• Understand the trade-offs among currently available stereoscopic display system technologies and determine which will best match a particular application.

Intended Audience
Engineers, scientists, and program managers involved with video display systems for applications such as medical imaging and endoscopic surgery, simulators and training systems, teleoperator systems (remote-control vehicles and manipulators), computer graphics, 3D CAD systems, data-space exploration and visualization, and virtual reality.

Instructors
John O. Merritt is a display systems consultant at The Merritt Group, Williamsburg, MA, with more than 25 years' experience in the design and human-factors evaluation of stereoscopic video displays for telepresence and telerobotics, scientific visualization, and medical imaging.

Andrew J. Woods is manager of the Curtin HIVE visualization facility and a research engineer at Curtin University's Centre for Marine Science and Technology in Perth, Western Australia. He has more than 20 years of experience working on the design, application, and evaluation of stereoscopic image and video capture and display equipment.

SC02: Color and Calibration in Mobile Imaging Devices
8:00 – 10:00 am
Course Length: 2 hours
Course Level: Introductory/Intermediate
Instructors: Kevin J. Matherson, Microsoft Corporation, and Uwe Artmann, Image Engineering GmbH & Co. KG
Fee*: Member: $185 / Non-member: $210 / Student: $65
*after December 18, 2018, member/non-member prices increase by $50; the student price increases by $20

When an image is captured using a digital imaging device, it needs to be rendered. For consumer cameras this processing is done within the camera and covers various steps such as dark current subtraction, flare compensation, shading, color compensation, demosaicing, white balancing, tonal and color correction, sharpening, and compression. Each of these steps has a significant influence on image quality. In order to design and tune cameras, it is important to understand how color camera hardware varies, as well as the methods that can be used to calibrate such variations. This course provides the basic methods describing the capture and processing of a color camera image. Participants examine basic color image capture and how calibration can improve images using a typical color imaging pipeline. In the course, participants are shown how raw image data influences color transforms and white balance. The knowledge acquired in understanding the image capture and calibration process can be used to understand tradeoffs in improving overall image quality.

Learning Outcomes
• Understand how hardware choices in compact cameras impact calibrations and the type of calibrations performed, and how such choices can impact overall image quality.
• Describe basic image processing steps for compact color cameras.
• Understand calibration methods for mobile camera modules.
• Describe the differences between class calibration and individual module calibration.
• Understand how spectral sensitivities and color matrices are calculated.
• Understand how the calibration light source impacts calibration.
• Describe required calibration methods based on the hardware chosen and the image processing used.
• Appreciate artifacts associated with color shading and incorrect calibrations.
• Learn about the impacts of pixel saturation and the importance of controlling it for color.
• Learn about the impact of tone reproduction on perceived color (skin tone, memory colors, etc.).

Intended Audience
People involved in the design and image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

Instructors
Kevin J. Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a masters and PhD in optical sciences from the University of Arizona.

Uwe Artmann studied photo technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German 'Diploma Engineer'. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.

…signal and image processing, computer vision, color imaging, and bioinformatics. He has extensive experience in media security across diverse application domains. Prior to joining the University of Rochester, he was a Principal Scientist and Project Leader at the Xerox Innovation Group. Additionally, he has consulted for several companies on the development of image processing and computer vision algorithms. He holds 51 issued patents and has authored more than 195 peer-reviewed publications. He is the editor of the "Digital Color Imaging Handbook" published by CRC Press. He is the Editor-in-Chief for the IEEE Transactions on Image Processing and previously served as the Editor-in-Chief for the IS&T/SPIE Journal of Electronic Imaging from 2011 through 2015. Dr. Sharma is a fellow of IS&T, IEEE, and SPIE.

SC03: Advanced Image Enhancement and Deblurring
8:00 am – 12:15 pm
Course Length: 4 hours
Course Level: Advanced
Instructor: Majid Rabbani, consultant
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2018, member/non-member prices increase by $50; the student price increases by $20

This course presents some of the advanced algorithms used in contrast enhancement, noise reduction, sharpening, and deblurring of still images and video. Applications include consumer and professional imaging, medical imaging, forensic imaging, surveillance, and astronomical imaging. Many image examples complement the technical descriptions.

SC05: An Introduction to Blockchain
8:00 – 10:00 am
Course Length: 2 hours
Course Level: Introductory
Instructor: Gaurav Sharma, University of Rochester Fee*: Member: $185 / Non-member: $210 / Student: $65 Learning Outcomes *after December 18, 2018, members / non-members prices increase by • Understand advanced algorithms used for contrast enhancement such $50, student price increases by $20 as CLAHE, Photoshop Shadows/Highlights, and Dynamic Range Compression (DRC). This course introduces attendees to blockchains, which have recently • Understand advanced techniques used in image sharpening such as emerged as a revolutionary technology that has the potential to disrupt a advanced variations of nonlinear unsharp masking, etc. range of diverse business processes and applications. Using a concrete • Understand recent advancements in image noise removal, such as application setting, the course illustrates the construction of blockchains as bilateral filtering and nonlocal means. a distributed, secure, and tamper-resistant framework for the management • Understand how motion information can be utilized in image sequences of transaction ledgers. Necessary background in the technologies underly- to improve the performance of various enhancement techniques. ing blockchains, including basic cryptographic concepts, are introduced as • Understand Wiener filtering and its variations for performing image required. Current and emerging applications of blockchains are surveyed, deblurring (restoration). including those in media security. Intended Audience Learning Outcomes Scientists, engineers, and technical managers who need to understand • Explain how the blockchain construction provides resistance against and/or apply the techniques employed in digital image processing in tampering. various products in a diverse set of applications such as medical imaging, • Distinguish between centralized and distributed ledgers and highlight professional and consumer imaging, forensic imaging, etc. will benefit from their pros and cons. this course. 
Some knowledge of digital filtering (convolution) and frequency • Describe the concepts of proof-of-work and proof-of-stake. decomposition is necessary for understanding the deblurring concepts. • Explain the utility and applicability of blockchains in diverse applications. • Cite example applications of blockchains. Instructor Majid Rabbani has more than 35 years of experience in digital imag- Intended Audience ing. After a 33-year career at Kodak Research Labs, he retired in 2016 Engineers, scientists, students, and managers interested in understanding with the rank of Kodak Fellow. Currently, he is a visiting professor at how blockchains are constructed and how they can be useful in a variety Rochester Institute of Technology (RIT). He is the co-recipient of the 2005 of business processes. The course includes an overview of necessary back- and 1988 Kodak C. E. K. Mees Awards and the co-recipient of two ground information, such as cryptographic tools utilized in the blockchain; Emmy Engineering Awards in 1990 and 1996. He has 44 issued US prior familiarity with these concepts is not required. patents and is the co-author of the book Digital Image Compression Techniques published in 1991 and the creator of six video/CDROM Instructor courses in the area of digital imaging. Rabbani is a Fellow of SPIE and Gaurav Sharma is a professor of electrical and computer engineering and of IEEE and a Kodak Distinguished Inventor. He has been an active educator computer science at the University of Rochester where his research spans sig- in the digital imaging community for the past 31years.
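The tamper resistance highlighted in the SC05 learning outcomes comes from hash chaining: each block commits to the hash of its predecessor, so editing any earlier block breaks every later link. A minimal illustrative sketch (our own toy code, not course material; all function names are invented):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the canonical JSON serialization of a block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Link blocks so each one commits to the hash of its predecessor."""
    chain, prev = [], "0" * 64  # placeholder hash for the genesis predecessor
    for rec in records:
        block = {"data": rec, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify_chain(chain) -> bool:
    """Recompute every link; an edit to an earlier block breaks a later link."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True
```

Note that the newest block has no successor, so its hash must be pinned externally (or protected by proof-of-work or proof-of-stake, as covered in the course) for the whole ledger to be tamper-evident.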


SC04: Digital Camera Image Quality Tuning
8:00 am – 12:15 pm

Course Length: 4 hours
Course Level: Introductory/Intermediate
Instructor: Luke Cui, Amazon
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2018, member/non-member prices increase by $50; student price increases by $20

A critical step in developing a digital camera product is image quality tuning – the process of balancing and setting the camera operating parameters that generate the best raw images, hide defects inherent to each camera technology, and make the images appear most pleasing. Image quality tuning is complex and full of pitfalls, yet it directly impacts the competitiveness of the product and customer satisfaction. The course covers the complete engineering process as well as fundamental science and techniques, with practical examples, including 1) 3A tuning; 2) objective image quality tuning; 3) subjective image quality tuning; 4) image quality evaluation and competitive benchmarking; and 5) the nuts and bolts of managing the process.

Learning Outcomes
• Understand the camera image quality tuning goals.
• Understand the hardware capabilities and limitations based on specifications and testing.
• Understand the features, capabilities, limitations, and tunabilities of the image processing pipelines.
• Deep dive into the tuning process and workflow.
• Explore 3A models, the tuning processes, metrics, and testing.
• Deep dive into various ISP modules and image processing techniques.
• Learn about camera module variations and per-module factory calibration.
• Understand subjective image quality and competitive benchmarking for tuning.
• Discuss new trends in digital camera image quality performance.
• Review the image quality of the top three cellphone cameras of the year.

Intended Audience
Engineers, scientists, and program managers involved with digital camera development.

Instructor
Luke Cui has been working hands-on on imaging systems for more than twenty-five years, with a BS in optics, an MS in color science, and a PhD in human vision. He has been involved with the delivery of numerous market-proven digital imaging systems, working from photons, lenses, sensors, cameras, color science, image processing, and image quality evaluation systems to psychophysics and human vision. He has more than sixty patents and patent applications. He has worked for Macbeth Co. on standard lighting, color formulation, spectrophotometry, and colorimetry; led high-speed document scanner optical imaging system development at Lexmark International, working from lens design to final image pipeline tuning; and led camera tuning of most Surface products on the market at Microsoft, covering system specification, ISP evaluation and selection, and all phases of camera tuning. Currently he is with Prime Air at Amazon.

SC06: Perceptual Metrics for Image and Video Quality in a Broader Context
8:00 am – 12:15 pm

Course Length: 4 hours
Course Level: Intermediate (Prerequisites: Basic understanding of image compression algorithms; background in digital signal processing and basic statistics: frequency-based representations, filtering, distributions.)
Instructors: Thrasyvoulos N. Pappas, Northwestern University, and Sheila S. Hemami, Draper
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2018, member/non-member prices increase by $50; student price increases by $20

The course examines objective criteria for the evaluation of image quality that are based on models of visual perception. The primary emphasis is on image fidelity, i.e., how close an image is to a given original or reference image, but the scope of image fidelity is broadened to include structural equivalence. Also discussed are no-reference and limited-reference metrics. An examination of a variety of applications, with special emphasis on image and video compression, is included. We examine near-threshold perceptual metrics, which explicitly account for human visual system (HVS) sensitivity to noise by estimating thresholds above which the distortion is just-noticeable, and supra-threshold metrics, which attempt to quantify visible distortions encountered in high-compression applications or when there are losses due to channel conditions. The course also considers metrics for structural equivalence, whereby the original and the distorted image have visible differences but both look natural and are of equally high visual quality. This short course takes a close look at procedures for evaluating the performance of quality metrics, including database design, models for generating realistic distortions for various applications, and subjective procedures for metric development and testing. Throughout the course we discuss both the state of the art and directions for future research.

Learning Outcomes
• Gain a basic understanding of the properties of the human visual system and how current applications (image and video compression, restoration, retrieval, etc.) attempt to exploit these properties.
• Gain an operational understanding of existing perceptually-based and structural similarity metrics, the types of images/artifacts on which they work, and their failure modes.
• Understand current distortion models for different applications and how they can be used to modify or develop new metrics for specific contexts.
• Understand the differences between sub-threshold and supra-threshold artifacts, the HVS responses to these two paradigms, and the differences in measuring that response.
• Understand criteria by which to select and interpret a particular metric for a particular application.
• Understand the capabilities and limitations of full-reference, limited-reference, and no-reference metrics, and why each might be used in a particular application.

Intended Audience
Image and video compression specialists who wish to gain an understanding of how performance can be quantified. Engineers and scientists who wish to learn about objective image and video quality evaluation. Managers who wish to gain a solid overview of image and video quality evaluation. Students who wish to pursue a career in digital image processing. Intellectual property and patent attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging.
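To make the structural-similarity idea in the SC06 description concrete, here is a single-window variant of the SSIM index; the standard metric computes this quantity over local windows and averages the results, so this global form is purely illustrative (our own sketch, not course material):

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Single-window SSIM: luminance, contrast, and structure terms computed
    once over the whole image (the standard metric averages local windows)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

An identical pair scores 1.0, and additive noise pulls the score below 1; the local-window averaging used in practice is what lets the metric localize visible structural differences.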


Instructors
Thrasyvoulos N. Pappas received SB, SM, and PhD degrees in electrical engineering and computer science from MIT in 1979, 1982, and 1987, respectively. From 1987 until 1999, he was a member of the technical staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the department of electrical and computer engineering at Northwestern University, which he joined in 1999. His research interests are in image and video quality and compression, image and video analysis, content-based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Pappas served as co-chair of the 2005 SPIE/IS&T Electronic Imaging (EI) Symposium, and since 1997 he has been co-chair of the EI Conference on Human Vision and Electronic Imaging. Pappas is a Fellow of IEEE and SPIE. He is currently serving as Vice President-Publications for the IEEE Signal Processing Society. He has also served as Editor-in-Chief of the IEEE Transactions on Image Processing (2010-12), elected member of the Board of Governors of the Signal Processing Society of IEEE (2004-06), chair of the IEEE Image and Multidimensional Signal Processing (now IVMSP) Technical Committee, and technical program co-chair of ICIP-01 and ICIP-09.

Sheila S. Hemami received a BSEE from the University of Michigan (1990) and MSEE and PhD degrees from Stanford University (1992 and 1994). She was most recently at Northeastern University as professor and chair of the electrical engineering and computer science department at the College of Engineering; she was with Hewlett-Packard Laboratories in Palo Alto, California, in 1994, and with the School of Electrical Engineering at Cornell University from 1995-2013. She is currently Director, Strategic Technical Opportunities, at Draper, Cambridge, MA. Her research interests broadly concern communication of visual information from the perspectives of both signal processing and psychophysics. She was elected a Fellow of the IEEE in 2009 for contributions to robust and perceptual image and video communications. Hemami has held various visiting positions, most recently at the University of Nantes, France, and at Ecole Polytechnique Fédérale de Lausanne, Switzerland. She has received numerous university and national teaching awards, including Eta Kappa Nu's C. Holmes MacDonald Award. She was a Distinguished Lecturer for the IEEE Signal Processing Society in 2010-2011 and was editor-in-chief of the IEEE Transactions on Multimedia from 2008-2010. She has held various technical leadership positions in the IEEE.

SC07: Visualization Tools and Techniques
8:00 am – 12:15 pm

Course Length: 4 hours
Course Level: Introductory
Instructor: Nicolas Holliman, Newcastle University
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2018, member/non-member prices increase by $50; student price increases by $20

This course provides an introduction for first-time users to modern visualization tools and techniques. It includes a strong practical element that gets you up to speed with Power BI, a modern BI visualization tool. The course covers the following topics: visualization definitions, choosing between tools, visualizing categorical, numerical, and spatial data, and publishing visualizations. You should leave knowing how to generate your own visualizations quickly and easily using a modern visualization tool.

Learning Outcomes
• Understand visualization definitions.
• Learn about and distinguish between software tools.
• Understand how to generate visualizations from categorical, numerical, and spatial data.

Intended Audience
Anyone with, for example, a basic knowledge of Excel (opening, editing, saving, creating formulas) who wants to learn about data visualization and how to start using it in their own projects.

Preparation
Participants will need the free version of Microsoft Power BI installed on laptops they bring with them to take part in the course. Example data sets and lecture slides will be provided on the day, on USB or possibly via web download.

Instructor
Nick Holliman is Professor of Visualization at Newcastle University, UK, where he researches the science and engineering of visualization and visual analytics, addressing the fundamental challenges of visualizing big data. His research includes the psychophysics of the human visual system, the creation of novel algorithms for the control of image content to match human abilities, and demonstrating how these algorithms work in practice in scalable cloud-based software tools and award-winning 3D visualizations. He has worked in both industrial and academic environments and is experienced in delivering commercial impact from research outputs. He has led the design of high-performance visualization theatres at four different institutions, which have been specified to support both individual and team-based decision making. He currently holds a Turing Research Fellowship in visualization with the Alan Turing Institute, London.

SC25: Color Optimization for Displays
10:15 am – 12:15 pm

Course Length: 2 hours
Course Level: Intermediate
Instructor: Gabriel Marcu, Apple Inc.
Fee*: Member: $185 / Non-member: $210 / Student: $65
*after December 18, 2018, member/non-member prices increase by $50; student price increases by $20

This course introduces color optimization techniques for various display types (LCD, plasma, OLED, QLED, and projection: DLP, LCD, LCoS), ranging from mobile devices to large LCD TV screens. Factors such as technology, luminance level (including HDR), dynamic/static contrast ratio (including local dimming), linearization and gamma correction, gray tracking, color gamut (including wide gamut), white point, response time, viewing angle, uniformity, color model, calibration, and characterization are discussed, and color optimization methods for displays are presented.

Learning Outcomes
• Identify the critical parameters and their impact on display color quality for various display types (LCD, plasma, OLED, QLED) and applications (smartphones, tablets, notebooks, desktops, TVs, and projectors).
• Select the optimal color model for a display and highlight its dependency on display technology.
• Understand the advantages of LED backlight modulation and the principles of quantum dot gamut enhancement for QLED technology.
• Understand the critical factors for HDR displays and wide gamut displays.
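The linearization and gamma-correction factors discussed in SC25 can be made concrete with the standard sRGB transfer functions, which map between encoded pixel values and linear-light values (an illustrative sketch, not course material):

```python
import numpy as np

def srgb_to_linear(v):
    """sRGB decoding: a linear segment near black, a 2.4-power segment
    elsewhere, per IEC 61966-2-1."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(l):
    """Inverse of srgb_to_linear (sRGB encoding)."""
    l = np.asarray(l, dtype=np.float64)
    return np.where(l <= 0.0031308, l * 12.92, 1.055 * l ** (1 / 2.4) - 0.055)
```

Gamma and gray-tracking measurements compare a display's actual response against a reference curve such as this one; 50% encoded gray, for instance, corresponds to only about 21% linear luminance.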


• Compare color performance and limitations for various LCD modes, such as IPS, MVA, and FFS.
• Understand the use of the color model for the display ICC profile and the implications for color management.
• Follow a live calibration and characterization of an LCD screen and projector used in the class, using tools varying from visual calibrators to instrument-based ones.
• Apply the knowledge from the course to practical problems of color optimization for displays.

Intended Audience
Engineers, scientists, managers, pre-press professionals, and those confronting display-related color issues.

Instructor
Gabriel Marcu is senior scientist at Apple Inc. His achievements are in color reproduction on displays and desktop printing (characterization/calibration, halftoning, gamut mapping, ICC profiling, HDR imaging, RAW color conversion). He holds more than 80 issued patents in these areas. Marcu is responsible for color calibration and characterization of Apple desktop display products. He has taught seminars and courses on color topics at various IS&T, SPIE, and SID conferences and at IMI Europe. He was co-chair of the 2006 SPIE/IS&T Electronic Imaging Symposium and of CIC11, and he is co-chair of the Electronic Imaging Symposium's Color Imaging Conference: Displaying, Hardcopy, Processing, and Applications. Marcu is an IS&T and SPIE Fellow.

SC08: Resolution in Mobile Imaging Devices: Concepts and Measurements
10:15 am – 12:15 pm

Course Length: 2 hours
Course Level: Introductory/Intermediate
Instructors: Kevin J. Matherson, Microsoft Corporation, and Uwe Artmann, Image Engineering GmbH & Co. KG
Fee*: Member: $185 / Non-member: $210 / Student: $65
*after December 18, 2018, member/non-member prices increase by $50; student price increases by $20

Resolution is often used to describe the image quality of electronic imaging systems. Components of an imaging system such as lenses, sensors, and image processing impact the overall resolution and image quality achieved in devices such as digital and mobile phone cameras. While image processing can in some cases improve the resolution of an electronic camera, it can also introduce artifacts. This course is an overview of the spatial resolution methods used to evaluate electronic imaging devices and of the impact of image processing on the final system resolution. The course covers the basics of resolution and the impacts of image processing, the international standards used for the evaluation of spatial resolution, and practical aspects of measuring resolution in electronic imaging devices, such as target choice, lighting, sensor resolution, and proper measurement techniques.

Learning Outcomes
• Understand the terminology used to describe resolution of electronic imaging devices.
• Describe the basic methods of measuring resolution in electronic imaging devices and their pros and cons.
• Understand the point spread function and the modulation transfer function.
• Learn slanted-edge spatial frequency response (SFR).
• Learn Siemens Star SFR.
• Understand the contrast transfer function.
• Understand the difference between, and the use of, object-space and image-space resolution.
• Describe the impact of image processing functions on spatial resolution.
• Understand practical issues associated with resolution measurements.
• Understand targets, lighting, and measurement setup.
• Learn measurement of lens resolution and sensor resolution.
• Appreciate RAW vs. processed image resolution measurements.
• Learn cascade properties of resolution measurements.
• Understand measurement of camera resolution.
• Understand the practical considerations when measuring real lenses.
• Specify center versus corner resolution.
• Learn about the impact of actuator tilt.
• Learn about the impact of field curvature.
• Understand through-focus MTF.

Intended Audience
Managers, engineers, and technicians involved in the design and evaluation of image quality of digital cameras, mobile cameras, video cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

Instructors
Kevin J. Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a master's and PhD in optical sciences from the University of Arizona.

Uwe Artmann studied photo technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer and finished with the German 'Diploma Engineer'. He is now the CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.

SC09: Imaging Quality Testing: Developments for Mobile, Automotive & Machine Vision
1:30 – 3:30 pm

Course Length: 2 hours
Course Level: Intermediate
Instructors: Don Williams, Image Science Associates, and Peter Burns, Burns Digital Imaging
Fee*: Member: $185 / Non-member: $210 / Student: $65
*after December 18, 2018, member/non-member prices increase by $50; student price increases by $20

This course extends our introductory course material by addressing emerging standards and methods for, e.g., automotive (ADAS) and machine-vision applications. We start by describing how and why imaging performance methods are being adapted and adopted. Most efforts rely on several ISO-defined methods, e.g., for color encoding, image resolution, distortion, and noise. While several performance measurement protocols are similar, the image quality needs are different. For example, the EMVA 1288 standard for machine vision emphasizes detector signal and noise characteristics, whereas the CPIQ and IEEE P2020 automotive imaging initiatives include attributes due to optical and video performance (e.g., distortion and motion artifacts).
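At its core, the slanted-edge SFR method listed in the SC08 outcomes differentiates an edge spread function (ESF) into a line spread function (LSF) and takes the normalized magnitude of its Fourier transform. A simplified one-dimensional sketch (ISO 12233 additionally projects a slanted 2-D edge into an oversampled ESF, omitted here; this is our own illustrative code, not course material):

```python
import numpy as np

def mtf_from_edge(esf):
    """Estimate an MTF from a 1-D edge profile: ESF -> LSF -> |FFT|,
    normalized so that MTF(0) = 1."""
    esf = np.asarray(esf, dtype=np.float64)
    lsf = np.diff(esf)                  # derivative of the edge profile
    lsf = lsf * np.hanning(lsf.size)    # window to suppress truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```

A perfect step edge yields a flat MTF, while a blurred edge rolls off at high spatial frequencies, which is exactly the behavior the measurement is designed to expose.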


Learning Outcomes
• Understand the difference between imaging performance and image quality.
• Describe why standard performance methods might differ across markets.
• Identify challenges and approaches for evaluating wide field-of-view (FOV) cameras.
• Quantify and mitigate sources of system variability, e.g., in multi-camera systems.

Intended Audience
Image scientists, quality engineers, and others evaluating digital camera and scanner performance. A previous introduction to current ISO methods for imaging performance testing (optical distortion, color error, MTF, etc.) will be useful.

Instructors
Don Williams, founder of Image Science Associates, was previously with Kodak Research Laboratories. His work focuses on quantitative signal and noise performance metrics for digital capture imaging devices and on imaging fidelity issues. He co-leads the TC 42 standardization efforts on digital print and film scanner resolution (ISO 16067-1, ISO 16067-2) and scanner dynamic range (ISO 21550), and he is the editor for the second edition of the digital camera resolution standard (ISO 12233).

Peter Burns is a consultant working in imaging system evaluation, modeling, and image processing. Previously he worked for Carestream Health, Xerox, and Eastman Kodak. A frequent instructor and speaker at technical conferences, he has contributed to several imaging standards. He has taught imaging courses at Kodak, at SPIE and IS&T technical conferences, and at the Center for Imaging Science, RIT.

SC10: 3D Point Cloud Processing
1:30 pm – 5:35 pm

Course Length: 4 hours
Course Level: Introductory
Instructor: Gady Agam, Illinois Institute of Technology
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2018, member/non-member prices increase by $50; student price increases by $20

Point clouds are an increasingly important modality for imaging, with applications ranging from user interfaces to street modeling for GIS. Range sensors such as the Intel RealSense camera are becoming increasingly small and cost-effective, opening up a wide range of applications. The purpose of this course is to review the necessary steps in point cloud processing and introduce fundamental algorithms in this area.

Point cloud processing is similar to traditional image processing in some sense, yet different due to the 3D and unstructured nature of the data. In contrast to a traditional camera sensor, which produces a 2D array of samples representing an image, a range sensor produces 3D point samples representing a 3D surface. The points are generally unorganized and so are termed a "cloud". Once the points are acquired, there is a need to store them in a data structure that facilitates finding neighbors of a given point in an efficient way. The point cloud often contains noise and holes, which can be treated using noise filtering and hole filling algorithms. For computational efficiency purposes, the point cloud may be downsampled. To further organize the points and obtain a higher-level representation, planar or quadratic surface patches can be extracted and segmentation can be performed. For higher-level analysis, key points can be extracted and features can be computed at their locations; these can then be used to facilitate registration and recognition algorithms. Finally, for visualization and analysis purposes, the point cloud may be triangulated. The course discusses and explains the steps described above and introduces the increasingly popular PCL (Point Cloud Library) open-source framework for processing point clouds.

Learning Outcomes
• Describe fundamental concepts for point cloud processing.
• Develop algorithms for point cloud processing.
• Incorporate point cloud processing in your applications.
• Understand the limitations of point cloud processing.
• Use industry-standard tools for developing point cloud processing applications.

Intended Audience
Engineers, researchers, and software developers who develop imaging applications and/or use camera sensors for inspection, control, and analysis.

Instructor
Gady Agam is an associate professor of computer science at the Illinois Institute of Technology. He is the director of the Visual Computing Lab at IIT, which focuses on imaging, geometric modeling, and graphics applications. He received his PhD from Ben-Gurion University (1999).

SC11: 3D Video Processing Techniques for Immersive Environments
1:30 pm – 5:45 pm

Course Length: 4 hours
Course Level: Intermediate
Instructor: Yo-Sung Ho, Gwangju Institute of Science and Technology
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2018, member/non-member prices increase by $50; student price increases by $20

With the emerging market of 3D imaging products, 3D video has become an active area of research and development in recent years. 3D video is the key to providing more realistic and immersive perceptual experiences than the existing 2D counterpart. There are many applications of 3D video, such as 3D movies and 3DTV, which are considered the main drivers of the next-generation technical revolution. Stereoscopic display is the current mainstream technology for 3DTV, while auto-stereoscopic display is a more promising solution that requires more research to resolve the associated technical difficulties. This short course covers the current state-of-the-art technologies for 3D content generation. After defining the basic requirements for realistic 3D multimedia services, we cover various multi-modal immersive media processing technologies. Also addressed are the depth estimation problem for natural 3D scenes and several challenging issues of 3D video processing, such as camera calibration, image rectification, illumination compensation, and color correction. The course discusses MPEG activities for 3D video coding, including depth map estimation, prediction structures for multi-view video coding, multi-view video-plus-depth coding, and intermediate view synthesis for multi-view video display applications.

Learning Outcomes
• Understand the general trend of 3D video services.
• Describe the basic requirements for realistic 3D video services.
• Identify the main components of 3D video processing systems.
• Estimate camera parameters for camera calibration.

128 #EI2019 electronicimaging.org EI 2019 Short Course Descriptions

• Analyze the captured data for image rectification and illumination compensation.
• Apply image processing techniques for color correction and filtering.
• Estimate depth map information from stereoscopic and multi-view images.
• Synthesize intermediate views at virtual viewpoints.
• Review MPEG and JCT-3V activities for 3D video coding.
• Design a 3D video system to handle multi-view video-plus-depth data.
• Discuss various challenging problems related to 3D video services.

Intended Audience
Scientists, engineers, technicians, or managers who wish to learn more about 3D video and related processing techniques. Undergraduate training in engineering or science is assumed.

Instructor
Yo-Sung Ho has been developing video processing systems for digital TV and HDTV, first at Philips Labs in New York and later at ETRI in Korea. He is currently a professor at the school of electrical and computer engineering at Gwangju Institute of Science and Technology (GIST) in Korea, and also Director of the Realistic Broadcasting Research Center at GIST. He has given several tutorial lectures at various international conferences, including the 3DTV Conference, the IEEE International Conference on Image Processing (ICIP), and the IEEE International Conference on Multimedia & Expo (ICME). He earned his PhD in electrical and computer engineering at the University of California, Santa Barbara. He has been an associate editor of IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT).

SC12: Digital Image Forensics
1:30 pm – 5:45 pm
Course Length: 4 hours
Course Level: Intermediate
Instructor: Hany Farid, Dartmouth College
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

Stalin, Mao, Hitler, Mussolini, and many others had photographs manipulated in an attempt to rewrite history. These men understood the power of photography: if they changed photographs, they could change history. Cumbersome and time-consuming darkroom techniques were required to alter the historical record on behalf of Stalin and others. Today, powerful and low-cost digital technology has made it far easier for nearly anyone to alter digital images, and the resulting fakes are often very difficult to detect. This photographic fakery is having a significant impact on many different areas of society. Doctored photographs are appearing in tabloid and fashion magazines, government media, mainstream media, social media, fake news, political ad campaigns, and scientific journals. The technology that can distort and manipulate digital media is developing at break-neck speed, and it is imperative that the technology that can detect such alterations develop just as quickly. The field of photo forensics has emerged to restore some trust to photography. This short course provides a hands-on overview of this field.

At their foundation, photo forensic techniques rely on understanding and modeling the imaging pipeline: the interaction of light with the physical 3-D world, the refraction of light as it passes through the camera lenses, the transformation of light to electrical signals in the camera sensor, and, finally, the conversion of electrical signals into a digital image file. We will learn the underlying theory and practical implementation details of forensic techniques from each step of this imaging pipeline.

Learning Outcomes
• Review the historical and societal context for imaging forensics.
• Understand the underlying theory and practical implementation details of forensic techniques.
• Understand how to model the imaging pipeline.
• Analyze each step of a model imaging pipeline in relation to forensic techniques.

Intended Audience
Scientists, engineers, technicians, or managers who wish to learn more about digital image forensics and related processing techniques. Familiarity with imaging pipeline concepts will be helpful.

Instructor
Prof. Hany Farid is the Albert Bradley 1915 Third Century Professor of Computer Science at Dartmouth. Farid’s research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in computer science and applied mathematics from the University of Rochester (1989) and his PhD in computer science from the University of Pennsylvania (1997). Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, Farid joined the faculty at Dartmouth in 1999. He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and is a Fellow of the IEEE and the National Academy of Inventors. Prof. Farid is also the Chief Technology Officer and co-founder of Fourandsix Technologies and a Senior Adviser to the Counter Extremism Project.

SC13: Optics and Hardware Calibration of Compact Camera Modules
1:30 – 5:45 pm
Course Length: 4 hours
Course Level: Introductory/Intermediate
Instructors: Kevin J. Matherson, Microsoft Corporation, and Uwe Artmann, Image Engineering GmbH & Co. KG
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

Digital and mobile imaging camera and system performance is determined by a combination of sensor characteristics, lens characteristics, and image processing algorithms. Smaller pixels, smaller optics, smaller modules, and lower cost result in more part-to-part variation, driving the need for calibration to maintain good image quality. This short course provides an overview of issues associated with compact imaging modules used in mobile and digital imaging. The course covers optics, sensors, actuators, camera modules, and the camera calibrations typically performed to mitigate issues associated with production variation of lenses, sensors, and autofocus actuators.

Learning Outcomes
• Describe illumination, photons, sensor, and camera radiometry.
• Select optics and sensor for a given application.
• Understand the optics of compact camera modules used for mobile imaging.
• Understand the difficulties in minimizing sensor and camera module size.
• Assess the need for per-unit camera calibrations in compact camera modules.


• Determine camera spectral sensitivities.
• Understand autofocus actuators and why per-unit calibrations are required.
• Perform the various calibrations typically done in compact camera modules (relative illumination, color shading, spectral calibrations, gain, actuator variability, etc.).
• Identify the equipment required for performing calibrations.
• Compare hardware tradeoffs such as temperature variation, its impact on calibration, and overall influence on final quality.

Intended Audience
People involved in the design and image quality of digital cameras, mobile cameras, and scanners will benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

Instructors
Kevin J. Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a masters and PhD in optical sciences from the University of Arizona.

Uwe Artmann studied photo technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German ‘Diploma Engineer’. He is now the CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.

SC14: Perception and Cognition for Imaging
1:30 pm – 5:45 pm
Course Length: 4 hours
Course Level: Introductory/Intermediate
Instructor: Bernice Rogowitz, Visual Perspectives
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

Imaging is a very broad field. We produce a wide range of visual representations that support many different tasks in every industry. These representations are created for human consumption, so it is critical for us to understand how the human sees, interprets, and makes decisions based on this visual information.

The human observer actively processes visual representations using perceptual and cognitive mechanisms that have evolved over millions of years. The goal of this tutorial is to provide an introduction to these processing mechanisms and to show how this knowledge can guide engineering decisions about how to represent data visually. This course provides a fundamental perceptual foundation for approaching important topics in imaging, such as image quality, visual feature analysis, and data visualization. The course begins with early vision mechanisms, such as contrast and color perception, covers important topics in attention and memory, and provides insights into individual differences, aesthetics, and emotion.

Learning Outcomes
• Understand basic principles of spatial, temporal, and color processing by the human visual system.
• Explore basic cognitive processes, including visual attention and semantics.
• Develop skills in applying knowledge about human perception and cognition to real-world imaging and visual analytics applications.

Intended Audience
Imaging scientists, engineers, and application developers. Domain experts are also welcome, since imaging plays a pivotal role in today’s application areas, including finance, medicine, science, environment, telecommunications, sensor integration, augmented and virtual reality, art and design, and others. Students interested in understanding imaging systems from the perspective of the human user are also encouraged to attend, as well as anyone interested in how the visual world is processed by our eye-brain system.

Instructor
Bernice Rogowitz is the Chief Scientist at Visual Perspectives, a consulting and research practice that works with companies and universities to improve visual imaging and visualization systems through a better understanding of human vision and cognition. She created the Data Visualization and Design curriculum at Columbia University, where she is an instructor in the Applied Analytics Program, and is one of the founding Editors-in-Chief (with Thrasyvoulos Pappas) of the new IS&T Journal of Perceptual Imaging, which publishes research at the intersection of human perception/cognition and imaging. Rogowitz received her BS in experimental psychology from Brandeis University and a PhD in vision science from Columbia University, and was a post-doctoral Fellow in the Laboratory for Psychophysics at Harvard University. For many years, she was a scientist and research manager at the IBM T.J. Watson Research Center. Her work includes fundamental research in human color and pattern perception, novel perceptual approaches for visual data analysis and image semantics, and human-centric methods to enhance visual problem solving in medical, financial, and scientific applications. She is the founder and past chair of the IS&T Conference on Human Vision and Electronic Imaging, which has been a vital part of the imaging community for more than 30 years. Rogowitz is a Fellow of IS&T and SPIE and a Senior Member of the IEEE. In 2015, she was named an IS&T Honorary Member and was cited as a “leader in defining the research agenda for human-computer interaction in imaging, driving technology innovation through research in human perception, cognition, and aesthetics.”

SC15: High-Dynamic-Range Theory and Technology
3:45 – 5:45 pm
Course Length: 2 hours
Course Level: Intermediate
Instructors: Alessandro Rizzi, University of Milano, and John McCann, McCann Imaging
Fee*: Member: $185 / Non-member: $210 / Student: $65
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

High Dynamic Range (HDR) imaging is a continuously evolving part of color. It began with the invention of HDR painting in the Renaissance. It continued with multiple exposures that attempt to capture a wider range of scene information, and with recreating HDR scenes by integrating widely-used LCDs with LED illumination. Today, there are HDR televisions using OLED and Quantum Dot technologies, and HDR display standards are in development.
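The SC15 description above notes that HDR capture grew out of taking multiple exposures of the same scene. As a toy flavor of that idea (an illustrative sketch with made-up pixel values and a hypothetical clipping threshold, not course material): each sample that is neither clipped nor lost in the noise floor gives a relative radiance estimate of value divided by exposure time, and averaging those estimates covers a wider range than any single exposure can.

```python
def merge_exposures(pixel_values, exposure_times, saturation=255):
    """Estimate relative scene radiance for one pixel from bracketed exposures.

    Samples near the clipping point or the noise floor are discarded; each
    remaining sample contributes radiance ~ value / exposure_time.
    """
    estimates = [v / t for v, t in zip(pixel_values, exposure_times)
                 if 5 < v < saturation - 5]   # keep only well-exposed samples
    if not estimates:
        return None                            # pixel unusable in every frame
    return sum(estimates) / len(estimates)

# A bright pixel: saturated in the 8 s exposure, well exposed in the 2 s one.
print(merge_exposures([240, 255], exposure_times=[2, 8]))   # → 120.0
```

Real merges weight samples by a camera response curve and, as the course emphasizes, must also account for glare, which limits the range actually recoverable.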


HDR imaging captures and displays more information than conventional imaging. Non-uniform illumination increases the range of light in natural scenes. After a detailed description of the problems in accurate image acquisition, this course focuses on methods of creating and manipulating HDR images using scene measurements of camera images and visual appearances. Measuring the actual physical limitations of scene capture, scene display, and the interaction of these systems with human vision is emphasized, as are the differences between single-pixel and spatial-comparison HDR algorithms. The course presents measurements of the limits of accurate camera acquisition (range and color) and the usable range of light for displays presented to human vision. It discusses the principles of tone rendering and the role of HDR spatial models.

Learning Outcomes
• Explore the history of HDR imaging.
• Understand dynamic range and quantization: the ‘salame’ metaphor.
• Compare single and multiple exposures for scene capture.
• Measure optical limits in acquisition and display: scene-dependent effects of glare.
• Capture RAW scenes in LDR and HDR conditions.
• Understand human vision and a program to calculate the retinal image altered by glare.
• Discuss current HDR TV systems and standards: tone-rendering vs. spatial HDR methods.

Intended Audience
Anyone interested in using HDR imaging in science and applications. This includes students, color scientists, imaging researchers, medical imagers, software and hardware engineers, photographers, cinematographers, and production specialists.

Instructors
Alessandro Rizzi is Full Professor and head of the MIPS Lab in the department of computer science at the University of Milan, teaching fundamentals of digital imaging and colorimetry. He has been doing research since 1990 in the field of digital imaging, with a particular interest in color, visualization, photography, HDR, and the perceptual issues related to digital imaging, interfaces, and lighting. He has been one of the founders of the Italian Color Group, Secretary of CIE Division 8, and IS&T Fellow and Vice President. In 2015 he received the Davies medal from the Royal Photographic Society. Rizzi is co-chair of the IS&T conference “Color Imaging: Displaying, Processing, Hardcopy and Applications”, topical editor for Applied Color Science of the Journal of the Optical Society of America, associate editor of the Journal of Electronic Imaging, member of several program committees of conferences related to color and digital imaging, and author of more than 300 scientific works.

John McCann received a degree in biology from Harvard College (1964). He worked in, and managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography, and the reproduction of fine art. His publications and patents have studied Retinex theory, color constancy, color from rod/cone interactions at low light levels, appearance with scattered light, and HDR imaging. He is a Fellow of IS&T and the Optical Society of America (OSA). He is a past President of IS&T and the Artists Foundation, Boston. He is the IS&T/OSA 2002 Edwin H. Land Medalist and IS&T 2005 Honorary Member.

Monday, January 14, 2019

SC16: Camera Noise Sources and its Characterization Using International Standards
8:30 – 10:30 am
Course Length: 2 hours
Course Level: Introductory to intermediate
Instructors: Kevin J. Matherson, Microsoft Corporation, and Uwe Artmann, Image Engineering GmbH & Co. KG
Fee*: Member: $185 / Non-member: $210 / Student: $65
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

This short course provides an overview of noise sources associated with “light in to byte out” in digital and mobile imaging cameras. The course discusses common noise sources in imaging devices, the influence of image processing on these noise sources, the use of international standards for noise characterization, and simple hardware test setups for characterizing noise.

Learning Outcomes
• Become familiar with basic noise sources in mobile and digital imaging devices.
• Learn how image processing impacts noise sources in digital imaging devices.
• Make noise measurements based on international standards: EMVA 1288, ISO 14524, ISO 15739, and visual noise measurements.
• Describe simple test setups for measuring noise based on international standards.
• Predict system-level camera performance using international standards.

Intended Audience
People involved in the design and image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

Instructors
Kevin J. Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for consumer products. Prior to Microsoft, he participated in the design and development of compact cameras at HP and has more than 15 years of experience developing miniature cameras for consumer products. His primary research interests focus on sensor characterization, optical system design and analysis, and the optimization of camera image quality. Matherson holds a masters and PhD in optical sciences from the University of Arizona.

Uwe Artmann studied photo technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German ‘Diploma Engineer’. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.
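The SC16 description references EMVA 1288-style measurements made from flat-field captures. One standard trick in that family of methods is estimating temporal noise from the difference of two flat-field frames: differencing cancels the fixed-pattern component, and the variance of the difference is twice the per-frame temporal variance. A minimal sketch with made-up pixel values (flattened to a 1-D list for brevity):

```python
from statistics import pvariance

def temporal_noise(frame_a, frame_b):
    """Temporal noise sigma from two flat-field frames (two-frame method)."""
    diff = [a - b for a, b in zip(frame_a, frame_b)]
    return (pvariance(diff) / 2) ** 0.5   # var(diff) = 2 * var(temporal)

f1 = [100, 102, 98, 101, 99, 100]
f2 = [101, 100, 99, 100, 100, 100]
print(round(temporal_noise(f1, f2), 3))   # → 0.816 (digital numbers, rms)
```

A full EMVA 1288 characterization repeats this over an exposure sweep to build a photon-transfer curve, which the standard then uses to derive gain, read noise, and full-well capacity.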


SC17: 3D Imaging
8:30 am – 12:45 pm
Course Length: 4 hours
Course Level: Introductory
Instructor: Gady Agam, Illinois Institute of Technology
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

The purpose of this course is to introduce algorithms for 3D structure inference from 2D images. In many applications, inferring 3D structure from 2D images can provide crucial sensing information. The course begins by reviewing geometric image formation and the mathematical concepts used to describe it, and then moves to discuss algorithms for 3D model reconstruction.

The problem of 3D model reconstruction is an inverse problem, in which we need to infer 3D information based on incomplete (2D) observations. We discuss reconstruction algorithms which utilize information from multiple views. Reconstruction requires the knowledge of some intrinsic and extrinsic camera parameters and the establishment of correspondence between views. Also discussed are algorithms for determining camera parameters (camera calibration) and for obtaining correspondence using epipolar constraints between views. The course introduces relevant 3D imaging software components available through the industry-standard OpenCV library.

Learning Outcomes
• Describe fundamental concepts in 3D imaging.
• Develop algorithms for 3D model reconstruction from 2D images.
• Incorporate camera calibration into your reconstructions.
• Classify the limitations of reconstruction techniques.
• Use industry-standard tools for developing 3D imaging applications.

Intended Audience
Engineers, researchers, and software developers who develop imaging applications and/or use camera sensors for inspection, control, and analysis. The course assumes basic working knowledge concerning matrices and vectors.

Instructor
Gady Agam is an associate professor of computer science at the Illinois Institute of Technology. He is the director of the visual computing lab at IIT, which focuses on imaging, geometric modeling, and graphics applications. He received his PhD from Ben-Gurion University (1999).

SC18: Production Line Camera Color Calibration
8:30 am – 12:45 pm
Course Length: 4 hours
Course Level: Intermediate
Instructors: Dietmar Wueller, Image Engineering GmbH & Co. KG, and Eric Walowit, Consultant
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

This short course covers the process of establishing baseline colorimetric camera characterization transforms along with subsequent individual-unit production-line calibration to increase quality and yield while lowering cost. The need for camera characterization and calibration and their impact on general image quality is first reviewed. Known issues in traditional approaches are discussed. Methodology for building camera colorimetric transforms and profiles is detailed step by step. State-of-the-art solutions using current technology are presented, including monochromators, multispectral LED light sources, in situ measurements of spectral radiances of natural objects, and modern color transform methods including multidimensional color look-up tables. Practical considerations and methods for high-speed production-line camera module calibration are detailed. This short course provides the basis needed to implement advanced color correction in cameras and software, as well as to update the relevant parameters in a production-line environment.

Learning Outcomes
• Understand the need for camera colorimetric characterization and the impact of color calibration on image quality and manufacturing yield.
• Perform target-based and spectral-based camera characterization.
• Solve for colorimetric camera transforms and build profiles using linear and nonlinear techniques.
• Evaluate current colorimetric camera characterization hardware and software technology and products.
• Implement production-line per-module camera color calibration solutions.

Intended Audience
Engineers, project leaders, and managers involved in camera image processing pipeline development, image quality engineering, and production-line quality assurance.

Instructors
Dietmar Wueller studied photographic sciences from 1987 to 1992 in Cologne. He is the founder of Image Engineering, one of the leading suppliers of test equipment for digital image capture devices. Wueller is a member of IS&T, DGPh, and ECI, and he is the German representative for ISO/TC42/WG18; he also participates in several other standardization activities.

Eric Walowit’s interests are in color management, appearance estimation, and image processing pipelines for digital photographic applications. He is the founder (retired) of Color Savvy Systems, a color management hardware and software company. He graduated from RIT’s Image Science program (1985), concentrating in Color Science. Walowit is a member of ICC, ISO/TC42, and IS&T.

SC19: Developing Enabling Technologies for Automated Driving
10:45 am – 12:45 pm
Course Length: 2 hours
Course Level: Introductory
Instructors: Forrest Iandola, Kurt Keutzer, and Joseph Gonzalez, University of California, Berkeley
Fee*: Member: $185 / Non-member: $210 / Student: $65
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

This course presents speakers covering the latest research on enabling technologies for automated driving, including sensors, perception, and motion planning and control.

Learning Outcomes
• Describe autonomous vehicle imaging challenges and constraints.
• Review the current limitations of real-time 3D imaging for automated driving.
• Understand emerging imaging technologies for autonomous vehicle imaging.

Intended Audience
Engineers, researchers, and software developers who develop automated driving imaging applications. The course assumes basic working knowledge concerning computer vision.

Instructors
Dr. Forrest Iandola is an American computer scientist and entrepreneur. He is currently the CEO at DeepScale. Iandola earned his PhD in electrical engineering and computer science from the University of California, Berkeley, where his research focused on improving the efficiency of deep neural networks (DNNs). His best-known work includes deep learning infrastructure such as FireCaffe and deep models such as SqueezeNet and SqueezeDet. SqueezeNet, a lightweight deep neural network, has been deployed on smartphones and other embedded devices. His advances in scalable training and efficient implementation of DNNs led to the founding of DeepScale, where he has been CEO since 2015. Iandola earned his BS in computer science from the University of Illinois at Urbana–Champaign.

Prof. Kurt Keutzer is a professor of electrical engineering and computer science at the University of California, Berkeley, and co-founder of DeepScale. Over the last decade his research group has focused on using parallel and distributed processing to accelerate machine learning, and more recently deep learning, in its various applications in computer vision, speech recognition, multimedia analytics, and computational finance. Since November 2015, this research has been orchestrated to build superior perceptual systems for autonomous driving, commercialized in DeepScale. Keutzer’s research group has achieved significant speedups in machine learning (SVMs), computer vision, speech recognition, multimedia analytics, computational finance, and, most recently, training and deployment of deep neural networks. As a researcher, Keutzer has published six books and more than 200 refereed articles. As an entrepreneur, Keutzer has been an investor in and advisor to thirteen startups and an advisor to seven more.

Dr. Joseph Gonzalez is an assistant professor at the University of California, Berkeley, and co-director of the UC Berkeley RISE Lab, where he studies the design of algorithms, abstractions, and systems for scalable machine learning. Gonzalez also teaches the advanced data science class at UC Berkeley to over 600 students a semester and is helping to develop the new data science major. Before joining UC Berkeley, Gonzalez co-founded Turi Inc. (formerly GraphLab) to develop AI tools for data scientists, and later sold Turi to Apple. Gonzalez holds a PhD in machine learning from Carnegie Mellon University.

SC20: Fundamentals of Deep Learning
8:30 am – 12:45 pm
Course Length: 4 hours
Course Level: Intermediate. Basic machine learning exposure and prior experience programming using a scripting language helpful.
Instructor: Raymond Ptucha, Rochester Institute of Technology
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20

Deep learning has been revolutionizing the machine learning community, winning numerous competitions in computer vision and pattern recognition. Success in this space spans many domains, including object detection, classification, speech recognition, natural language processing, action recognition, and scene understanding. In some cases, results are on par with, and even surpass, the abilities of humans. Activity in this space is pervasive, ranging from academic institutions to small startups to large corporations. This short course encompasses the two hottest deep learning fields, convolutional neural networks (CNNs) and recurrent neural networks (RNNs), and then gives attendees hands-on training on how to build custom models using popular open source deep learning frameworks. CNNs are end-to-end, learning low-level visual features and classifiers simultaneously in a supervised fashion, giving a substantial advantage over methods using independently solved features and classifiers. RNNs inject temporal feedback into neural networks. The best performing RNN framework, Long Short-Term Memory modules, is able to both remember long-term sequences and forget more recent events. This short course describes what deep networks are, how they evolved over the years, and how they differ from competing technologies. Examples demonstrate their widespread usage in imaging and their effectiveness in many applications.

There is an abundance of approaches to getting started with deep learning, ranging from writing C++ code to editing text with the use of popular frameworks. After attendees understand how these networks are able to learn complex systems, a hands-on portion provided by NVIDIA’s Deep Learning Institute demonstrates usage of popular open source utilities to build state-of-the-art models. An overview of popular network configurations and how to use them with frameworks is discussed. The session concludes with tips and techniques for creating and training deep neural networks to perform classification on imagery, assessing the performance of a trained network, and making modifications for improved performance.

Learning Outcomes
• Become familiar with deep learning concepts and applications.
• Understand how deep learning methods, specifically convolutional neural networks and recurrent neural networks, work.
• Gain hands-on experience building, testing, and improving the performance of deep networks using popular open source utilities.

Intended Audience
Engineers, scientists, students, and managers interested in acquiring a broad understanding of deep learning. Prior familiarity with the basics of machine learning and a scripting language is helpful.

Instructor
Raymond Ptucha is an assistant professor in computer engineering at the Rochester Institute of Technology specializing in machine learning, computer vision, robotics, and embedded control. Ptucha was a research scientist with Eastman Kodak Company for 20 years, where he worked on computational imaging algorithms and was awarded 26 US patents, with another 23 applications on file. He graduated from SUNY/Buffalo with a BS in computer science (1988) and a BS in electrical engineering (1989). He earned an MS in image science (2002) and a PhD in computer science from RIT (2013). He was awarded an NSF Graduate Research Fellowship in 2010, and his PhD research earned the 2014 Best RIT Doctoral Dissertation Award. Ptucha is a passionate supporter of STEM education and is an active member of his local IEEE chapter and FIRST robotics organizations.

SC21: Using Cognitive and Behavioral Sciences and the Arts in Artificial Intelligence Research and Design
8:30 am – 12:45 pm
Course Length: 4 hours
Course Level: Introductory/Intermediate
Instructor: Mónica López-González, La Petite Noiseuse Productions
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, members / non-members prices increase by $50, student price increases by $20
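The SC20 description explains that a CNN learns low-level visual features end-to-end; the core operation of each layer is a 2D convolution whose kernels are learned rather than hand-designed. As a flavor of what a single layer computes (a pure-Python sketch with a hand-made edge kernel; the course itself would use a deep learning framework):

```python
def conv2d_valid(image, kernel):
    """Single-channel 2D cross-correlation with 'valid' padding --
    the operation a CNN layer applies, but with *learned* kernels."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A dark-to-bright vertical edge and a horizontal-gradient kernel.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edge_kernel = [[-1, 1]]
print(conv2d_valid(img, edge_kernel))   # → [[0, 9, 0], [0, 9, 0], [0, 9, 0]]
```

Training replaces the hand-made kernel with many kernels whose weights are fit by gradient descent, which is the "learning features and classifier simultaneously" advantage the description mentions.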

electronicimaging.org #EI2019 133 EI 2019 Short Course Descriptions

A major goal of machine learning and autonomous systems research is to create human-like intelligent machines. Despite the current surge of sophisticated computational systems available, from natural language processors and pattern recognizers to surveillance drones and self-driving cars, machines are not human-like, most fundamentally, in regard to our capacity to integrate past with incoming multi-sensory information to creatively adapt to the ever-changing environment. To create an accurate human-like machine entails thoroughly understanding human processes and behavior. The complexity of the mind/brain and its cognitive processes necessitates that multidisciplinary expertise and lines of research be brought together and combined. This introductory-to-intermediate course presents a multidisciplinary perspective on method, data, and theory from the cognitive and behavioral sciences and the arts not yet used in artificial intelligence research and design. The goal of this course is to provide a theoretical framework from which to build highly efficient and integrated cognitive-behavioral-computational models to advance the field of artificial intelligence.

Learning Outcomes
• Identify the major, yet pressing, failures of contemporary autonomous intelligent systems.
• Understand the challenges of implementation of, and the mindset needed for, integrative, multidisciplinary research.
• Review the latest findings in the cognitive and behavioral sciences, particularly learning, attention, problem solving, decision-making, emotion perception, and spontaneous creative artistic thinking.
• Explain how relevant findings in the cognitive and behavioral sciences and the arts apply to the advancement of efficient and autonomous intelligent systems.
• Discuss various research solutions for improving current computational frameworks.

Intended Audience
Computer and imaging scientists, mathematicians, statisticians, engineers, program managers, system and software developers, and students in those fields interested in exploring the importance of using multidisciplinary concepts, questions, and methods within cognitive science, a fundamental and necessary field to build novel mathematical algorithms for computational systems.

Instructor
Mónica López-González, a polymath and visionary, is a multilingual cognitive scientist, educator, entrepreneur, multidisciplinary artist, public speaker, science communicator, theorist, and writer. She merges questions, methods, data, and theory from both the sciences and the arts to better understand and unleash our creative thinking and making capacities as human beings for the betterment of artificial intelligence. She is the co-founder and Chief Science & Art Officer of La Petite Noiseuse Productions, a unique company at the forefront of innovative science-art integration. López-González holds BAs in psychology and French, and an MA and PhD in cognitive science, all from Johns Hopkins University, and a Certificate of Art in photography from Maryland Institute College of Art. She held a postdoctoral fellowship in the Johns Hopkins School of Medicine. She is a committee member and session co-chair of HVEI.

Wednesday, January 16, 2019

SC22: Build Your Own VR Display: An Introduction to VR Display Systems for Hobbyists & Educators
8:30 am – 12:45 pm

Course Length: 4 hours
Course Level: Introductory
Instructors: Robert Konrad, Nitish Padmanaban, and Hayato Ikoma, Stanford University
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, member/non-member prices increase by $50; student price increases by $20

Wearable computing is widely anticipated to be the next computing platform for consumer electronics and beyond. In many wearable computing applications, most notably virtual and augmented reality (VR/AR), the primary interface between a wearable computer and a user is a near-eye display. A near-eye display is in turn only a small part of a much more complex system that delivers these emerging VR/AR experiences. Other key components of VR/AR systems include low-latency tracking of the user's head position and orientation, magnifying optics, sound synthesis, and content creation. It can be challenging to understand all of these technologies in detail, as only limited and fragmented educational material on the technical aspects of VR/AR exists today. This course serves as a comprehensive introduction to VR/AR technology for conference attendees. We will teach attendees how to build a head-mounted display (HMD) from scratch. Throughout the course, different components of the VR system are taught and implemented, including the graphics pipeline, stereo rendering, lens distortion with fragment shaders, head orientation tracking with inertial measurement units, positional tracking, spatial sound, and cinematic VR content creation. At the end, attendees will have built a VR display from scratch and implemented every part of it. All hardware components are low-cost and off-the-shelf; the list will be shared with attendees. For maximum accessibility, all software is implemented in WebGL and using the Arduino platform. Source code will be provided to conference attendees.

Learning Outcomes
• Understand and be able to implement the various systems comprising today's VR display systems with low-cost DIY components.
• Learn about DIY system hardware and software.
• Understand the basic computer graphics pipeline.
• Learn basic OpenGL, WebGL, and GLSL (for shader programming) and how to implement via JavaScript with Three.js to run in a browser.
• Understand stereoscopic perception and rendering.
• Evaluate head-mounted display optics and how to correct for lens distortion.
• Explore orientation tracking and how to perform sensor fusion on IMU data.
• Use positional tracking via a DIY system that reverse engineers the Vive Lighthouse.
• Learn the omnidirectional stereo (ODS) VR video format and current methods of capturing VR content.
• Explore spatial audio representations for 3D sound reproduction.
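One of the outcomes above is correcting HMD lens distortion; the course does this in GLSL fragment shaders, but the underlying radial-polynomial model can be sketched in Python. This is an illustrative sketch only: the coefficients k1 and k2 below are made-up values, not parameters of any particular headset.

```python
def distort(x, y, k1=0.22, k2=0.24):
    """Map an undistorted normalized coordinate (origin at the lens center)
    to its radially distorted position: r_d = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Points near the optical axis barely move; points at the edge are pushed
# outward. Rendering with this pre-warp lets the magnifier lens undo it.
cx, cy = distort(0.01, 0.0)  # near center: almost unchanged
ex, ey = distort(0.7, 0.7)   # near edge: displaced outward
```

In the actual shader this mapping is applied per fragment (typically by sampling the rendered frame at the inverse-distorted coordinate), but the polynomial is the same.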

Intended Audience
For this introductory-level course, some familiarity with programming, basic computer graphics, OpenGL, and the Arduino platform would be helpful. However, all required software and hardware concepts will be introduced in the course.
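Another building block this course covers is sensor fusion on IMU data for orientation tracking. As a single-axis toy sketch (the course implements full 3D fusion; all sensor values here are synthetic), a complementary filter blends the gyroscope's integrated rate — smooth but drifting — with the accelerometer's gravity-derived tilt — noisy but drift-free:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro angular rates (rad/s) with accelerometer tilt angles (rad).

    Each step trusts the gyro-propagated angle with weight alpha and the
    accelerometer reading with weight (1 - alpha), which bounds gyro drift.
    """
    angle = accel_angles[0]            # initialize from the drift-free sensor
    for rate, acc in zip(gyro_rates, accel_angles):
        gyro_estimate = angle + rate * dt          # integrate the gyro
        angle = alpha * gyro_estimate + (1.0 - alpha) * acc
    return angle

# Stationary device: true tilt is 0.1 rad; the gyro reports a constant
# 0.05 rad/s bias instead of zero, the accelerometer reports 0.1 rad.
n = 2000
fused = complementary_filter([0.05] * n, [0.1] * n)

# Pure gyro integration with the same bias would drift without bound:
raw_gyro = 0.1 + sum(0.05 * 0.01 for _ in range(n))  # ends up at 1.1 rad
```

With the filter, the constant gyro bias produces only a small bounded offset (the estimate settles near 0.1245 rad), whereas naive integration drifts to 1.1 rad over the same 20 seconds.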


Instructors
Robert Konrad is a third-year PhD candidate in the electrical engineering department at Stanford University, advised by Professor Gordon Wetzstein. His research interests lie at the intersection of computational displays and human physiology, with a specific focus on virtual and augmented reality systems. He has recently worked on relieving vergence-accommodation and visual-vestibular conflicts present in current VR and AR displays, as well as on computationally efficient cinematic VR capture systems. Konrad has been the head TA for the VR course taught at Stanford that Professor Wetzstein and he started in 2015. He received his BA from the ECE department at the University of Toronto (2014) and an MA from the EE department at Stanford University (2016).

Nitish Padmanaban is a second-year PhD student at Stanford EE. He works in the Stanford computational imaging lab on optical and computational techniques for virtual and augmented reality. In particular, he spent the last year working on building and evaluating displays to alleviate the vergence-accommodation conflict, and also looked into the role of vestibular system conflicts in causing motion sickness in VR. He graduated with a BS in EECS from UC Berkeley (2015), during which he focused primarily on signal processing.

Hayato Ikoma is a PhD student in the department of electrical engineering, Stanford University, working with Professor Gordon Wetzstein. His current research interest is in signal processing and optimization, particularly for image processing. He is also interested in virtual reality related technologies and served as a teaching assistant for a virtual reality class at Stanford University. Before coming to Stanford University, he worked as a research assistant developing new computational imaging techniques for an optical microscope and a space telescope at the MIT Media Lab and the Centre de Mathématiques et Leurs Applications at École Normale Supérieure de Cachan (CMLA, ENS Cachan) in France.

SC23: Introduction to TensorFlow
3:45 – 5:45 pm

Course Length: 2 hours
Course Level: Introductory
Instructor: Magnus Hyttsten, Google, Inc.
Fee*: Member: $185 / Non-member: $210 / Student: $65
*after December 18, 2019, member/non-member prices increase by $50; student price increases by $20

TensorFlow is an open-source software library for machine learning. It is used to define, train, and test machine learning models, which can later be served on a variety of platforms, from servers to mobile devices. In this workshop, you get an introduction to using TensorFlow. We will go through the basics, and by the end of the course, you will know how to build deep neural network models on your own.

Prerequisites: Bring your laptop with TensorFlow installed by following the instructions on http://tensorflow.org. Alternatively, we can provide Google Cloud instances of TensorFlow that you can use (no installation required). If you have a Google Cloud account, we can also share a TensorFlow cloud image that you can use.

Learning Outcomes
• Become familiar with the TensorFlow programming environment.
• Learn how to build models related to regression and classification.
• Understand what deep neural networks are and how to build them.
• Create a deep neural network that is able to classify digits based on raw pixel input.

Intended Audience
Scientists or developers who want to get started with TensorFlow.

Instructor
Magnus Hyttsten is a senior staff developer advocate for TensorFlow at Google. He focuses on all things TensorFlow, from making sure that the developer community is happy to helping develop the product. He has spoken at many major events, including Google I/O, AnDevCon, and Machine Learning meetups. Right now, he is fanatically and joyfully focusing on TensorFlow for Mobile as well as creating Reinforcement Learning models.

Thursday, January 17, 2019

SC24: Introduction to Probabilistic Models for Inference and Estimation
8:30 am – 12:45 pm

Course Length: 4 hours
Course Level: Intermediate
Instructor: Gaurav Sharma, University of Rochester
Fee*: Member: $290 / Non-member: $315 / Student: $95
*after December 18, 2019, member/non-member prices increase by $50; student price increases by $20

The course aims at providing attendees a foundation in inference and estimation using probabilistic models. Starting from the broad base of probabilistic inference and estimation, the course develops the treatment of specific techniques that underlie many current-day machine learning and inference algorithms. Topics covered include a review of concepts from probability and stochastic processes, IID and Markov processes, basics of inference and estimation, Maximum A Posteriori Probability (MAP) and Maximum Likelihood (ML), expectation maximization for ML estimation, hidden Markov models, and Markov and conditional random fields. The pedagogical approach is to illustrate the use of models via concrete examples: each model is introduced via a detailed toy example and then illustrated via one or two actual application examples.

Learning Outcomes
• Describe and intuitively explain fundamental probabilistic concepts such as independence, Bayes' rule, and stationarity.
• Explain the basis of Maximum A Posteriori Probability (MAP) and Maximum Likelihood (ML) detection and estimation rules.
• Describe how latent variables and sequential dependence underlie expectation maximization and hidden Markov models.
• Develop simple applications of probabilistic models for computer vision and image processing problems.
• Cite and explain application examples involving the use of probabilistic models in computer vision, machine learning, and image processing.

Intended Audience
Engineers, scientists, students, and managers interested in understanding how probabilistic models are used in inference and parameter estimation problems in today's machine learning and computer vision applications, and in applying such models to their own problems. Prior familiarity with the basics of probability and with matrix-vector operations is necessary for a thorough understanding, although attendees lacking this background will still be able to develop an intuitive high-level understanding.

Instructor
Gaurav Sharma has more than two decades of experience in the design and optimization of color imaging systems and algorithms that spans employment at the Xerox Innovation Group and his current position as a professor at the University of Rochester in the departments of electrical and computer engineering and computer science. Additionally, he has consulted for several companies on the development of new imaging systems and algorithms. He holds 51 issued patents and has authored more than 190 peer-reviewed publications. He is the editor of the Digital Color Imaging Handbook published by CRC Press and served as the Editor-in-Chief of the SPIE/IS&T Journal of Electronic Imaging from 2011 through 2015. Sharma is a fellow of IS&T, IEEE, and SPIE.
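As a toy version of the ML-versus-MAP contrast that SC24 covers (a sketch, not course material): estimating a coin's heads probability from a few flips. With a Beta(a, b) prior, the MAP estimate has the closed form (heads + a − 1) / (n + a + b − 2), while ML is simply heads / n; the prior parameters below are illustrative.

```python
def ml_estimate(heads, n):
    """Maximum likelihood estimate of a Bernoulli parameter."""
    return heads / n

def map_estimate(heads, n, a=2.0, b=2.0):
    """MAP estimate under a Beta(a, b) prior: the mode of the
    Beta(heads + a, n - heads + b) posterior."""
    return (heads + a - 1.0) / (n + a + b - 2.0)

# Three heads in three flips: ML concludes the coin *always* lands heads,
# while the Beta(2, 2) prior pulls the MAP estimate back toward 1/2.
theta_ml = ml_estimate(3, 3)    # 1.0
theta_map = map_estimate(3, 3)  # (3 + 1) / (3 + 2) = 0.8
```

With more data the two estimates converge; the prior matters most exactly when, as here, the sample is small.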

GENERAL INFORMATION

About IS&T
The Society for Imaging Science and Technology (IS&T)—the organizer of the Electronic Imaging Symposium—is an international non-profit dedicated to keeping members and other imaging professionals apprised of the latest developments in the field through conferences, educational programs, publications, and its website. IS&T encompasses all aspects of imaging, with particular emphasis on digital printing, electronic imaging, color science, sensors, virtual reality, image preservation, and hybrid imaging systems.

IS&T offers members:
• Free, downloadable access to more than 7,000 papers from IS&T conference proceedings via www.ingentaconnect.com/content/ist
• A complimentary online subscription to the Journal of Imaging Science & Technology or the Journal of Electronic Imaging
• Reduced rates on products found in the IS&T bookstore, including technical books, conference proceedings, and journal subscriptions
• Reduced registration fees at all IS&T-sponsored conferences—a value equal to the difference between member and nonmember rates alone—and short courses
• Access to the IS&T member directory
• An honors and awards program
• Networking opportunities through active participation in chapter activities and conference, program, and other committees

Contact IS&T for more information on these and other benefits.

Registration
Onsite Registration and Badge Pick-Up Hours
Sunday 13 January: 7:00 am to 8:00 pm
Monday 14 January: 7:00 am to 5:00 pm
Tuesday 15 January: 8:00 am to 5:00 pm
Wednesday 16 January: 8:00 am to 5:00 pm
Thursday 17 January: 8:30 am to 4:00 pm

Symposium Registration
Symposium registration includes: admission to all technical sessions; coffee breaks; the Symposium Reception, exhibition, poster and demonstration sessions; 3D theater; and support of free access to all the EI proceedings papers on the IS&T Digital Library. Separate registration fees are required for short courses.

Short Course Registration
Courses are priced separately. Course-only registration includes your selected course(s), course notes, coffee breaks, and admittance to the exhibition. Courses take place in various meeting rooms at the Hyatt Regency San Francisco Airport. Room assignments are noted on the course admission tickets and distributed with registration materials.

Author/Presenter Information
Speaker AV Prep Room: Conference Office, open during registration hours.
Each conference room has an LCD projector, screen, lapel microphone, and laser pointer. All presenters are encouraged to visit the Speaker AV Prep Room to confirm that their presentation and personal laptop are compatible with the audiovisual equipment supplied in the conference rooms. Speakers who requested special equipment prior to the request deadline are asked to confirm that their requested equipment is available. No shared laptops are provided.

Policies
Granting Attendee Registration and Admission
IS&T, or its officially designated event management, in its sole discretion, reserves the right to accept or decline an individual's registration for an event. Further, IS&T, or event management, reserves the right to prohibit entry or remove any individual, whether registered or not, be they attendees, exhibitors, representatives, or vendors, who in their sole opinion are not, or whose conduct is not, in keeping with the character and purpose of the event. Without limiting the foregoing, IS&T and event management reserve the right to remove or refuse entry to any attendee, exhibitor, representative, or vendor who has registered or gained access under false pretenses, provided false information, or for any other reason whatsoever that they deem is cause under the circumstances.

IS&T
7003 Kilworth Lane
Springfield, VA 22151
703/642-9090; 703/642-9094 fax
[email protected] / www.imaging.org

IS&T Code of Conduct/Anti-Harassment Policy
The Society for Imaging Science and Technology (IS&T; imaging.org) is dedicated to ensuring a harassment-free environment for everyone, regardless of gender, gender identity/expression, race/ethnicity, sexual orientation, disability, physical appearance, age, language spoken, national origin, and/or religion. As an international, professional organization with community members from across the globe, IS&T is committed to providing a respectful environment where discussions take place and ideas are shared without threat of belittlement, condescension, or harassment in any form. This applies to all interactions with the Society and its programs/events, whether in a formal conference session, in a social setting, or online.

Harassment includes offensive verbal comments related to gender, sexual orientation, etc., as well as deliberate intimidation; stalking; harassing photography, recording, or postings; sustained disruption of talks or other events; inappropriate physical contact; and unwelcome sexual attention. Please note that the use of sexual language and/or imagery is never appropriate, including within conference talks, online exchanges, or the awarding of prizes. Participants asked to stop any harassing behavior are expected to comply immediately.

Those participating in IS&T activities who violate these or IS&T's Publications Policy may be sanctioned or expelled from the conference and/or membership without a refund at the discretion of IS&T. If you are being harassed, notice that someone else is being harassed, or have any other concerns, please contact the IS&T Executive Director or e-mail incident.report@imaging.org immediately. Please note that all reports are kept confidential and only shared with those who "need to know"; retaliation in any form against anyone reporting an incident of harassment, independent of the outcome, will not be tolerated.

Identification
To verify registered participants and provide a measure of security, IS&T will ask attendees to present a government-issued photo ID at registration to collect registration materials. Individuals are not allowed to pick up badges for attendees other than themselves. Further, attendees may not have some other person participate in their place at any conference-related activity. Such other individuals will be required to register on their own behalf to participate.

Capture and Use of a Person's Image
By registering for an IS&T event, I grant full permission to IS&T to capture, store, use, and/or reproduce my image or likeness by any audio and/or visual recording technique (including electronic/digital photographs or videos), and create derivative works of these images and recordings in any IS&T media now known or later developed, for any legitimate IS&T marketing or promotional purpose. By registering for an IS&T event, I waive any right to inspect or approve the use of the images or recordings or of any written copy. I also waive any right to royalties or other compensation arising from or related to the use of the images, recordings, or materials. By registering, I release, defend, indemnify, and hold harmless IS&T from and against any claims, damages, or liability arising from or related to the use of the images, recordings, or materials, including but not limited to claims of defamation, invasion of privacy, rights of publicity or copyright infringement, or any misuse, distortion, blurring, alteration, optical illusion, or use in composite form that may occur or be produced in taking, processing, reduction, or production of the finished product, its publication, or distribution.

Payment Method
Registrants for paid elements of the event who do not provide a method of payment will not be able to complete their registration. Individuals with incomplete registrations will not be able to attend the conference until payment has been made. IS&T accepts VISA, MasterCard, American Express, Discover, Diner's Club, checks, and wire transfers. Onsite registrants can also pay with cash.

Audio, Video, Digital Recording Policy
For copyright reasons, recordings of any kind are prohibited without the permission of the presenter or fellow attendees. Attendees may not capture or use the materials presented in any meeting room without obtaining permission from the presenter.

Your registration signifies your agreement to be photographed or videotaped by IS&T in the course of normal business. Such photos and video may be used in IS&T marketing materials or other IS&T promotional items.

Laser Pointer Safety Information/Policy
IS&T supplies tested and safety-approved laser pointers for all conference meeting rooms. For safety reasons, IS&T requests that presenters use the provided laser pointers. Use of a personal laser pointer represents the user's acceptance of liability for use of a non-IS&T-supplied laser pointer. Laser pointers in Class II and IIIa (<5 mW) are eye safe if power output is correct, but output must be verified because manufacturer labeling may not match actual output. Misuse of any laser pointer can lead to eye damage.

Underage Persons on Exhibition Floor Policy
For safety and insurance reasons, no one under the age of 16 will be allowed in the exhibition area during move-in and move-out. During open exhibition hours, only children over the age of 12 accompanied by an adult will be allowed in the exhibition area.

Unauthorized Solicitation Policy
Unauthorized solicitation in the Exhibition Hall is prohibited. Any non-exhibiting manufacturer or supplier observed to be distributing information or soliciting business in the aisles, or in another company's booth, will be asked to leave immediately.

Unsecured Items Policy
Personal belongings should not be left unattended in meeting rooms or public areas. Unattended items are subject to removal by security. IS&T is not responsible for items left unattended.

Wireless Internet Service Policy
At IS&T events where wireless is included with your registration, IS&T provides wireless access for attendees during the conference and exhibition but does not guarantee full coverage in all locations, all of the time. Please be respectful of your time and usage so that all attendees are able to access the internet.

Excessive usage (e.g., streaming video, gaming, multiple devices) reduces bandwidth and increases cost for all attendees. No routers may be attached to the network. Properly secure your computer before accessing the public wireless network. Failure to do so may allow unauthorized access to your laptop as well as potentially introduce viruses to your computer and/or presentation. IS&T is not responsible for computer viruses or other computer damage.

Mobile Phones and Related Devices Policy
Mobile phones, tablets, laptops, pagers, and any similar electronic devices should be silenced during conference sessions. Please exit the conference room before answering or beginning a phone conversation.

Smoking
Smoking is not permitted at any event space. Most facilities also prohibit smoking in all or specific areas. Attendees should obey any signs preventing or authorizing smoking in specified locations.

Hold Harmless
Attendee agrees to release and hold harmless IS&T from any and all claims, demands, and causes of action arising out of or relating to your participation in the event you are registering to participate in and use of any associated facilities or hotels.

AUTHOR INDEX

A Aurini, Maliha Tasnim ERVR-178 Bondarev, Egor IPAS-266, Chiu, Yu-Hsiang IMSE-374 Aykac, Deniz IRIACV-461 IPAS-268, IPAS-281 Cho, Minki IQSP-300, IQSP-301, Azhar, Faisal 3DMP-008 Boom, Bas IPAS-281, IRIACV-455 IQSP-302, IQSP-303 HVEI-224 Abbey, Craig K. Azuma, Ronald SD&A-635 Boroumand, Mehdi MWSF-537 Choi, Byoung-Soo IMSE-355 Abbott, Kevin VDA-681 Bouman, Charles A. COIMG-127, Cholewo, Tomasz J. IPAS-256 IPAS-262 Abdelazim, Abdelrahman COIMG-128, COIMG-129 Choudhury, Anustup IQSP-307 Abe, Narishige IMAWM-403 Bourillot, Eric IMSE-371 Chuang, Jen-Hui IMSE-374, COLOR-086 B Abebe, Mekides A. Bours, Patrick MWSF-528 IRIACV-453 Adams, Guy 3DMP-008 Bovik, Alan C. IQSP-306, Chuang, Ying-Wei AVM-056 Adams, Veronika AVM-031 Bagchi, Judy MAAP-484 IQSP-316, IQSP-321 Cogranne, Remi MWSF-537 COIMG-139 Aeron, Shuchin Baird, Seth T. COIMG-140 Brown, Maxine D. SD&A-642, Cohrs, Thaden HVEI-219 SD&A-636 Agaian, Sos S. ERVR-175, Baker, Harlyn , SD&A-646 Comby, Frédéric COLOR-090 IPAS-222, IPAS-223, IPAS-273, SD&A-646 Brunel, Anthony COLOR-090 Cooper, Brian E. IPAS-256 IPAS-277, SD&A-647, Baker, Mary MAAP-480 Brunnstrom, Kjell HVEI-218 Cornett III, David COIMG-126, SD&A-654 Balamurugan, Adith COIMG-138 Bucher, Francois-Xavier PMII-586 COIMG-140 Agarwal, Chirag MWSF-543 Balke, Thilo COIMG-128 Buckstein, Daniel S. ERVR-182 Craver, Scott A. MWSF-541 Agarwal, Shruti MWSF-529 Bampis, Christos G. IQSP-316 Buerger, Martin IRIACV-462 Creutzburg, Reiner COLOR-097 PMII-588 IQSP-321 Ahn, Il Jun Bampis, Christos Bui, Thanh T. IRIACV-450 Crossno, Patricia VDA-685 SD&A-648 Aichem, Michael SD&A-641 Banchi, Yoshihiro Bullock, Tommy SD&A-639 Cruz, Luis A. IQSP-312 Ainsworth, Dick SD&A-642 Bang, Yousun IQSP-300, Burks, Stephen D. IQSP-319 Cruz-Neira, Carolina SD&A-645 Ait-Boudaoud, Djamel IPAS-262 IQSP-301, IQSP-302, Burnett, Thomas L. 
3DMP-005, Cwitkowitz, Frank AVM-040 Akeley, Kurt EISS-708 IQSP-303 SD&A-634 Akpinar, Ugur SD&A-631 Bao, Yue IPAS-251, IRIACV-456, Butora, Jan MWSF-534 HVEI-214 Akyazi, Pinar SD&A-637, SD&A-655 Buzzard, Gregery T. COIMG-128 Alam, Md. Ashraful ERVR-178, Bappy, Jawadul H. MWSF-532 D ERVR-183 HVEI-230 Author Index Barghout, Lauren Albright, Austin IRIACV-461 Barsky, Brian A. EISS-705 D’Alessandro, Brian IMAWM-409 Alexopoulos, Kenneth AVM-040 Bas, Patrick MWSF-542 C Dai, Ji SD&A-646 Alipour, Kamran ERVR-186 Battisti, Federica IPAS-261, Daly, Scott IQSP-307 Aljowni, Maha SD&A-633 IPAS-271 Campbell, Scott P. PMII-278, Dasu, Yamini VDA-676 Allebach, Jan P. COLOR-099, Bauer, Peter COLOR-087 PMII-584 Davis, Ezra MAAP-482 COLOR-101, COLOR-102, Baynes, Anna A. VDA-676 Campos, Joaquin COLOR-076 Davis, Tony SD&A-658 COLOR-103, IMAWM-400, Becerra, Helard IQSP-324 Carli, Marco IPAS-261, IPAS-271 Dawe, Gregory SD&A-646 IMAWM-402, IMAWM-412, Bell, Tyler 3DMP-007 Carmichael, Zachariah AVM-040 Debayle, Johan IPAS-254 IMAWM-413, IMAWM-415, Bensaied, Rania IQSP-311 Carmona-Galán, Ricardo Debevec, Paul EISS-713 IMAWM-419, IQSP-300, Bernal, Edgar A. IRIACV-454 IMSE-365 Dechenaud, Marcelline IQSP-301, IQSP-302, Bernier, Bruno ERVR-184 Cascone, Marcos H. IMAWM-406 3DMP-010 IQSP-303, IQSP-327, Bertozzi, Andrea IPAS-264 Castañón, David COIMG-147 Deegan, Brian M. AVM-044 MWSF-526, MWSF-544 Beyers, Ronald COIMG-142 Chae, Junghoon VDA-680 DeFanti, Tom SD&A-642, SD&A-646 Allison, Robert SD&A-627 Bhanushali, Darshan Ramesh Chan, Chin-Cheng PMII-577 Deforges, Olivier IQSP-322 Almansouri, Hani COIMG-129 IRIACV-451 Chan, Hoover HVEI-238 Deka, Reepjyoti MWSF-546 Aloraini, Mohammed MWSF-531, Bhat, Radhesh AVM-033, Chan, Man Chi SD&A-632, Delaney, John COLOR-077 MWSF-543 AVM-045, IMSE-361 SD&A-656 Delfin, Leandro M. IPAS-255 Altarawneh, Enshirah MWSF-541 Bhuiyan, Ashikuzzaman ERVR-183 Chandra, Sunil AVM-042 Delp, Edward J. 
IMAWM-405 Altinisik, Enes MWSF-545 Bian, Yuan COIMG-142 Chandrasekaran, Shivkumar DeMattia, Daniel MWSF-525 Ambrose, G. Alex VDA-681 Bichal, Abhishek 3DMP-005 MWSF-532 Denes, Gyorgy HVEI-212 Amo-Fempong, Isaac ERVR-181 Bilal, Muhammad VDA-679 Chang, Chi -. IRIACV-453 Deng, Zhongliang IMAWM-421 Andersson, Mattias HVEI-218 Bilbo, Rachel J. IQSP-327 Chang, Pao-Chi AVM-056 Denny, Patrick AVM-044, Aoyama, Satoshi IMSE-356, Blanc, Pierre COLOR-083 Chang, Seunghyuk IMSE-355 AVM-048, PMII-052 IMSE-367 Blanchard, Romain COIMG-139 Chang, Yong-Jun IPAS-257 De Simone, Francesca HVEI-216 Arabi, Parham IMAWM-410 Blasinski, Henryk AVM-053 Chapman, Glenn H. IMSE-359 Devadithya, Sandamali Arganda-Carreras, Ignacio Bleier, Michael PMII-353 Charrier, Christophe MWSF-528 COIMG-147 IPAS-272 Bloomfield, Valerie J. SD&A-630 Chaumont, Marc COLOR-090 Devreaux, Phillip ERVR-176 Arru, Giuliano IPAS-261 Blouke, Morley M. IMSE-369 Cheikh, Faouzi Alaya IRIACV-466 De With, Peter H. AVM-034, Artmann, Uwe AVM-030, Bodegom, Erik IMSE-369 Chen, Chin-Ning COLOR-103 IPAS-266, IPAS-268, IPAS-281, IQSP-320 Boher, Pierre M. COLOR-083, Chen, Homer H. PMII-577 IRIACV-455, IRIACV-458, Asari, Vijayan K. IMAWM-414 MAAP-486 Chen, Qiaozhu IMAWM-145 IRIACV-465 Ash, George HVEI-212 Bokaris, Panagiotis-Alexandros Chen, Qiulin IQSP-301 Díaz, César HVEI-217 Aspiras, Theus H. IMAWM-414 MAAP-483 Chen, Wenhao MWSF-536 Diehl, Alexandra SD&A-641 Atra, Ahmad SD&A-642 Bolas, Mark SD&A-653 Chen, Yong-Sheng IMSE-374 Dietz, Henry G. COIMG-132, Atre, Siddharth ERVR-188 Bolme, David S. COIMG-126, Chepuri, Narendra Kumar IMSE-361 COIMG-146, PMII-585, Auer, Florian IRIACV-462 COIMG-140 Ching, Daniel J. COIMG-131 PMII-590


Dima, Elijs HVEI-218 Feiten, Bernhard IQSP-314 Gokan, Alexander IMAWM-415, Hebert, Mathieu MAAP-203, Diniz, Rafael 3DMP-006 Ferrell, Regina IRIACV-461 MWSF-544 MAAP-481 Do, Minh N. COIMG-133 Ferrero, Alejandro COLOR-076 Golwala, Gautam COLOR-099, Hedstrom, Timothy ERVR-188 Dodgson, Neil A. HVEI-208 Feyer, Stefan ERVR-177 IMAWM-412, IMAWM-413, Hegde, Jay HVEI-225 Doe, Josh M. IQSP-319 Fink, Daniel ERVR-177 IMAWM-415, IMAWM-419 Heinen, Stephen J. HVEI-205 Dokmanic, Ivan COIMG-134 Fitzhugh, Andrew MAAP-480 Gomes, Otavio B. IMAWM-406 Hendrickson, Benjamin IMSE-369 Donati, Laurène COIMG-133 Flenner, Arjuna MWSF-530, Goodall, John VDA-680 Hendriks, Ella COLOR-077 Dong, Jing MWSF-539 MWSF-532 Göring, Steve HVEI-215, IQSP-314 Heng, Franklin COIMG-138 Donohue, Kevin D. COIMG-132, Foster, Benjamin COIMG-127 Gotchev, Atanas P. IPAS-279, Heo, Jingu AVM-055 COIMG-146 Fouchez, Dominique COLOR-090 SD&A-631 Heymsfield, Steven 3DMP-010 Dornaika, Fadi COIMG-141, Fournel, Thierry MAAP-203 Gouaillard, Alexandre HVEI-213 Hirai, Keita IPAS-267, MAAP-204, IPAS-272 Fowler, Boyd A. PMII-580 Goulden, Maggie VDA-681 MAAP-485 Dorsey, Julie MAAP-482 Frank , Tal COLOR-102 Gouton, Pierre IMSE-371 Hirakawa, Keigo PMII-582 Douglass, Amanda SD&A-638 Frants, Vladimir IPAS-277 Grazaitis, Peter J. ERVR-176, Hirao, Yutaro ERVR-185 Duan, Xiaojing VDA-681 Frayne, Shawn SD&A-652 ERVR-181 Hirsch, Matthew EISS-710 Dubbelman, Gijs AVM-034 Fremerey, Stephan HVEI-219, Green, Phil COLOR-091, Ho, Yo-Sung IPAS-257, IPAS-270, Dugelay, Jean-Luc MWSF-546 IQSP-220 COLOR-095 IRIACV-457 Dumic, Emil IQSP-312 Fridrich, Jessica MWSF-534, Gronda, Eric VDA-681 Hofmeyer, Frank HVEI-219 Dunkel, Friedrich A. 3DMP-010 MWSF-537, MWSF-542 Groot, Herman G. IPAS-268 Hogue, Andrew ERVR-182 Duong, Tri VDA-684 Fuchs, Christian AVM-032 Grover, Ginni SD&A-635 Hong, Xiaopeng IMAWM-401 Durell, Christopher N. 
IMSE-370 Fujii, Toshiaki IQSP-310, SD&A-625 Grynovicki, Jock ERVR-176, ERVR-181 Hoover, Melynda ERVR-179 Fujimoto, Hiroyuki AVM-039 Gu, Junli AVM-037 Horgan, Jonathan AVM-044 Funatsu, Ryohei IMSE-367 Guan, Haike IRIACV-459 Horiuchi, Takahiko COLOR-089, E Guan, Yong MWSF-535, COLOR-104, IPAS-267, MAAP- MWSF-536 204, MAAP-485 Guervós, Esther HVEI-217 Hsieh, Yi-Yu IMSE-374 Easton, Roger L. COLOR-100 G Guo, Alexander SD&A-642 Hsu, Chang-Tao IRIACV-453 Eberhart, Paul S. PMII-590 Guo, Song IMAWM-403 Hsu, Stephen VDA-678 Ebrahimi, Touradj HVEI-214, Gäde, Max AVM-030 Guo, Yandong IMAWM-420 Hu, Litao IMAWM-412 IQSP-310 Galdi, Chiara MWSF-546 Guo, Zhenhua IMAWM-145 Hu, Zhenhua COLOR-087 Eckstein, Miguel P. HVEI-224 Gallo, Orazio PMII-576 Gupta, Praful IQSP-316 Hua, Hong EISS-701, EISS-712 Edlinger, Raimund IRIACV-463 Gama, Filipe IPAS-279 Gupta, Sidhant IRIACV-450 Huang, Hekun EISS-701 Edwards, Geoffrey ERVR-184 Ganguly, Amlan IRIACV-451 Gürsoy, Doğa COIMG-131 Huang, Rachel IQSP-220 Egiazarian, Karen O. IPAS-250, Author Index Gao, Liang PMII-350 Gutiérrez, Jesús HVEI-216 Huang, Shuai COIMG-134 IPAS-258, IPAS-260, IPAS-263 Gao, Pan IQSP-323 Huang, Te-Yuan IQSP-321 Eid, Ahmed H. IPAS-256 Garcia, Narciso HVEI-217 Huang, Wan-Eih IQSP-302 Eigen, David IMAWM-411 Garcia Freitas, Pedro IQSP-304 Hubel, Paul M. PMII-586 Eira, Luisa IQSP-304 Gaska, James SD&A-638 H Hughes, Ciaran AVM-042, AVM-044 Ekanadham, Chaitanya IQSP-321 Gaudreau, Jean-Etienne SD&A-651 Hunt, Warren VDA-685 Eka Putranto, Evan H. IPAS-274 Gavai, Gaurang R. IMSE-360 Haberland, Matthew IPAS-264 Huraibat, Khalil COLOR-076 Eldada, Louay AVM-051 Gavrilescu, Maria SD&A-638 Haddad, Michaël MAAP-483 Hurych, David AVM-048 Elder, James H. HVEI-200 Geese, Marc AVM-030 Hadley, Steven SD&A-638 Hussain, Syed A. IMSE-362 El Helou, Majed COIMG-135 Gelautz, Margrit SD&A-657 Hadzi, Adnan IMAWM-417 Ellens, Marc S. 
MAAP-487 Geldof, Muriel COLOR-077 Haefner, David IQSP-319 El Traboulsi, Youssof COIMG-141 George, Sony COLOR-100 Hahn, Steven VDA-680 El-Yamany, Noha COLOR-080, I Georgiev, Mihail IPAS-279 Haindl, Sandra AVM-041 PMII-578 Gerardin, Morgane MAAP-203 Häkkinen, Jukka SD&A-650 Erdei, Gábor HVEI-207 Iacomussi, Paola AVM-029 Ghahremani, Amir IPAS-266 Hallman, Lauri IMSE-357 Eschbach, Reiner COLOR-100 Iandola, Forrest AVM-047 Gharbharan, Michael ERVR-182 Hamidouche, Wassim IQSP-322 Esteva, Andre IMAWM-408 Ichihara, Yasuyo G. COLOR-092 Ghelmansaraei, Abtin PMII-587 Hamza, Ahmed IPAS-262 Evans, Karla K. HVEI-226 Iehl, Jean-Claude MAAP-203 Ghoniem, Mohammad VDA-686 Han, Jaeduk IPAS-275 Ieremeiev, Oleg IPAS-260 Ghosh, Dipanjan AVM-033, Handa, Yukitaro IMSE-356 Ikeda, Tatsunosuke ERVR-180 IMSE-361 Hardeberg, Jon Yngve Ikeda, Yoshitaka IQSP-308 F Gibbs, Peter SD&A-638 COLOR-086, COLOR-100, Ishmam, Raiyan COIMG-139 Gigilashvili, Davit MAAP-202 MAAP-202, MAAP-475 Islam, Shitab Mushfiq-ul ERVR-178 Faiyaz, Intisar Hasnain ERVR-183 Gilton, Davis IPAS-253 Haris, Uroob SD&A-633 Ispasoiu, Radu AVM-046 Farias, Mylène C. 3DMP-006, Gittinger, Jaxon VDA-685 Harris, Todd COLOR-087 IQSP-304, IQSP-324 Glaholt, Mackenzie SD&A-638 Hasan, Mehedi ERVR-183 Farid, Hany MWSF-529 Glasstone, Benjamin AVM-040 Hasche, Eberhard COLOR-097 Farrell, Joyce E. PMII-351 Glover, Jack L. IQSP-316 Hassan, Firas PMII-589 J Farrugia, Jean-Philippe MAAP-203 Goddard, Jim IRIACV-461 Hauenstein, Jacob IQSP-325 Fattal, David EISS-707 Godoy, Myrna C. HVEI-226 Haygood, Tamara HVEI-226 Janssens, Koen COLOR-077 Favalora, Gregg E. SD&A-630 Goebel, Michael MWSF-530 He, Jiangpeng MAAP-484 Jenkin, Robin B. AVM-026


Jeong, Mira 3DMP-004 Ker, Andrew D. MWSF-540 Lee, Byung-Uk COLOR-093 Lundekvam, Susann COLOR-091 Jessome, Renee IQSP-300, Kes, David VDA-678 Lee, Chulhee COIMG-148 Luo, Ming Ronnier COLOR-079, IQSP-301, IQSP-302, Kim, Hansol IPAS-252 Lee, Deokwoo 3DMP-004 IQSP-309, IQSP-326 IQSP-303 Kim, Jonghyun IPAS-275 Lee, Haegeun IPAS-275 Jiang, Chufan 3DMP-001 Kim, Kyeongman COLOR-088 Lee, Jimin IMSE-355 Jiao, Jichao IMAWM-421 Kim, SangJun 3DMP-004 Lee, Joonwoo COIMG-148 M Jiao, Yuzhong SD&A-632, Kim, Sangwon 3DMP-004 Lee, Minji COLOR-093 SD&A-656 Kin-Cleaves, Christy MWSF-540 Lee, Perry IMAWM-412, Ma, Zheng HVEI-205 Johanson, Mathias HVEI-218 Kirchner, Eric COLOR-076, IMAWM-413, IMAWM-415, MacKenzie, Kevin J. EISS-703 Johnson, Christi R. COIMG-140 COLOR-077 IMAWM-419 Macmillan, Timothy PMII-278 Jörissen, Sven PMII-353 Kiss, Jocelyne A. ERVR-184 Lee, Sanghun COIMG-148 Maddox, Justin D. MWSF-538 Joseph, Dileepan IMSE-362, Kitajima, Toshiaki IMSE-367 Lee, Sang-Jin IMSE-355 Maggard, Eric IQSP-300, IMSE-363, IMSE-372 Kiyotomo, Takuma COLOR-104 Lee, Sangyoon IPAS-252 IQSP-301, IQSP-302, Joshi, Alark VDA-678 Klein, Karsten ERVR-177, Lee, Tammy PMII-588 IQSP-303 Joy, Sheakh Fahim Ahmmed SD&A-641 Lemesle, Augustin HVEI-213 Majee, Soumendu COIMG-128 ERVR-183 Klein, Stanley A. HVEI-228 Leñero-Bardallo, Juan A. IMSE-365 Malesa, Marcin IMSE-375 Ju, Alexandra L. MAAP-479, Klinkhammer, Daniel ERVR-177 Leroux, Thierry COLOR-083, Malherbe, Emmanuel MAAP-483 MAAP-480 Klomp, Sander R. IRIACV-455, MAAP-486 Manalody, Tom Korah IMSE-361 Jumabayeva, Altyngul COLOR-101 IRIACV-458 Letter, Matthew VDA-685 Manjunath, B.S. 
MWSF-530, Jun, Jiwon MAAP-480 Ko, ByoungChul 3DMP-004 Levi, Ofer PMII-352 MWSF-532 Jun, Sung-Wook IMSE-367 Kobayashi, Asao SD&A-638 Li, Bo SD&A-633 Mantiuk, Rafal HVEI-212 Kocher, Charles VDA-684 Li, Fei IMAWM-403 Marion, Alexis HVEI-213 Koeckhoven, Pim COLOR-076 Li, Hao IPAS-264 Martin, Shawn VDA-685 Koifman, Vladimir IMSE-050 Li, Jing IMSE-372 Martinez-Verdu, Francisco K Kokubo, Koya IPAS-251 Li, Qin IMAWM-145 COLOR-076 Komazawa, Akito IMSE-356 Li, Qun IRIACV-454 IMSE-356 Matanga, Jacques IMSE-371 Author Index Kondo, Keita COLOR-099 Kagawa, Keiichiro IMSE-354, Li, Zhi , IMAWM-415, Matsubara, Tomoki IMSE-367 Kong, Yitian IPAS-266 IMAWM-419, IQSP-317, IMSE-358, IMSE-366 Kontsevich, Leonid HVEI-232 Matsui, Akira IQSP-315 SD&A-628 IQSP-321, IQSP-327, Kakeya, Hideki Kopra, Andy MAAP-075 Matsunami, Tomoaki IMAWM-403 MWSF-527 MWSF-544 IMAWM-409 Kamath, Ajith Koren, Henry IQSP-313 Matts, Paul Lian, Trisha AVM-053, EISS-703 COLOR-085 Kamath, Shreyas IPAS-273 Koren, Israel IMSE-359 McCann, John J. Kando, Daichi IPAS-259 Liberadzki, Pawel 3DMP-002 McCourt, Mark E. HVEI-231 Koren, Norman IQSP-305 Liberini, Simone COLOR-084 Kane, Christopher VDA-684 Koren, Zahava IMSE-359 Mc Farland, Christopher AVM-027 Ligterink, Frank COLOR-077 Kane, Paul , AVM-046 Korshak, Oleksandr ERVR-188 SD&A-646 Likova, Lora T. HVEI-237 VDA-684 Kang, Byongmin AVM-055 Kostamovaara, Juha IMSE-357 McGuigan, Michael Kang, Dongwoo AVM-055 Lin, Juan VDA-683 Meedendorp, Teio COLOR-077 Kosugi, Tomohiko IMSE-367 Lin, Li MWSF-535, MWSF-536 Kang, Dukjin COLOR-082 Krizek, Pavel AVM-048 Megens, Luc COLOR-077 Kang, Ki-Min COLOR-081, Lin, Qian IMAWM-400, Megeto, Guilherme A. Kuester, Falko SD&A-646 IMAWM-402, IMAWM-406 COLOR-088 Kuhl, Michael IRIACV-451 IMAWM-406 IPAS-252 Lin, Wei -. IRIACV-453 Kang, Moon Gi , Kunze, Jörg COLOR-098 Melancon, Guy VDA-686 IPAS-275 Lippert, Alexander R. SD&A-633 Mendiburu, Bernard SD&A-649 Kuo, Chien-Hao AVM-056 List, Peter IQSP-314 Kano, Masanori SD&A-626 Kurland, Eric J. 
SD&A-644 Meneses, Klinsmann J. IMSE-359 Liu, Chu-heng IQSP-317 VDA-677 Karaschewski, Oliver COLOR-097 Kusakabe, Yuichi IQSP-308 Menon, Sadan Suneesh COIMG-125 Liu, Jiayin COLOR-102 Karl, W. Clem Kwasinski, Andres IRIACV-451 Meschenmoser, Philipp SD&A-641 IRIACV-461 Liu, Shuangting AVM-053 SD&A-646 Karnowski, Thomas Kwon, Kiwoon COIMG-143 Meyer, Dominique E. Kaszowska, Aleksandra ERVR-175, Liu, Tongyang IMAWM-400 Miceli, Raffaele VDA-684 SD&A-647, SD&A-654 Liu, Xin IPAS-269 Michals, Adam D. COLOR-101 Katkovnik, Vladimir IPAS-258 Liu, Xinwei MWSF-528 Michiba, Tomoya IMSE-356 Kato, Naoyah IQSP-315 L Liu, Yi IQSP-322 Michonski, Jakub 3DMP-002 Katsavounidis, Ioannis IQSP-321 Liu, Zhenyi AVM-053 Miller, Eric COIMG-139 Kawahito, Shoji IMSE-354, IMSE- Lago, Miguel A. HVEI-224 Lloyd, Charles SD&A-638, SD&A- Miller, Jack ERVR-179 356, IMSE-358, IMSE-366, Lahoud, Fayez COIMG-135, 639 Miller, Patrick VDA-681 IMSE-367 ERVR-187 Lo, Eric SD&A-646 Min, Byungseok COLOR-082 Kawai, Takashi ERVR-185, Lam, Edmund Y. IRIACV-450 Lopez-Alvarez, Miguel A. MAAP- Miroshnichenko, Oleksandr SD&A-648, SD&A-650 Lanaro, Matteo P. COLOR-084 479 IPAS-263 Kawakita, Masahiro SD&A-626 Langehennig, James COLOR-082 López-González, Mónica AVM-054 Mishina, Tomoyuki SD&A-626 Kawamura, Harumi COLOR-094 Lans, Ivo V. COLOR-076, Lorch, Benedikt MWSF-529 Mitrea, Mihai IQSP-311 Kawashima, Kenji IMSE-354 COLOR-077 Luan, Zhen COLOR-103 Miura, Kenjiro T. IPAS-274 Keim, Daniel SD&A-641 Laumond, Antoine J. VDA-686 Lui, King S. IRIACV-450 Miyagi, Ryota IMSE-354 Keller, Dominik HVEI-215 LeBlanc, John J. SD&A-630 Lukac, Rastislav IMSE-373 Moebius, Michael G. SD&A-630 Kemp, Craig A. COIMG-128 Le Callet, Patrick HVEI-216, Lukin, Vladimir V. IPAS-250, Mohammed, Tajuddin Manhar Kennedy, Samantha 3DMP-010 IQSP-311 IPAS-260, IPAS-263 MWSF-532


Mok, Mark P. C. SD&A-632, Oh, Hyunsoo COLOR-081, Pizlo, Zygmunt HVEI-201 Rudd, Michael E. HVEI-210 SD&A-656 COLOR-088 Polans, James EISS-700 Ruiz, Jaime Jesús HVEI-217 Montez, Diane COIMG-140 Oh, Paul IPAS-252 Pollard, Stephen 3DMP-008 Ruokamo, Henna IMSE-357 Moon, Sungwhan COIMG-143 Ohta, Fumiya SD&A-650 Ponomarenko, Nikolay IPAS-258, Rushmeier, Holly MAAP-482 Morales, Ernesto ERVR-184 Oishi, Norigi SD&A-637 IPAS-260, IPAS-263 Mori, Shotaro SD&A-637 Okaichi, Naoto SD&A-626 Porral, Philippe MAAP-486 Moujahid, Abdelmalik IPAS-272 Okura, Yushi IMSE-356, IMSE-358 Pourian, Niloufar IMSE-373 S Mueller, Tobias IRIACV-462 Okutomi, Masatoshi IPAS-276 Preece, Bradley L. IQSP-319 Muller, Thomas MAAP-486 Ongie, Greg IPAS-253 Przybyla, Craig P. COIMG-130 COLOR-078 Mulligan, Jeffrey B. HVEI-206, Onural, Levent SD&A-629 Ptucha, Ray AVM-040, IRIACV-451, Safdar, Muhammad , COLOR-095 HVEI-229 Oomura, Takuya SD&A-626 PMII-575 Mun, Ji-Hun IPAS-270 Ortega, Brandon VDA-684 Pulli, Kari EISS-709, PMII-583 Sagebiel, Tara L. HVEI-226 Muñoz, Juan Alberto HVEI-217 Osinski, Piotr IMSE-375 Purwar, Ankur IMAWM-407 Sahin, Erdem SD&A-631 COIMG-138 Munoz Arango, Juan S. SD&A-645 Otani, Ken’ichi COLOR-089 Sahiner, Arda AVM-034 Murakami, Yuta IMSE-354 Ozcinar, Cagri IQSP-323 Sanberg, Willem P. Sandin, Daniel SD&A-642, Q SD&A-646 N Santos, Samuel IQSP-304 P Qiao, Yiling IPAS-264 Santos-Villalobos, Hector J. Quiroga, Julian HVEI-209 COIMG-126, COIMG-129, IMSE-366 Nabeshima, Takuya Pachatz, Nicolas IQSP-314 Qureshi, Tahir HVEI-218 COIMG-140 Nada, Hajime IMAWM-403 Palacio, Diana M. HVEI-226 Sanz, Isabel MAAP-478 Nagase, Masanori IMSE-367 Palani, Harish MWSF-527 Sapaico, Luis R. 
MAAP-477 Nagpal, Raghav AVM-033 Pan, Xunyu IMAWM-418 Sari, Oussama MAAP-481 Nagy, Mate ERVR-177 Panetta, Karen ERVR-175, R Sarimurat, Salim MWSF-545 Nakamura, Takumi MAAP-204 IPAS-273, SD&A-647, Sasaki, Hisayuki SD&A-626 Nam, Dongkyung AVM-055 SD&A-654 Raake, Alexander HVEI-215, Sato, Yoshihiro IPAS-251 Nascimento, Maikon IMSE-363, Paone, Jeffrey IRIACV-461 HVEI-219, IQSP-220, IQSP- Sattler, Florian AVM-049 IMSE-372 Park, In-ho COLOR-081, 314 Sawaya, Wadih MWSF-542 Nataraj, Lakshmanan MWSF-530, COLOR-088 Ragavan, Vijaya AVM-033 Schonfeld, Dan MWSF-531, MWSF-532 Park, Jae Sung COLOR-082 Rajeev, Srijith IPAS-273 MWSF-543 Nayola, Grace COIMG-140 Park, Jae-yeon PMII-588 Ramachandra Rao, Rakesh Rao Schramm, Morgan MAAP-479 Ness Proano Gaibor, Art Park, Jae Young PMII-586 IQSP-314 Schreiber, Falk ERVR-177,

COLOR-077 Park, JongHo IMSE-355 Ramamoorthi, Ravi EISS-706 SD&A-641 Author Index Nestares, Oscar SD&A-635 Park, Jun-Yong IRIACV-457 Ramesh, Palghat IQSP-317 Schulze, Jürgen P. ERVR-186, Neupane, Ashish COIMG-139 Park, Yongsup PMII-588 Rangam, Katsuri PMII-278 ERVR-188 Newman, Jennifer MWSF-535, Parrish, Chadwick COIMG-132, Reeves, Stanley COIMG-142 Schwiegerling, Jim EISS-702 MWSF-536 COIMG-146 Reinders, Stephanie MWSF-535, Scribner, David ERVR-176, Newman, Timothy IQSP-325 Partinen, Ari PMII-586 MWSF-536 ERVR-181 Ngahara, Hajime IMSE-354, Pasquet, Jérôme COLOR-090 Reiners, Dirk SD&A-645 Sebastian, Clint IPAS-281 IMSE-366 Pasquet, Johanna COLOR-090 Reischl, Daniel AVM-041 Seitner, Florian SD&A-657 Nguyen, Truong SD&A-646 Paturu, Chaitanya Krishna Reiterer, Harald ERVR-177 Semenishchev, Evgeny A. IPAS-222, Niel, Kurt S. IRIACV-464 AVM-033 Relyea, Robert AVM-040, IPAS-223 Nikkanen, Jarno PMII-578 Paulter, Nicholas G. IQSP-316 IRIACV-451 Sencar, Husrev Taha MWSF-545 Niño, Juan R. ERVR-184 Paulus, Dietrich W. AVM-031, Restrepo, Alfredo HVEI-209 Seo, Jaejun COIMG-148 Nishikawa, Keishi IRIACV-452 AVM-032, AVM-049 Reta, Jorge IPAS-272 Seo, Sungwon COLOR-082 Norcia, Anthony HVEI-235 Pedersen, Marius MAAP-202, Rizzi, Alessandro COLOR-084 Sespede, Braulio SD&A-657 Norman, Kendal G. IMAWM-419, MWSF-528 Robben, Matt MWSF-525 Sha, Lingdao MWSF-531 IQSP-327 Pein, Brandt COIMG-139 Robitza, Werner IQSP-314 Shah, Jugal MWSF-541 Nüchter, Andreas PMII-353 Peizerat, Arnaud IMSE-368 Roca, Jordi MAAP-479 Shahpaski, Marjan MAAP-477 Nussbaum, Peter COLOR-095 Perales, Esther COLOR-076 Rodricks, Brian PMII-280 Shankar, Karthick MWSF-544 Nyumura, Masaya IRIACV-456 Perez, Fábio IMAWM-406 Rodriguez, Nancy COLOR-090 Sharifzadeh, Mehdi MWSF-531, Perez, Pablo HVEI-217 Rodríguez-Vázquez, Angel MWSF-543 Perrot, Matthieu MAAP-483 IMSE-365 Sharma, Sharad ERVR-176, O Perry, Stuart W. IQSP-312 Rogowitz, Bernice E. HVEI-233 ERVR-181 Peterzell, David H. 
HVEI-236 Ross, Miriam SD&A-640 Shashua, Amnon EISS-711 O’Brien, Cecilia SD&A-633 Pfluegl, Christian COIMG-139 Rossi, Giuseppe AVM-029 Shen, Minghao AVM-053 O’Connor, Sean P. SD&A-630 Phillips, Jonathan B. IQSP-318 Rossi, Maurizio COLOR-084 Shepherd, John 3DMP-010 O’Keefe, Eleanor SD&A-638, Pichler, Kurt AVM-041 Roux, Ludovic HVEI-213 Shepherd, Matthew MAAP-479 SD&A-639 Pilz, Kathrin COLOR-077 Roy-Chowdhury, Amit K. MWSF-532 Sherman, Sam COIMG-130 Ochoa, Sherezada ERVR-184 Pinaud, Bruno VDA-686 R R, Navinprashath AVM-033, Shi, Chang IPAS-264 Ochoa Dominguez, Humberto de Pinheiro, Antonio M. IQSP-312 AVM-045, IMSE-361 Shi, Weiqi MAAP-482 Jesus IPAS-255 Pinto Elias, Raul IPAS-255 Rubel, Oleksii IPAS-250 Shin, Dong-won IRIACV-457


Shin, Jaemin COLOR-088 Thomas, Jean-Baptiste MAAP-202 Wandell, Brian A. AVM-053, Yang, Hongbo IMAWM-145 Shin, Jang-Kyoo IMSE-355 Thomas, Rohan IMSE-359 EISS-703 Yang, Qingyu IMAWM-413 Shin, Usuki IPAS-274 Tian, Dalin IQSP-309, IQSP-326 Wang, Chaoli VDA-681 Yang, Yurou VDA-681 Shinnishi, Makoto IRIACV-459 Timár-Fülep, Csilla HVEI-207 Wang, Chenjian IPAS-264 Yasue, Toshio IMSE-367 Shoda, Elizabeth SD&A-638 Tominaga, Shoji MAAP-485 Wang, Haoyu SD&A-642, Yasutomi, Keita IMSE-354, Shreve, Matthew IMAWM-407 Tomioka, Kohei IMSE-367 SD&A-646 IMSE-356, IMSE-358, Sidaty, Naty IQSP-322 Tomioka, Satoshi IPAS-259 Wang, Wei MWSF-539 IMSE-366 Sielicki, Milosz VDA-685 Tran, Andrew VDA-676 Wang, Wiley H. IMAWM-416 Ye, Dong Hye COIMG-127 Simmons, Jeffrey P. COIMG-130 Tran, Khanh T. IMAWM-401 Wang, Yangxiao MWSF-536 Yen, Alec COIMG-140 Simonot, Lionel MAAP-203, Traxler, Lukas 3DMP-003 Ward, Greg PMII-579 Yen-Chou, Tai IMSE-374 MAAP-481 Trinkl, Martin AVM-041 Wasserman, Thierry MAAP-483 Yi, Jihyeon PMII-578 Sistu, Ganesh AVM-042 Tripp, Johnathan IMAWM-418 Watanabe, Hayato SD&A-626 Yogamani, Senthil AVM-042, Sitnik, Robert 3DMP-002, Tseng, Benjamin IQSP-313 Watanabe, Shuhei MAAP-476 AVM-044, AVM-048 IMSE-375 Tsujimoto, Yukiko SD&A-638 Watanabe, Takashi IMSE-367 Yonker, Shea ERVR-188 Sjöström, Mårten HVEI-218 Tyler, Christopher HVEI-211, Watanabe, Yuta SD&A-628 Yoshihama, Sachiko MWSF-533 Skorka, Orit AVM-046 HVEI-239 Watson, Andrew EISS-704 Yoshikawa, Keisuke SD&A-648 Skowronski, Moritz ERVR-177 Watts, Patricia MWSF-538 Yotam, Ben-Shoshan COLOR-102 Slabihoud, Ralph IRIACV-463 Weber, David SD&A-641 You, Jane IMAWM-145 Smolic, Aljosa IQSP-323 U Wedel, Simon HVEI-215 Youssfi, Ziad PMII-589 Snow, Jacqueline C. 
HVEI-221 Westheimer, Gerald HVEI-227 Yuan, Chang AVM-038 IMAWM-415 Sobh, Ibrahim AVM-048 Uchida, Hidetsugu IMAWM-403 Widenhorn, Ralf IMSE-369 Yuan, Zhenxun 3DMP-010 Sobhiyeh, Sima Uchida, Kazutaka IPAS-276 Wieland, Jonathan ERVR-177 Sohn, Chan-Young IRIACV-457 Ulichney, Robert COLOR-102 Wikelski, Martin ERVR-177 ERVR-177 Sommer, Björn , Ullah, Mohib IRIACV-466, Wilcox, Laurie M. SD&A-627 Z SD&A-641 VDA-679 Willett, Rebecca IPAS-253 Sone, Takuroh MAAP-476 Unlu, Eren AVM-035 Winer, Eliot ERVR-179 AVM-031 Zakhor, Avideh COIMG-137, Author Index Sorg, Brad IMAWM-414 Unser, Michael COIMG-133 Winkens, Christian , Soubies, Emmanuel COIMG-133 AVM-049 COIMG-138 Uricar, Michal AVM-048 IRIACV-462 Steed, Chad VDA-680 Winterbottom, Marc SD&A-638, Zauner, Gerald , IRIACV-463 Stentiford, Frederick W. SD&A-639 IMAWM-404 Wischgoll, Thomas VDA-675, Zauner, Michael IRIACV-463 Stevenson, Robert L. COIMG-136 V VDA-677 Zebelein, Julian HVEI-215 COIMG-133 Štolc, Svorad 3DMP-003 Witinski, Mark COIMG-139 Zehni, Mona , COIMG-134 Stork, David G. HVEI-234 Valente, Augusto C. IMAWM-406 Wolenski, Peter 3DMP-010 Stuart, Andrew M. IPAS-264 Van Atta, Alexander SD&A-639 Woo, Seongyoun COIMG-148 Zenou, Emmanuel AVM-035 IQSP-323 Sudoh, Yoshifumi IRIACV-460 van Beek, Peter AVM-043, Wu, Alexander ERVR-188 Zerman, Emin IMAWM-421 Sun, Shih-Wei AVM-056 PMII-581 Wu, Hanzhou MWSF-539 Zhang, Cheng VDA-683 Sundaram, Sathya COLOR-099, van de Wouw, Dennis W. Wu, Hongzhi MAAP-482 Zhang, Hui IMAWM-412, IMAWM-413, IRIACV-458, IRIACV-465 Wu, Jiaxin HVEI-205 Zhang, Jiaqi AVM-053 IQSP-300 IMAWM-415, IMAWM-419 van Lankveld, Thijs IRIACV-455 Wu, Min MWSF-535, MWSF-536 Zhang, Runzhe COIMG-136 Suominen, Olli SD&A-631 van Riel, Sjors IRIACV-465 Wu, Wencheng W. 
IPAS-265, Zhang, Shuang 3DMP-001 Süsstrunk, Sabine COIMG-135, Vashist, Abhishek IRIACV-451 IRIACV-454 Zhang, Song , ERVR-187, MAAP-477 Venkatakrishnan, Singanallur Wueller, Dietmar IQSP-315 3DMP-007 COIMG-129 Zhang, Wende AVM-036 Villamar Villarreal, Juan Jose Zhang, Yi PMII-280 T IQSP-314 Zhang, Zihang VDA-681 Vincent, Amal VDA-682 X Zhao, Baiyue COLOR-079 IQSP-310 Zhao, Guoying IMAWM-401, Taburet, Théo MWSF-542 Viola, Irene Xiang, Xiaoyu IMAWM-400, IPAS-269 Takahashi, Keita IQSP-310 Vo, Quang-Nhat IMAWM-401 IQSP-303 Zhao, Zhizhen COIMG-133, Takanashi, Tomoyuki IPAS-267 Vogel, Patrick IQSP-314 IPAS-222 Xu, Lihao COLOR-079, IQSP-309, COIMG-134 Takasawa, Taishi IMSE-358, Voronin, Viacheslav , IPAS-223 IPAS-277 IQSP-326 Zhen, Ada COIMG-136 IMSE-366 , Xu, Mohan EISS-701 Zhong, Sheng-hua HVEI-205 Tanaka, Masayuki IPAS-276 Xu, Shaoyuan IMAWM-402 Zhou, Ruofan COIMG-135 Tanaka, Midori COLOR-089, Xu, Yujian MWSF-526 Zhu, Fengqing MAAP-484 COLOR-104 W Ziga, Kyle MAAP-484 Tandogan, Erkam S. MWSF-545 Tao, Jun VDA-681 Waeny, Martin IMSE-364 Tastl, Ingeborg MAAP-479 Waliman, Matthew COIMG-137 Y Taylor, Holly ERVR-175, SD&A-647, Walowit, Eric COLOR-096 SD&A-654 Wan, Qianwen ERVR-175, Yahiaoui, Lucie AVM-044 Theiss, Andreas IRIACV-462 IPAS-273, SD&A-647, Yamada, Kohei IMSE-356 Thomas, Christina HVEI-226 SD&A-654 Yamaguchi, Shohei SD&A-655

imaging across applications

IS&T DIGITAL LIBRARY

Open Access Papers
EI Symposium Proceedings and the Journal of Perceptual Imaging (JPI)

Library includes

ARCHIVING: Digitization, Preservation, and Access • COLOR and IMAGING • DIGITAL PRINTING and FABRICATION • JOURNAL of Imaging Science and Technology

Also includes proceedings from CGIV: European Conference on Colour in Graphics, Imaging, and Vision, and TDPF: International Symposium on Technologies for Digital Photo Fulfillment. Members have unlimited access to full papers from all IS&T conference proceedings. Members with JIST as their subscription choice have access to all issues of the journal.

www.ingentaconnect.com/content/ist/
Your resource for imaging knowledge.

We hope to see you next year in SFO/Burlingame, CA! 26–31 January 2020