FINAL PROGRAM 26-30 JANUARY • BURLINGAME, CA USA 26-30 JANUARY Imaging Across Applications Across Imaging Where Industry and Academia Meet to discuss to Meet Industry Where discuss and Academia EI2020 Conference Acronym/Name

3DMP 3D Measurement and Data Processing AVM Automonous Vehicles and Machines Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications COIMG Computational Imaging XVIII ERVR The Engineering Reality of Virtual Reality FAIS Food and Agricultural Imaging Systems HVEI Human Vision and Electronic Imaging IPAS Image Processing: Algorithms and Systems XVIII IQSP Image Quality and System Performance XVII IMAWM Imaging and Multimedia Analytics in a Web and Mobile World IRIACV Intelligent Robotics and Industrial Applications using Computer Vision ISS Imaging Sensors and Systems (formerly PMII and IMSE) MAAP Material Appearance MOBMU Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications MWSF Media Watermarking, Security, and Forensics SD&A Steroscopic Displays and Applications XXXI VDA Visualization and Data Analysis

Hyatt San Francisco Airport Floor Plans

Lobby Level

E G C

D D C

F GRAND PENINSULA B

B BALLROOM B SANDPEBBLE C ROOM BAYSIDE HARBOUR REGENCY ROOM REGENCY B ROOM FOYER BALLROOM B A E A A

RESTROOMS RESTROOMS A A

RESTROOMS GRAND PENINSULA FOYER Atrium Level

RESTROOMS

CONFERENCE OFFICE

3 SIXTY FRONT DESK RESTAURANT V

THE GROVE

B IV

3 SIXTY BAR SEQUOIA ROOM RESTROOMS BOARD ROOMS

MARKET III A

SKY LOUNGE

II

I

C B A

CYPRESS ROOM

26 – 30 January 2020

Hyatt Regency San Francisco Airport 1333 Bayshore Highway Burlingame, California USA

2020 Symposium Co-Chair Welcome Radka Tezaur Intel Corporation (US) On behalf of the Society for Imaging Science and Technology (IS&T), we would like to welcome you to the 32nd annual International Symposium on Electronic Imaging (EI 2020).

The EI Symposium is a premier international meeting that brings together colleagues from 2020 Symposium academia and industry to discuss topics on the forefront of research and innovation in all Co-Chair areas and aspects of Electronic Imaging, from sensors and image capture, through image Jonathan B. Phillips processing and computational imaging, computer vision, human visual perception and Google, Inc. (US) displays, to applications in science, medicine, autonomous driving, arts and entertain- ment, and other fields.

This year we are highlighting the themes of Frontiers in Computational Imaging, Automotive Imaging, and AR/VR Future Technology; these, along with the themes of 3D Imaging, Deep Learning, Medical/Diagnostic Imaging, Robotic Imaging, Security, 2020 Short Course and Remote Sensing are denoted in the Combined Paper Schedule beginning on page Co-Chair 23. In addition, there is a new Food and Agricultural Imaging Systems conference and Elaine Jin a special session in MWSF on Digital vs Physical Document Security held jointly with NVIDIA Corporation (US) Reconnaissance International’s Optical Document Security Conference.

The whole week offers a great opportunity to learn from leading experts from around the world. You can attend talks in any of the 17 different technical conferences, take short courses (26 options), visit the exhibits, and so much more. We encourage you to take full advantage of the many special events and networking opportunities available at the symposium, including the plenary presentations, individual conference keynotes, special 2020 Short Course joint sessions, EI Reception, 3D Theatre, as well as other special events arranged by Co-Chair various conferences. Create your own program using the itinerary planner available on Stuart Perry the EI website. University of Technology Sydney (Australia) You can learn more about the work presented at EI 2020 by accessing EI Conference Proceedings, available via www.electronicimaging.org and on the IS&T Digital Library (ingentaconnect.com/content/ist/ei). Proceedings from the past years, since 2016, can be found at the same location, and they are all open access.

2020 Short Course We look forward to seeing you and welcoming you to this unique event. Co-Chair Maribel Figuera Microsoft Corporation (US) —Radka Tezaur and Jonathan Phillips, EI 2020 Symposium Co-chairs Conference Keynotes

Conference Keynotes Where Industry and Academia Meet to discuss Imaging Across Applications Keynotes

EI SYMPOSIUM LEADERSHIP

EI 2020 Symposium Committee Atanas Gotchev, Tampere University of Symposium Co-Chairs Technology (Finland) Mathieu Hebert, Université Jean Monnet de Radka Tezaur, Intel Corporation (US) Saint Etienne (France) Jonathan B. Phillips, Google, Inc. (US) Ella Hendriks, University of Amsterdam (the Netherlands) Short Course Co-Chairs Nicolas S. Holliman, Newcastle University (UK) Elaine Jin, NVIDIA Corporation (US) Robin Jenkin, NVIDIA Corporation (US) Stuart Perry, University of Technology Sydney David L. Kao, NASA Ames Research Center (Australia) (US) Maribel Figuera, Microsoft Corporation (US) Takashi Kawai, Waseda University (Japan) Qian Lin, HP Labs, HP Inc. (US) At-large Conference Chair Representative Gabriel Marcu, Apple Inc. (US) Adnan Alattar, Digimarc Corporation (US) Mark E. McCourt, University of North Dakota (US) Past Symposium Chair Ian McDowell, Fakespace Labs, Inc. (US) Jon S. McElvain, Dolby Laboratories, Inc. (US) Andrew Woods, Curtin University (Australia) Nasir D. Memon, New York University (US) Jeffrey B. Mulligan, NASA Ames Research IS&T Executive Director Center (US) Suzanne E. Grinnan, IS&T (US) Henry Ngan, ENPS (Hong Kong) Kurt S. Niel, Upper Austria University of Applied EI 2020 Technical Program Committee Sciences (Austria) Sos S. Agaian, College of Staten Island, CUNY Arnaud Peizerat, Commissariat à l’Énergie (US) Atomique-LETI (France) David Akopian, The University of Texas at San William Puech, Laboratoire d’Informatique Antonio (US) de Robotique et de Microelectronique de Adnan M. Alattar, Digimarc Corporation (US) Montpellier (France) Jan Allebach, Purdue University (US) Alessandro Rizzi, Università degli Studi di Nicolas Bonnier, Apple Inc. (US) Milano (Italy) Charles A. Bouman, Purdue University (US) Juha Röning, University of Oulu (Finland) Gregery Buzzard, Purdue University (US) Nitin Sampat, Edmund Optics, Inc. (US) Damon M. Chandler, Shizuoka University Gaurav Sharma, University of Rochester (US) (Japan) Lionel Simonot, Université de Poitiers (France) Yi-Jen Chiang, New York University (US) Robert Sitnik, Warsaw University of Technology Reiner Creutzburg, Technische Hochschule (Poland) Brandenburg (Germany) Robert L. Stevenson, University of Notre Dame Patrick Denny, Valeo Vision Systems (Ireland) (US) Margaret Dolinsky, Indiana University (US) David G. Stork, Consultant (US) Karen Egiazarian, Tampere University of Ingeborg Tastl, HP Labs, HP Inc. (US) Technology (Finland) Peter van Beek, Intel Corporation (US) Reiner Eschbach, Norwegian University Ralf Widenhorn, Portland State University (US) of Science and Technology, and Monroe Thomas Wischgoll, Wright State University (US) Community College (Norway & US) Andrew J. Woods, Curtin University (Australia) Zhigang Fan, Apple Inc. (US) Song Zhang, Mississippi State University (US) Mylène Farias, University of Brasilia (Brazil) Gregg E. Favalora, Draper (US)

IS&T expresses its deep appreciation to the symposium chairs, conference chairs, program committee members, session chairs, and authors who generously give their time Sponsored by and expertise to enrich the Symposium. Society for Imaging Science and Technology (IS&T) EI would not be possible without the 7003 Kilworth Lane • Springfield, VA 22151 dedicated contributions of our participants and members. 703/642-9090 / 703/642-9094 fax / [email protected] / www.imaging.org

2 #EI2020 electronicimaging.org Where Industry and Academia Meet to discuss Imaging Across Applications

Table of Contents Registration Desk Symposium Overview...... 5 EI 2020 Short Courses At-a-Glance...... 6 Onsite Registration and Plenary Speakers...... 7 Badge Pick-Up Hours Special Events...... 8 EI 2020 Conference Keynotes...... 9 Sunday, Jan. 26 ­— 7:00 am to 8:00 pm Joint Sessions...... 15 Monday, Jan. 27 ­— 7:00 am to 5:00 pm Paper Schedule by Day/Time...... 23 Tuesday, Jan. 28 ­— 8:00 am to 5:00 pm Conferences and Papers Program by Session Wednesday, Jan. 29 ­— 8:00 am to 5:00 pm 3D Measurement and Data Processing 2020...... 40 Thursday, Jan. 30 ­— 8:30 am to 4:00 pm Autonomous Vehicles and Machines 2020...... 42 Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications...... 47 Computational Imaging XVIII...... 51 The Engineering Reality of Virtual Reality 2020...... 58 Food and Agricultural Imaging Systems 2020...... 61 Human Vision and Electronic Imaging 2020...... 64 Image Processing: Algorithms and Systems XVIII...... 69 Image Quality and System Performance XVII...... 73 Imaging and Multimedia Analytics in a Web and Mobile World 2020...... 78 Imaging Sensors and Systems 2020...... 82 Intelligent Robotics and Industrial Applications using Computer Vision 2020...... 86 Material Appearance 2020...... 89 Media Watermarking, Security, and Forensics 2020...... 92 Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2020...... 96 Stereoscopic Displays and Applications XXXI...... 99 Visualization and Data Analysis 2020...... 103

EI 2020 Short Courses and Descriptions...... 105 General Information ...... 119 Author Index...... 123

Plan Now to Participate Join us for Electronic Imaging 2021 January 17 – 21, 2021 Parc 55 Hotel downtown San Francisco

electronicimaging.org #EI2020 3 IS&T CORPORATE MEMBERS

SUSTAINING CORPORATE MEMBERS / SYMPOSIUM SPONSORS

®

SUPPORTING CORPORATE MEMBERS

DONOR CORPORATE MEMBERS

IS&T Board of Directors July 2019 - June 2020 President Treasurer Chapter Directors Scott Silence, Corning Inc. Eric Hanson, retired, HP Laboratories Rochester: David Odgers, Odgers Imaging Tokyo: Teruaki Mitsuya, Ricoh Company, Ltd. Executive Vice President Vice Presidents Susan Farnand, Rochester Institute of Masahiko Fujii, Fuji Xerox Co., Ltd. IS&T Executive Director Technology Jennifer Gille, Oculus VR Suzanne E. Grinnan Liisa Hakola, VTT Research Centre of Finland Conference Vice President Sophie Triantaphillidou, University of Graham Finlayson, University of East Anglia Westminster Maria Vanrell, Universitat Autònoma de Publications Vice President Barcelona Robin Jenkin, NVIDIA Michael Willis, Pivotal Resources, Ltd.

Secretary Immediate Past President Dietmar Wueller, Image Engineering GmbH Steven Simkse, Colorado State University & Co. KG Where Industry and Academia Meet to discuss Imaging Across Applications

SYMPOSIUM OVERVIEW EI 2020 Exhibitors Engage with advances in electronic imaging Exhibit Hours Imaging is integral to the human experience and to exciting technology advances taking shape Tuesday 10:00 am – 7:00 pm around us—from personal taken every day with mobile devices, to autonomous imaging Wednesday 10:00 am – 3:30 pm algorithms in self-driving cars, to the mixed reality technology that underlies new forms of entertain- ment, to capturing images of celestial formations, and the latest in image data security. At EI 2020, leading researchers, developers, and entrepreneurs from around the world discuss, learn about, and share the latest imaging developments from industry and academia. EI 2020 includes theme day topics and virtual tracks, plenary speakers, 26 technical courses, 3D theatre, and 17 conferences including cross-topic joint sessions, keynote speakers, and peer- reviewed research presentations.

Symposium Silver Sponsors

Frontiers in Computational Imaging Monday, Jan 27 Plenary Session: Imaging the Unseen: Taking the First Picture of a Black Hole, Katie Bouman (California Institute of Technology) Related Sessions: See Computational Imaging Conference program, beginning on page 51. Automotive Imaging Tuesday, Jan 28 Plenary Session: Imaging in the Autonomous Vehicle Revolution, Gary Hicok (NVIDIA)

Panel Session: Sensors Technologies for Autonomous Vehicles, see details page 18. Short Courses: • Fundamentals of Deep Learning, R. Ptucha (RIT) • Optics and Hardware Calibration of Compact Modules for AR/ VR, Automotive, and Machine Vision Applications, Uwe Artmann (Image Engineering) and Kevin J. Matherson (Microsoft) … and more, see course listing and descriptions, page 105. AR/VR Future Technology Wednesday, Jan 29 Plenary Session: Quality Screen Time: Leveraging Computational Displays for Spatial Computing, Douglas Lanman (Facebook Reality Labs) Related See The Engineering Reality of Virtual Reality 2020 (page 58) and the Sessions: Stereoscopic Displays and Applications XXXI (page 99). Short Course: Color and Calibration in Compact Camera Modules for AR, Machine Vision, Automotive, and Mobile Applications, Uwe Artmann (Image Engineering) and Kevin J. Matherson (Microsoft)

electronicimaging.org #EI2020 5 Where Industry and Academia Meet to discuss Imaging Across Applications Short Courses Short

EI 2020 SHORT COURSES AT-A-GLANCE (see Course Descriptions beginning on page 105) Sunday Jan 26

8:00 - 12:15 8:00 - 12:15 8:00 - 12:15 8:00 - 12:15 8:00 - 12:15 8:00 - 12:15

SC01: SC02: SC03: Optics and SC04: SC05: SC06: Stereoscopic Advanced Image Hardware 3D Point Cloud Computer Imaging Enhancement and Calibration of Processing Image Quality Vision and Image Fundamentals Deblurring Compact Camera Tuning Analysis of Art Modules for AR/VR, Automotive, and Machine Vision Applications

1:30 - 5:45 1:30 - 5:45 1:30 - 3:30 1:30 - 5:45 1:30 - 3:30 SC09: SC11: Color SC07: SC08: Resolution in SC10: Image Optimization for Perceptual Fundamentals Mobile Imaging Quality: Industry Displays Metrics for Image BioInspired Image Devices ... Standards for and Video Processing Mobile, Quality: From 3:45 - 5:45 3:45 - 5:45 Automotive and 3:45 - 5:45 Perceptual SC12: Color & SC13: Normal Machine Vision SC14: Transparency to Calibration in and Defective Applications HDR Theory and Structural Compact Camera Color Vision Technology Equivalence Modules ... Across ... Monday Jan 27 Tuesday Jan 28

8:30 - 12:45 8:30 - 12:45 8:30 - 10:30 8:30 - 12:45 8:30 - 12:45 8:30 - 12:45 SC17: Camera SC15: SC16: Noise Sources SC18: SC20: SC21: 3D Imaging Classical and and its Character- Perception and Fundamentals Production Line Deep ization ... Cognition for Im- of Camera Color Learning-based aging Deep Learning Calibration Computer 10:45 - 12:45 Vision SC19: Camera Image 3:15 - 5:15 3:15 - 5:15 Quality Bench- SC22: SC23: Using marking An Introduction to Cognitive and Blockchain Behavioral Sci- Wed. Jan 29 Thurs. Jan 30 ences ... in AI...

8:30 - 12:45 8:30 - 12:45

SC24: SC26: Imaging Introduction to Applications of Probabilistic Artificial Models for Intelligence Machine Learning 3:15 - 5:15 SC25: Smart- phone Imaging for Secure Applications

6 #EI2020 electronicimaging.org Where Industry and Academia Meet to discuss Imaging Across Applications

PLENARY SPEAKERS

Imaging the Unseen: Taking the First Picture of a Black Hole being used to train, test and operate safe autonomous vehicles. Attendees will walk away with a better understanding of how deep learning, sen- Monday, January 27, 2020 sor fusion, surround vision and accelerated computing are enabling this 2:00 – 3:00 pm deployment. Grand Peninsula Ballroom D Gary Hicok is senior vice president of hardware development at NVIDIA, Katie Bouman, assistant professor, Computing and and is responsible for Tegra System Engineering, which oversees Shield, Mathematical Sciences Department, California Jetson, and DRIVE platforms. Prior to this role, Hicok served as senior Institute of Technology vice president of NVIDIA’s Mobile Business Unit. This vertical focused on NVIDIA’s Tegra mobile processor, which was used to power next-generation This talk will present the methods and procedures used to produce the mobile devices as well as in-car safety and infotainment systems. Before first image of a black hole from the Event Horizon Telescope. It has been that, Hicok ran NVIDIA’s Core Logic (MCP) Business Unit also as senior vice theorized for decades that a black hole will leave a “shadow” on a president. Throughout his tenure with NVIDIA, Hicok has also held a variety background of hot gas. Taking a picture of this black hole shadow could of management roles since joining the company in 1999, with responsibili- help to address a number of important scientific questions, both on the ties focused on console gaming and chipset engineering. He holds a BSEE nature of black holes and the validity of general relativity. Unfortunately, degree from Arizona State University and has authored 33 issued patents. due to its small size, traditional imaging approaches require an Earth-sized radio telescope. In this talk, I discuss techniques we have developed to a black hole using the Event Horizon Telescope, a network of Quality Screen Time: Leveraging Computational telescopes scattered across the globe. Imaging a black hole’s structure with Displays for Spatial Computing this computational telescope requires us to reconstruct images from sparse measurements, heavily corrupted by atmospheric error. The resulting image Wednesday, January 29, 2020 is the distilled product of an observation campaign that collected approxi- 2:00 – 3:00 pm mately five petabytes of data over four evenings in 2017. I will summarize Grand Peninsula Ballroom D how the data from the 2017 observations were calibrated and imaged, explain some of the challenges that arise with a heterogeneous telescope Douglas Lanman, director, Display Systems Research, array like the EHT, and discuss future directions and approaches for event Facebook Reality Labs horizon scale imaging. Displays pervade our lives and take myriad forms, Katie Bouman is an assistant professor in the Computing and Mathematical spanning smart watches, mobile phones, laptops, monitors, televisions, Sciences Department at the California Institute of Technology. Before joining and theaters. Yet, in all these embodiments, modern displays remain largely Caltech, she was a postdoctoral fellow in the Harvard-Smithsonian Center limited to two-dimensional representations. Correspondingly, our applica- for Astrophysics. She received her PhD in the Computer Science and tions, entertainment, and user interfaces must work within the limits of a flat Artificial Intelligence Laboratory (CSAIL) at MIT in EECS. Before coming canvas. Head-mounted displays (HMDs) present a practical means to move to MIT, she received her bachelor’s degree in electrical engineering from forward, allowing compelling three-dimensional depictions to be merged the University of Michigan. The focus of her research is on using emerging seamlessly with our physical environment. As personal viewing devices, computational methods to push the boundaries of interdisciplinary imaging. head-mounted displays offer a unique means to rapidly deliver richer visual experiences than past direct-view displays that must support a full audience. Viewing optics, display components, rendering algorithms, and sensing Imaging in the Autonomous Vehicle Revolution elements may all be tuned for a single user. It is the latter aspect that most differentiates from the past, with individualized eye tracking playing an im- Tuesday, January 28, 2020 portant role in unlocking higher resolutions, wider fields of view, and more 2:00 – 3:00 pm comfortable visuals than past displays. This talk will explore such “compu- Grand Peninsula Ballroom D tational display” concepts and how they may impact VR/AR devices in the coming years. Gary Hicok, senior vice president, hardware devel- opment, NVIDIA Corporation Douglas Lanman is the director of Display Systems Research at Facebook Reality Labs, where he leads investigations into advanced display and To deliver on the myriad benefits of autonomous driv- imaging technologies for augmented and virtual reality. His prior research ing, the industry must be able to develop self-driving technology that is truly has focused on head-mounted displays, glasses-free 3D displays, light-field safe. Through redundant and diverse automotive sensors, algorithms, and , and active illumination for 3D reconstruction and interaction. He high-performance computing, the industry is able to address this challenge. received a BS in applied physics with honors from Caltech (2002), and NVIDIA brings together AI deep learning, with data collection, model his MS and PhD in electrical engineering from Brown University (2006 training, simulation, and a scalable, open autonomous vehicle computing and 2010, respectively). He was a senior research scientist at NVIDIA platform to power high-performance, energy-efficient computing for function- Research from 2012 to 2014, a postdoctoral associate at the MIT Media ally safe self-driving. Innovation of imaging capabilities for AVs has been Lab from 2010 to 2012, and an assistant research staff member at MIT rapidly improving to the point that the cornerstone AV sensors are cameras. Lincoln Laboratory from 2002 to 2005. His most recent work has focused Much like the human brain processes visual data taken in by the eyes, on developing Half Dome: an eye-tracked, wide-field-of-view varifocal AVs must be able to make sense of this constant flow of information, which HMD with AI-driven rendering. requires high-performance computing to respond to the flow of sensor data. This presentation will delve into how these developments in imaging are

electronicimaging.org #EI2020 7 Special Events

Special Events Where Industry and Academia Meet to discuss Imaging Across Applications Special Events

SPECIAL EVENTS

Monday, January 27, 2020 Wednesday, January 29, 2020

All-Conference Welcome Reception Industry Exhibition

The Grove Grand Peninsula Foyer 5:00 pm – 6:00 pm 10:00 am – 3:30 pm Join colleagues for a light reception featuring beer, wine, soft drinks, and EI’s annual industry exhibit provides a unique opportunity to meet company hors d’oeuvres. Make plans to enjoy dinner with old and new friends at representatives working in areas related to electronic imaging. The exhibit one of the many area restaurants. Conference registration badges are highlights products and services, as well as offers the opportunity to meet required for entrance. prospective employers.

SD&A Conference 3D Theatre Interactive Papers (Poster) Session

Grand Peninsula Ballroom D Sequoia 6:00 pm – 7:30 pm 5:30 pm – 7:00 pm Hosted by Andrew J. Woods, Curtin University (Australia) Conference attendees are encouraged to attend the Interactive Papers (Poster) Session where authors display their posters and are available to The 3D Theatre Session of each year’s Stereoscopic Displays and answer questions and engage in in-depth discussions about their work. Applications C onference showcases the wide variety of 3D content that is Please note that conference registration badges are required for entrance being produced and exhibited around the world. All 3D footage screened and that posters may be previewed by all attendees beginning on Monday in the 3D Theatre Session is shown in high-quality polarized 3D on a large afternoon. screen. The final program will be announced at the conference and 3D glasses will be provided. Authors are asked to set up their posters starting at 10:00 am on Monday. Pushpins are provided. Authors must remove poster materials at the conclu- sion of the Interactive Session. Posters not removed after the session are Tuesday, January 28, 2020 considered unwanted and will be removed by staff and discarded. IS&T does not assume responsibility for posters displayed after the Interactive Women in Electronic Imaging Breakfast Session concludes.

Location provided at Registration Desk 7:30 am – 8:45 am Meet the Future: A Showcase of Student and Young Start your day with female colleagues and senior women scientists to Professionals Research share stories and make connections at the Women in Electronic Imaging breakfast. The complimentary breakfast is open to EI full registrants. Space Sequoia is limited to 40 people. Visit the onsite registration desk for more informa- 5:30 pm - 7:00 pm tion about this special event. This annual event will bring invited students together with academic and industry representatives who may have opportunities to offer. Each student is asked to present and discuss their academic work via an interactive Industry Exhibition poster session. Student presenters may expand their professional network and explore employment opportunities with the audience of academic and Grand Peninsula Foyer industry representatives. 10:00 am – 7:30 pm EI’s annual industry exhibit provides a unique opportunity to meet company Conference attendees are encouraged to attend this Showcase to engage representatives working in areas related to electronic imaging. The exhibit with these invited students about their work. Please note that conference highlights products and services, as well as offers the opportunity to meet registration badges are required for entrance. prospective employers. Authors are asked to set up their Showcase posters starting at 10:00 am on Monday. Pushpins are provided. Authors must remove poster materials Symposium Demonstration Session at the conclusion of the Showcase. Posters not removed are considered unwanted and will be removed by staff and discarded. IS&T does not as- Ballrooms E/F/G sume responsibility for posters displayed after the Showcase concludes. 5:30 pm – 7:30 pm This symposium-wide, hands-on, interactive session, which traditionally has showcased the largest and most diverse collection of stereoscopic 2020 Friends of HVEI Banquet and electronic imaging research and products in one location, represents Location provided at Registration Desk a unique networking opportunity. Attendees can see the latest research in 7:00 pm - 10:00 pm action, compare commercial products, ask questions of knowledgeable This annual event brings the HVEI community together for great food and demonstrators, and even make purchasing decisions about a range of elec- convivial conversation. Registration required, online or at the registration tronic imaging products. The demonstration session hosts a vast collection desk. of stereoscopic products providing a perfect opportunity to witness a wide array of stereoscopic displays with your own two eyes.

8 #EI2020 electronicimaging.org Where Industry and Academia Meet to discuss Imaging Across Applications

EI 2020 CONFERENCE KEYNOTES

Monday, January 27, 2020 3D Digitization and Optical Material Interactions Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States)

Automotive Camera Image Quality Joint Session 9:30 – 10:10 am Regency C Session Chair: Luke Cui, Amazon (United States) MAAP-020 8:45 – 9:30 am Capturing and 3D rendering of optical behavior: The physical ap- proach to realism, Martin Ritz, Fraunhofer Institute for Computer Graphics Regency B Keynotes This session is jointly sponsored by: Autonomous Vehicles and Machines Research (Germany) 2020, and Image Quality and System Performance XVII. Martin Ritz has been deputy head of Competence Center Cultural Heritage Digitization at the Fraunhofer Institute for Computer Graphics Research AVM-001 IGD since 2012, prior to which he was a research fellow there in the LED flicker measurement: Challenges, considerations, and updates from department of Industrial Applications (today: Interactive Engineering IEEE P2020 working group, Brian Deegan, Valeo Vision Systems (Ireland) Technologies). In parallel to technical coordination, his research topics Brian Deegan is a senior expert at Valeo Vision Systems. The LED flicker include acquisition of 3D geometry, as well as optical material properties, work Deegan is involved with came about as part of the IEEE P2020 work- meaning light interaction of surfaces up to complete objects, for arbitrary ing group on Automotive Image Quality standards. One of the challenges combinations of light and observer directions. Challenges in both domains facing the industry is the lack of agreed standards for assessing camera are equally design and implementation of algorithms as well as concep- image quality performance. Deegan leads the working group specifically tualization and realization of novel scanning systems in hardware and covering LED flicker. He holds a BS in computer engineering from the with the goal of complete automation in mind. He received his University of Limerick (2004), and an MSc in biomedical engineering from MS in Informatics (2009) from the Technische Universität Darmstadt where the University of Limerick (2005). Biomedical engineering has already his focus in the domain of photogrammetry was the extension of “Multi- made its way into the automotive sector. A good example would be view Stereo” by the advantages of the “Photometric Stereo” approach to driver monitoring. By analyzing a drivers patterns, facial expressions, eye reach better results and more complete measurement data coverage during movements etc, automotive systems can already tell if a driver has become 3D reconstruction. drowsy and provide an alert.

Visibility Watermarking and Recycling Session Chair: Robin Jenkin, NVIDIA Corporation (United States) Session Chair: Adnan Alattar, Digimarc Corporation (United States) 3:30 – 4:10 pm 8:55 – 10:00 am Regency B Cypress A AVM-057 MWSF-017 The automated drive west: Results, Sara Sargent, VSI Labs (United States) Watermarking to turn plastic packaging from waste to asset through Sara Sargent is the engineering project manager with VSI Labs. In this role improved optical tagging, Larry Logan, Digimarc Corporation (United she is the bridge between the client and the VSI Labs team of autono- States) mous solutions developers. She is engaged in all lab projects, leads the Larry Logan is chief evangelist with Digimarc Corporation. Logan is a Sponsorship Vehicle program, and the internship program. She contributes visionary and a risk taker with a talent for finding gamechanging products to social media, marketing & business development. Sargent brings sixteen and building brand recognition that resonates with target audiences. He years of management experience, including roles as engineering project recognizes opportunities in niche spaces, capitalizing on investments manager for automated vehicle projects, project manager for software made. He has a breadth of relationships and media contacts in diverse application development, president of a high powered collegiate rocket industries which expand his reach. Logan holds a BA from the University of team, and involvement in the Century College Engineering Club, and the Arkansas at Fayetteville. St. Thomas IEEE student chapter. Sargent holds a BS in electrical engineer- ing from the University of St. Thomas.

electronicimaging.org #EI2020 9 KEYNOTES

Immersive 3D Display Systems Computation and

Session Chair: Takashi Kawai, Waseda University (Japan) Session Chair: Charles Bouman, Purdue University (United States) 3:30 – 4:30 pm 8:50 – 9:30 am Grand Peninsula D Grand Peninsula B/C SD&A-065 COIMG-089 High frame rate 3D-challenges, issues, and techniques for success, Larry Computation and photography: How the became a cam-

Keynotes Paul, Christie Digital Systems (United States) era, Peyman Milanfar, Google Research (United States) Abstract: Larry Paul shares some of his more than 25 years of experience Peyman Milanfar is a principal scientist / director at Google Research, in the development of immersive 3D display systems and discusses the where he leads Computational Imaging. Previously, he was professor of challenges, issues and successes in creating, displaying, and experiencing electrical engineering at UC Santa Cruz (1999-2014). Most recently, 3D content for audiences. Topics range from working in dome and curved Peyman’s team at Google developed the “Super Res Zoom” pipeline screen projection systems to 3D in use at Los Alamos National Labs to for the phones. Peyman received his BS in electrical engineering working with Ang Lee on “Billy Lynn’s Long Half Time Walk“ and “Gemini and mathematics from UC Berkeley, and his MS and PhD in EECS from Man” at 4K, 120Hz per eye 3D, as well as work with Doug Trumbull on MIT. He founded MotionDSP, which was acquired by Cubic Inc. He is a the 3D Magi format. Paul explores the very important relationship between Distinguished Lecturer of the IEEE Signal Processing Society, and a Fellow the perception of 3D in resolution, frame rate, viewing distance, field of of the IEEE. view, motion blur, angles, color, contrast, and “HDR” and image brightness and how all those things combined add to the complexity of making 3D work effectively. In addition, he discusses his expertise with ac- Technology in Context tive and polarized 3D systems and “color-comb” 6P 3D projection systems. He will also explain the additional value of expanded color volume and Session Chair: Adnan Alattar, Digimarc Corporation (United States) the inter-relationship with HDR on the reproduction of accurate color. 9:00 – 10:00 am Larry Paul is a technologist with more than 25 years of experience in the Cypress A design and deployment of high-end specialty themed entertainment, giant MWSF-102 screens, visualization, and simulation projects. He has passion for and Technology in context: Solutions to foreign propaganda and disin- expertise with true high-frame rate, multi-channel high resolution 2D and 3D formation, Samaruddin Stewart, technology and media expert, Global display solutions and is always focused on solving specific customer chal- Engagement Center, US State Department, and Justin Maddox, adjunct lenges and improving the visual experience. He has his name on 6 pat- professor, Department of Information Sciences and Technology, George ents. A life-long transportation enthusiast, he was on a crew that restored Mason University (United States) a WWII flying wing. He has rebuilt numerous classic cars and driven over Samaruddin Stewart is a technology and media expert with the US 300,000 miles in electric vehicles over the course of more than 21 years. Department of State, Global Engagement Center, based in the San Francisco Bay Area. Concurrently, he manages Journalism 360 with the Online News Association, a global network of storytellers accelerating Tuesday, January 28, 2020 the understanding and production of immersive journalism (AR/VR/XR). Journalism 360 is a partnership between the Google News Initiative, the Knight Foundation, and the Online News Association. From 2016 Human Interaction through mid-2019 he was an invited expert speaker/trainer with the US Session Chair: Robin Jenkin, NVIDIA Corporation (United States) Department of State, speaking on combating disinformation, technical verification of content, and combating violent extremism. He holds a BA 8:50 – 9:30 am in journalism and an MA in mass communication both from Arizona State Regency B University, an MBA from Central European University, and received the AVM-088 John S. Knight Journalism Fellowship, for Journalism and Media Innovation, Regaining sight of humanity on the roadway towards automation, from Stanford University in 2012. Mónica López-González, La Petite Noiseuse Productions (United States) Justin Maddox is an adjunct professor in the Department of Information Mónica López-González is a multilingual cognitive scientist, educator, and Sciences and Technology at George Mason University. Maddox is a coun- practicing multidisciplinary artist. A firm believer in the intrinsic link between terterrorism expert with specialization in emerging technology applications. art and science, she is the cofounder and chief science and art officer at He is the CEO of Inventive Insights LLC, a research and analysis consultan- La Petite Noiseuse Productions. Her company’s work merges questions, cy. He recently served as the deputy coordinator of the interagency Global methods, data, and theory from the visual, literary, musical, and performing Engagement Center, where he implemented cutting-edge technologies to arts with the cognitive, brain, behavioral, health and data sciences. She counter terrorist propaganda. He has led counterterrorism activities at the has recently been a Fellow at the Salzburg Global Seminar in Salzburg, CIA, the State Department, DHS, and NNSA, and has been a Special Austria. Prior to co-founding her company, she worked in the biotech indus- Operations Team Leader in the US Army. Since 2011, Maddox has taught try as director of business development. López-González has pioneered a National Security Challenges, a graduate-level course, requiring students range of multidisciplinary STEamM (science, technology, engineering, art, to devise realistic solutions to key strategic threats. Maddox holds an MA mathematics, medicine) courses for students at Johns Hopkins University, from Georgetown University’s National Security Studies Program and a BA Peabody Institute, and Maryland Institute College of Art. She has been an in liberal arts from St. John’s College, the “great books” school. He has HVEI program committee member since 2015 and was co-founded its ‘Art lived and worked in Iraq, India, and Germany, and can order a drink in & Perception’ session. She holds a BA in psychology and French and a Russian, Urdu and German. MA and PhD in cognitive science, all from Johns Hopkins University.

10 #EI2020 electronicimaging.org KEYNOTES

Sensor Design Technology Remote Sensing in Agriculture II Joint Session Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Session Chairs: Vijayan Asari, University of Dayton (United States) and Arnaud Peizerat, CEA (France) Mohammed Yousefhussien, General Electric Global Research (United 10:30 – 11:10 am States) Regency A 11:40 am – 12:30 pm ISS-115 Cypress B 3D-IC smart image sensors, Laurent Millet and Stephane Chevobbe, This session is jointly sponsored by: Food and Agricultural Imaging Systems CEA/LETI (France) 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020. Laurent Millet received his MS in electronic engineering from PHELMA University, Grenoble, France (2008). Since then, he has been with CEA FAIS-151 LETI, Grenoble, in the smart ICs for and display laboratory Practical applications and trends for UAV remote sensing in agriculture, Keynotes (L3I), where he leads projects in analog design on infra-red and visible Kevin Lang, PrecisionHawk (United States) imaging. His first work topic was high-speed pipeline analog to digital converter for infra-red image sensors. His current field of expertise is 3-D Kevin Lang is general manager of PrecisionHawk’s agriculture business stacked integration technology applied to image sensors, in which he ex- (Raleigh, North Carolina). PrecisionHawk is a commercial drone and data plores highly parallel topologies for high speed and very high-speed vision company that uses aerial mapping, modeling, and agronomy platform chips, by combining fast readout and near sensor digital processing. specifically designed for precision agriculture. Lang advises clients on how to capture value from aerial data collection, artificial intelligence, and advanced analytics in addition to delivering implementation programs. Remote Sensing in Agriculture I Joint Session Lang holds a BS in mechanical engineering from Clemson University and an MBA from Wake Forest University. Session Chairs: Vijayan Asari, University of Dayton (United States) and Mohammed Yousefhussien, General Electric Global Research (United States) Multiple Viewer Stereoscopic Displays

10:50 – 11:40 am Session Chair: Gregg Favalora, The Charles Stark Draper Laboratory, Inc. Cypress B (United States) This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile 4:10 – 5:10 pm World 2020. Grand Peninsula D SD&A-400 FAIS-127 Challenges and solutions for multiple viewer stereoscopic displays, Kurt Managing crops across spatial and temporal scales - The roles of Hoffmeister, Mechdyne Corp. (United States) UAS and satellite remote sensing, Jan van Aardt, Rochester Institute of Abstract: Many 3D experiences, such as movies, are designed for a single Technology (United States) viewer perspective. Unfortunately this means that all viewers must share Jan van Aardt obtained a BSc in forestry (biometry and silviculture spe- that one perspective view. Any viewer positioned away from the design cialization) from the University of Stellenbosch, South Africa (1996). He eye point will see a skewed perspective and less comfortable stereoscopic completed his MS and PhD in forestry, focused on remote sensing (imaging viewing experience. For the many situations where multiple perspectives spectroscopy and light detection and ranging), at Virginia Polytechnic would be desired, we ideally want perspective viewpoints unique to each Institute and State University, (2000 and 2004, respectively). This was viewer’s position and head orientation. Today there are several possible followed by post-doctoral work at the Katholieke Universiteit Leuven, Multiviewer solutions available including personal Head Mounted Displays Belgium, and a stint as research group leader at the Council for Scientific (HMDs), multiple overlapped projection displays, and high frame rate and Industrial Research, South Africa. Imaging spectroscopy and structural projection. Each type of solution and application unfortunately has its (lidar) sensing of natural resources form the core of his efforts, which vary own pros and cons such that there is no one ideal solution. This presenta- between vegetation structural and system state (physiology) assessment. tion will discuss the need for multiviewer solutions as a key challenge for He has received funding from NSF, NASA, Google, and USDA, among stereoscopic displays and multiple participant applications, it will review others, and has published more than 70 peer-reviewed papers and more some historical approaches, the challenges of technologies used and their than 90 conference contributions. van Aardt is currently a professor in the implementation, and finally some current solutions readily available. As we Chester F. Carlson Center for Imaging Science at the Rochester Institute of all live and work in a collaborative world it is only natural our Virtual Real- Technology. ity and data visualization experiences should account for multiple viewers. For collocated participants there are several available solutions now that have built on years of previous development, some of these solutions can Quality Metrics also accommodate remote participants. The intent of this presentation is an enlightened look at multiple viewer stereoscopic display solutions. Session Chair: Robin Jenkin, NVIDIA Corporation (United States) As a co-founder of Mechdyne Corporation, Kurt Hoffmeiste r has been a 10:50 – 11:30 am pioneer and worldwide expert in large-screen virtual reality and simula- Regency B tion system design, installation, and integration. A licensed professional AVM-124 engineer with several patents, Hoffmeister was in charge of evaluating and Automated optimization of ISP hyperparameters to improve computer implementing new AV/IT technology and components into Mechdyne’s vision accuracy, Doug Taylor, Avinash Sharma, Karl St. Arnaud, and Dave solutions. He has contributed to more than 500 Mechdyne projects, includ- Tokic, Algolux (Canada) ing more than 30 projects worth + $1 million investment. Today he consults

electronicimaging.org #EI2020 11 KEYNOTES

as a highly experienced resource for Mechdyne project teams. Hoffmeister Image Capture has been involved in nearly every Mechdyne project for the past 20 years serving in a variety of capacities, including researcher, consultant, Session Chair: Nicolas Bonnier, Apple Inc. (United States) systems designer, and systems engineer. Before co-founding Mechdyne, he 8:50 – 9:50 am spent 10 years in technical and management roles with the Michelin Tire Company’s North American Research Center, was an early employee and Harbour A/B consultant at Engineering Animation, Inc. (now a division of Siemens), and IQSP-190 was a researcher at Iowa State University. His current role at Mechdyne is Camera vs : How electronic imaging changed the game,

Keynotes technology consultant since retiring in 2018. Frédéric Guichard, DXOMARK (France) Frédéric Guichard is the chief technology officer at DXOMARK. He brings an extensive scientific and technical expertise in imaging, cameras, image Wednesday, January 29, 2020 processing and computer vision. Prior to co-founding DxO Labs he was the chief scientist at Vision IQ and prior to that a researcher at Inrets. He did his postdoc internship at Cognitech, after completing his MS and PhD Imaging Systems and Processing Joint Session in mathematics at Ecole normale supérieure (1989-1993), and his PhD Session Chairs: Kevin Matherson, Microsoft Corporation (United States) in applied mathematics from Université Paris Dauphine (1992 – 1994). and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany) Guichard earned his engineering degree from École des Ponts ParisTech (1994 – 1997). 8:50 – 9:30 am Regency A This session is jointly sponsored by: The Engineering Reality of Virtual Real- Digital vs Physical Document Security ity 2020, Imaging Sensors and Systems 2020, and Stereoscopic Displays and Applications XXXI. Session Chair: Gaurav Sharma, University of Rochester (United States) 9:00 – 10:00 am ISS-189 Cypress A Mixed reality guided neuronavigation for non-invasive brain stimula- tion treatment, Christoph Leuze, Stanford University (United States) MWSF-204 Digital vs physical: A watershed in document security, Ian Lancaster, Abstract: Medical imaging is used extensively world-wide to visualize Lancaster Consulting (United Kingdom) the internal anatomy of the human body. Since medical imaging data is traditionally displayed on separate 2D screens, it needs an intermediary Ian Lancaster is a specialist in and authentication and served or well-trained clinician to translate the location of structures in the medical as the general secretary to the International Hologram Manufacturers imaging data to the actual location in the patient’s body. Mixed reality can Association from its founding in 1994 until 2015. Having stepped into a solve this issue by allowing to visualize the internal anatomy in the most part-time role as associate, he is responsible for special projects. Lancaster intuitive manner possible, by directly projecting it onto the actual organs began his career as an arts administrator, working at the Gulbenkian inside the patient. At the Incubator for Medical Mixed and Extended Real- Foundation, among other places. He received a Fellowship from the US ity (IMMERS) in Stanford, we are connecting clinicians and engineers to State Department to survey the video and holographic arts fields in the US, develop techniques that allow to visualize medical imaging data directly which led him to help set up the UK’s first open-access holography studio, overlaid on the relevant anatomy inside the patient, making navigation and where he learned to make holograms. In 1982, Lancaster founded the first guidance for the clinician both simpler and safer. In this presentation I will successful display hologram producer, Third Dimension Ltd, later becom- talk about different projects we are pursuing at IMMERS and go into detail ing the executive director of the Museum of Holography in New York. about a project on mixed reality neuronavigation for non-invasive brain In 1990, he co-founded Reconnaissance International (www.reconnais- stimulation treatment of depression. Transcranial Magnetic Stimulation is a sance.net), where he was managing director for 25 years. In that position non-invasive brain stimulation technique that is used increasingly for treating he founded Holography News, Authentication News, Pharmaceutical depression and a variety of neuropsychiatric diseases. To be effective the AntiCounterfeiting News, and Currency News; directed Reconnaissance’s clinician needs to accurately stimulate specific brain networks, requiring ac- anti-counterfeiting, product protection, and holography conferences; was curate stimulator positioning. In Stanford we have developed a method that chief analyst and writer of the company’s holography industry reports; allows the clinician to “look inside” the brain to see functional brain areas led the company into the pharmaceutical anti-counterfeiting field; and ex- using a mixed reality device and I will show how we are currently using panded the company into the currency and tax stamps fields. In 2015, he this method to perform mixed reality-guided brain stimulation experiments. was awarded the Russian Optical Society’s Denisyuk Medal for services to holography worldwide and the Chinese Security Identification Union’s Blue Christoph Leuze is a research scientist in the Incubator for Medical Mixed Shield award for lifetime achievement in combating counterfeits. For more and Extended Reality at Stanford University where he focuses on techniques https://www.lancaster-consult.com/. for visualization of MRI data using virtual and augmented reality devices. He published BrainVR, a virtual reality tour through his brain and is closely working with clinicians on techniques to visualize and register medical imaging data to the real world using optical see-through augmented reality devices such as the Microsoft Hololens and the Magic Leap One. Prior to joining Stanford, he worked on high-resolution brain MRI measurements at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, for which he was awarded the Otto Hahn medal by the Max Planck Society for outstanding young researchers.

12 #EI2020 electronicimaging.org KEYNOTES

Personal Health Data and Surveillance Visualization Facilities Joint Session Session Chair: Jan Allebach, Purdue University (United States) Session Chairs: Margaret Dolinsky, Indiana University (United States) and Andrew Woods, Curtin University (Australia) 9:10 – 10:10 am Cypress B 4:10 – 5:10 pm IMAWM-211 Grand Peninsula D Health surveillance, Ramesh Jain, University of California, Irvine This session is jointly sponsored by: The Engineering Reality of Virtual Real- (United States) ity 2020, and Stereoscopic Displays and Applications XXXI. Ramesh Jain is a scientist and entrepreneur in the field of information The keynote will be co-presented by Derek Van Tonder and Andy and computer science. He is a Bren Professor in Information & Computer McCutcheon. Sciences, Donald Bren School of Information and Computer Sciences,

University of California, Irvine. Prior to this he was a professor at University ERVR-295 Keynotes of Michigan, Ann Arbor; University of California, San Diego; and Georgia Social holographics: Addressing the forgotten human factor, Derek Van Tech. His research interests started in cybernetic systems, then pattern Tonder and Andy McCutcheon, Euclideon Holographics (Australia) recognition, computer vision, and artificial intelligence. He coauthored Abstract: With all the hype and excitement surrounding Virtual and the first computer vision paper addressing analysis of real video sequence Augmented Reality, many people forget that while powerful technology of a traffic scene. After working on several aspects of computer vision can change the way we work, the human factor seems to have been left systems and machine vision, he realized that to solve hard computer vision out of the equation for many modern-day solutions. For example, most problems one must include all available information from other signals and modern Virtual Reality HMDs completely isolate the user from their external contextual sources, which led to work in developing multimedia computing environment, causing a wide variety of problems. “See-Through” technol- systems. Jain participated in developing the concept of immersive as well ogy is still in its infancy. In this submission we argue that the importance as multiple perspective interactive videos, to use multiple video cameras to of the social factor outweighs the headlong rush towards better and more build 3D video where a person can decide what they want to experience. realistic graphics, particularly in the design, planning and related engineer- Since 2012, he has been engaged in developing a navigational ap- ing disciplines. Large-scale design projects are never the work of a single proach to guide people to achieve personal health goals. He founded/co- person, but modern Virtual and Augmented Reality systems forcibly channel founded multiple startup companies including Imageware, Virage, Praja, users into single-user simulations, with only very complex multi-user solutions and Seraja. He has served as chairman of ACM SIG Multimedia; was slowly becoming available. In our presentation, we will present three founding editor-in-chief of IEEE MultiMedia magazine and the Machine different Holographic solutions to the problems of user isolation in Virtual Vision and Applications journal; and serves on the editorial boards of sev- Reality, and discuss the benefits and downsides of each new approach. eral journals. He is a Fellow of ACM, IEEE, IAPR, AAAI, AAAS, and SPIE, With all the hype and excitement surrounding Virtual and Augmented and has published more than 400 research papers. Jain holds a PhD from Reality, many people forget that while powerful technology can change Indian Institute of Technology in India. the way we work, the human factor seems to have been left out of the equation for many modern-day solutions. For example, most modern Virtual Reality HMDs completely isolate the user from their external environment, Image Processing causing a wide variety of problems. “See-Through” technology is still in its Session Chair: Robin Jenkin, NVIDIA Corporation (United States) infancy. In this submission we argue that the importance of the social factor outweighs the headlong rush towards better and more realistic graphics, 3:30 – 4:10 pm particularly in the design, planning and related engineering disciplines. Regency B Large-scale design projects are never the work of a single person, but modern Virtual and Augmented Reality systems forcibly channel users into AVM-262 single-user simulations, with only very complex multi-user solutions slowly Deep image processing, Vladlen Koltun, Intel Labs (United States) becoming available. In our presentation, we will present three different Vladlen Koltun is the chief scientist for Intelligent Systems at Intel. He directs Holographic solutions to the problems of user isolation in Virtual Reality, the Intelligent Systems Lab, which conducts high-impact basic research in and discuss the benefits and downsides of each new approach. computer vision, machine learning, robotics, and related areas. He has mentored more than 50 PhD students, postdocs, research scientists, and Derek Van Tonder is senior business development manager specializing in PhD student interns, many of whom are now successful research leaders. B2B product sales and project management with Euclideon Holographics in Brisbane Australia. Van Tonder began his career in console game development with I-Imagine, then was a senior developer with Pandemic Studios, a senior engine programmer with Tantalus Media and then Sega Studios, and a lecturer in game programming at Griffith University in Brisbane. In 2010, he founded Bayside Games to pursue development of an iOS game called “Robots Can’t Jump” written from scratch in C++. In 2012 he joined Euclideon, transitioning from leading software develop- ment to technical business development. In 2015 he joined Taylors—apply- ing VR technology to urban development. Currently Van Tonder is involved with several projects, including a Safe Site Pty Ltd developing a revolu- tionary new Immersive Training software platform, and a CSIRO Data61 Robotics and Autonomous Systems Group to produce a Windows port of the “Wildcat” robotics software framework, which functions as the “brains” of a range of different robotics platforms. Andy McCutcheon is a former Special Forces Commando who transitioned

electronicimaging.org #EI2020 13 KEYNOTES

into commercial aviation as a pilot after leaving the military in 1990. Thursday, January 30, 2020 He dove-tailed his specialized skill-set to become one of the world’s most recognizable celebrity bodyguards, working with some of the biggest names in music and film before moving to Australia in 2001. In 2007, he Multisensory and Crossmodal Interactions pioneered the first new alcohol beverages category in 50 years with his unique patented ‘Hard Iced Tea,’ which was subsequently sold in 2013. Session Chair: Lora Likova, Smith-Kettlewell Eye Research Institute He is the author of two books and is currently the Global Sales Manager, (United States) Aerospace & Defence for Brisbane based Euclideon Holographics, 9:10 – 10:10 am Keynotes recently named ‘Best Technology Company’ in 2019. Grand Peninsula A HVEI-354 2020 Friends of HVEI Banquet Multisensory interactions and plasticity – Shooting hidden assumptions, revealing postdictive aspects, Shinsuke Shimojo, California Institute of Hosts: Damon Chandler, Shizuoka University (Japan); Mark McCourt, Technology (United States) North Dakota State University (United States); and Jeffrey Mulligan, NASA Ames Research Center (United States) Shinsuke Shimojo is professor of biology and principle investigator with the Shimojo Psychophysics Laboratory at California Institute of Technology, 7:00 – 10:00 pm one of the few laboratories at Caltech that exclusively concentrates on the Offsite Restaurant study of perception, cognition, and action in humans. The lab employs This annual event brings the HVEI community together for great food and psychophysical paradigms and a variety of recording techniques such as convivial conversation. Registration required, online or at the registration eye tracking, functional magnetic resonance imaging (fMRI), and electroen- desk. Location will be provided with registration. cephalogram (EEG), as well as, brain stimulation techniques such as tran- scranial magnetic stimulation (TMS), transcranial direct current stimulation HVEI-401 (tDCS), and recently ultrasound neuromodulation (UNM). The research tries Perception as inference, Bruno Olshausen, UC Berkeley (United States) to bridge the gap between cognitive and neurosciences and to understand Bruno Olshausen is a professor in the Helen Wills Neuroscience Institute, how the brain adapts real-world constraints to resolve perceptual ambiguity the School of Optometry, and has a below-the-line affiliated appoint- and to reach ecologically valid, unique solutions. In addition to continuing ment in EECS. He holds a BS and a MS in electrical engineering from interest in surface representation, motion perception, attention, and action, Stanford University, and a PhD in computation and neural systems from the research also focuses on crossmodal integration (including VR environ- the California Institute of Technology. He did his postdoctoral work in ments), visual preference/attractiveness decision, social brain, flow and the Department of Psychology at Cornell University and at the Center for choke in the game-playing brains, individual differences related to “neural, Biological and Computational Learning at the Massachusetts Institute of dynamic fingerprint” of the brain. Technology. From 1996-2005 he was on the faculty in the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a Visualization and Cognition multidisciplinary research group focusing on building mathematical and computational models of brain function (see http://redwood.berkeley. Session Chair: Thomas Wischgoll, Wright State University (United States) edu). Olshausen’s research focuses on understanding the information pro- 2:00 – 3:00 pm cessing strategies employed by the visual system for tasks such as object Regency C recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achiev- VDA-386 Augmenting cognition through data visualization, ing performance anywhere close to that exhibited by biological vision Alark Joshi, University systems has proven elusive. Olshausen’s approach is based on studying of San Francisco (United States) the response properties of neurons in the brain and attempting to construct Alark Joshi is a data visualization researcher and an associate professor mathematical models that can describe what neurons are doing in terms of of computer science at the University of San Francisco. He has published a functional theory of vision. The aim of this work is not only to advance research papers in the field of data visualization and has been on award- our understanding of the brain but also to devise new algorithms for image winning panels at the top Data Visualization conferences. His research analysis and recognition based on how brains work. focuses on develop warded the Distinguished Teaching Award at the University of San Francisco in 2016. He received his postdoctoral train- ing at Yale University and PhD in computer science from the University of Maryland Baltimore County.

14 #EI2020 electronicimaging.org 9:10 electronicimaging.org #EI2020 electronicimaging.org Danli Wang, Chinese AcademyofSciences(China) (JIST Visual fatigueassessmentbasedonmultitasklearning University (Canada) slant (JIST-first),JonathanTong, Allison,andLaurieWilcox,York Robert inVRheadsetsonperceivedsurface The impactofradialdistortions Medical School(UnitedStates) Eli Peli,Harvard image acquisitionanddisplayparameters(JIST Stereoscopic 3Dopticflowdistor 8:50 2020, andStereoscopicDisplaysApplicationsXXXI. This sessionisjointlysponsoredby:HumanVisionandElectronicImaging Grand PeninsulaD 8:45 –10:10am (United States) Mulligan,NASAAmesResearchCenter Kingdom) andJeffrey Session Chairs:NicolasHolliman,UniversityofNewcastle(United Human FactorsinStereoscopicDisplays drowsy andprovideanalert. movements etc,automotivesystemscanalreadytellifadriverhasbecome facialexpressions,eye driver monitoring.Byanalyzingadriverspatterns, made itswayintotheautomotivesector. Agood examplewouldbe the UniversityofLimerick(2005).Biomedicalengineeringhasalready University ofLimerick(2004),andanMScinbiomedicalengineeringfrom covering LEDflicker image qualityperfor isthe lack ofagreedstandardsforassessingcamera facing theindustry ing grouponAutomotiveImageQualitystandards.Oneofthechallenges oftheIEEEP2020work- work Deeganisinvolvedwithcameaboutaspart atV Brian Deeganisaseniorexpert IEEE P2020workinggroup,BrianDeegan,Valeo VisionSystems(Ireland) LED flickermeasurement:Challenges,considerationsandupdatesfrom XVII. 2020, andImageQualitySystemPerformance This sessionisjointlysponsoredby:AutonomousVehicles andMachines Regency B 8:45 –9:30am Session Chair: LukeCui,Amazon(UnitedStates) KEYNOTE: AutomotiveCameraImageQuality Monday, 27,2020 January JOINT SESSIONS 9:30 . HeholdsaBSincomputerengineeringfromthe

mance. Deeganleadstheworkinggroupspecifically tions causedbymismatchesbetween aleo VisionSystems.TheLEDflicker

-first), AlexHwangand

J J -first), oint oint S S SD&A-011 to discussImagingAcrossApplications AVM-001 HVEI-010 HVEI-009 Where Industry andAcademiaMeet Where Industry ession ession

11:50 China and Continental AG(Germany) separation probabilityandcontrastdetectionprobability, MarcGeese, Automotive imagequalityconceptsforthenextSAElevels:Color Image EngineeringGmbH&Co.KG(Germany) A newdimensioningeometriccameracalibration, 9:30 XVII. 2020, andImageQualitySystemPerformance This sessionisjointlysponsoredby:AutonomousVehicles andMachines Regency B 9:30 –10:10am Session Chair:LukeCui,Amazon(UnitedStates) Automotive CameraImageQuality Di Zhang Depth sensitivityinvestigationonmulti-viewglasses-free3Ddisplay 9:50 (JIST-first), RobinJenkin,NVIDIACorporation(UnitedStates) Comparison ofdetectabilityindexandcontrastdetectionprobability Orit Skorka,ONSemiconductor(UnitedStates) model,PaulKaneand Object detectionusinganidealobserver Jenkin, NVIDIACorporation(UnitedStates) Prediction andfastestimationofcontrastdetectionprobability Wang RochesterInstituteofTechnology andSusanFarnand, (UnitedStates) Mingming realitydrivingsimulationplatform, Demonstration ofavirtual LLC (UnitedStates) Sumner,Describing andsamplingtheLEDflickersignal,Robert Imatest, 10:50 XVII. and SystemPerformance 2020, HumanVisionandElectronicImagingImageQuality This sessionisjointlysponsoredby:AutonomousVehicles andMachines Regency B 10:50 am–12:30pm Session Chair:RobinJenkin,NVIDIACorporation(UnitedStates) Predicting CameraDetectionPerformance 9:50 11:30 11:10 12:10 1 , XinzhuSang 2 Beijing UniversityofPostsandTelecommunications (China)

2 , andPengWang

2 ; 1 Communication Universityof

Dietmar Wueller,

J J oint oint , Robin S S SD&A-012 IQSP-018 IQSP-039 AVM-041 AVM-038 AVM-019 AVM-040 AVM-042 ession ession ,

15

Joint Sessions JOINT SESSIONS

Perceptual Image Quality Joint Session Drone Imaging I Joint Session Session Chairs: Mohamed Chaker Larabi, Université de Poitiers (France) Session Chairs: Andreas Savakis, Rochester Institute of Technology (United and Jeffrey Mulligan, NASA Ames Research Center (United States) States) and Grigorios Tsagkatakis, Foundation for Research and Technology 3:30 – 4:50 pm (FORTH) (Greece) Grand Peninsula A 8:45 – 10:10 am This session is jointly sponsored by: Human Vision and Electronic Imaging Cypress 2020, and Image Quality and System Performance XVII. This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile 3:30 IQSP-066 World 2020. Perceptual quality assessment of enhanced images using a crowd- sourcing framework, Muhammad Irshad1, Alessandro Silva1,2, Sana 8:50 IMAWM-084 Alamgeer1, and Mylène Farias1; 1University of Brasilia and 2IFG (Brazil) A new training model for object detection in aerial images, Geng Yang1, Yu Geng2, Qin Li1, Jane You3, and Mingpeng Cai1; 1Shenzhen Institute of 3:50 IQSP-067 Information Technology (China), 2Shenzhen Shangda Xinzhi Information Perceptual image quality assessment for various viewing conditions Technology Co., Ltd. (China), and 3The Hong Kong Polytechnic University 1 2 2

Joint Sessions and display systems, Andrei Chubarau , Tara Akhavan , Hyunjin , (Hong Kong) Rafal Mantiuk3, and James Clark1; 1McGill University (Canada), 2IRYStec Software Inc. (Canada), and 3University of Cambridge (United Kingdom) 9:10 IMAWM-085 Small object bird detection in drone videos using mask R-CNN 4:10 HVEI-068 deep learning, Yasmin Kassim1, Michael Byrne1, Cristy Burch2, Kevin Improved temporal pooling for perceptual video quality assessment Mote2, Jason Hardin2, and Kannappan Palaniappan1; 1University of using VMAF, Sophia Batsi and Lisimachos Kondi, University of Ioannina Missouri and 2Texas Parks and Wildlife (United States) (Greece) 9:30 IMAWM-086 4:30 HVEI-069 High-quality multispectral image generation using conditional GANs, Quality assessment protocols for omnidirectional video quality evalua- Ayush Soni, Alexander Loui, Scott Brown, and Carl Salvaggio, Rochester tion, Ashutosh Singla, Stephan Fremerey, Werner Robitza, and Alexander Institute of Technology (United States) Raake, Technische Universität Ilmenau (Germany) 9:50 IMAWM-087 Deep Ram: Deep neural network architecture for oil/gas pipeline right- Tuesday, January 28, 2020 of-way automated monitoring, Ruixu Liu, Theus Aspiras, and Vijayan Asari, University of Dayton (United States)

Skin and Deep Learning Joint Session Session Chairs: Alessandro Rizzi, Università degli Studi di Milano (Italy) Video Quality Experts Group I Joint Session and Ingeborg Tastl, HP Labs, HP Inc. (United States) Session Chairs: Kjell Brunnström, RISE Acreo AB (Sweden) and Jeffrey 8:45 – 9:30 am Mulligan, NASA Ames Research Center (United States) Regency C 8:50 – 10:10 am This session is jointly sponsored by: Color Imaging XXV: Displaying, Pro- Grand Peninsula A cessing, Hardcopy, and Applications, and Material Appearance 2020. This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII. 8:50 MAAP-082 Beyond color correction: Skin color estimation in the wild through deep 8:50 HVEI-090 learning, Robin Kips, Quoc Tran, Emmanuel Malherbe, and Matthieu The Video Quality Experts Group - Current activities and research, Perrot, L’Oréal Research and Innovation (France) Kjell Brunnström1,2 and Margaret Pinson3; 1RISE Acreo AB (Sweden), 2Mid 3 9:10 COLOR-083 Sweden University (Sweden), and National Telecommunications and SpectraNet: A deep model for skin oxygenation measurement from Information Administration, Institute for Telecommunications Sciences (United multi-spectral data, Ahmed Mohammed, Mohib Ullah, and Jacob Bauer, States) Norwegian University of Science and Technology (Norway) 9:10 HVEI-091 Quality of experience assessment of 360-degree video, Anouk van Kasteren1,2, Kjell Brunnström1,3, John Hedlund1, and Chris Snijders2; 1RISE Research Institutes of Sweden AB (Sweden), 2University of Technology Eindhoven (the Netherlands), and 3Mid Sweden University (Sweden)

16 #EI2020 electronicimaging.org für Informatik and für Informatik Azadeh AsadiShahmirzadi Corriveau of Commerce(UnitedStates), 9:50 electronicimaging.org #EI2020 electronicimaging.org Technology (NTNU)(Norway) UniversityofScienceand beyond, JonYngveHardeberg,Norwegian From colorandspectralreproduction toappearance,BRDF, and 10:40 cessing, Hardcopy, andApplications,MaterialAppearance2020. This sessionisjointlysponsoredby:ColorImagingXXV: Displaying,Pro- Regency C 10:40 am–12:30pm (France) UniversitéJeanMonnetdeSaintEtienne Session Chair:MathieuHebert, Color andAppearanceReproduction Technology (UnitedStates) BlakesleeandAndreasSavakis,RochesterInstituteof detection, Bryan LambdaNet: Afullyconvolutionalarchitecturefordirectionalchange World 2020. 2020, andImagingMultimediaAnalyticsinaWeb andMobile This sessionisjointlysponsoredby:FoodandAgriculturalImagingSystems Cypress B 10:30 –10:50am (Greece) Grigorios Tsagkatakis, FoundationforResearchandTechnology (FORTH) Session Chairs:VijayanAsari,UniversityofDayton(UnitedStates)and Drone ImagingII A multispectraldatasetofoilandwatercolorpaints, 1 image sensorsdesign,AxelClouet Visible tonearinfraredreflectancehyperspectralimagesdatasetfor 9:30 cessing, Hardcopy, andApplications,MaterialAppearance2020. This sessionisjointlysponsoredby:ColorImagingXXV: Displaying,Pro- Regency C 9:30 –10:10am Session Chair:IngeborgTastl, HPLabs,Inc.(UnitedStates) Spectral Dataset and LohicFotioTiotsop, PolitecnicodiTorino (Italy) EnricoMasala, ity measuresthroughtheITS4Sdataset,AntonioServetti, Investigating predictionaccuracyoffullreferenceobjectivevideoqual 3 reference imageandvideoqualitymetrics,MargaretPinson Open softwareframeworkforcollaborativedevelopmentofno 9:30 JOINT SESSIONS 9:50 Communications (UnitedStates) CEA-LETI and AGH UniversityofScienceandTechnology (Poland),and 2 , MikolajLeszczuk 2 CEA-LITEN (France) 2 Consultant (Germany)

2 , andHans-PeterSeidel 3 , andMichaelColligan 2

Intel Corporation(UnitedStates),

1 , JérômeVaillant

1 1 ; , andCéliaViola 4 ; 1 Vahid Babaei Max-Planck-Institut 1 US Department US Department 4 Spirent J J J 1 oint oint oint , Philip IMAWM-114 MAAP-107 MAAP-396 MAAP-106 S S S HVEI-092 HVEI-093 ession ession ession 1 , 2 ; - 11:50 Inc. (UnitedStates) 11:30 10:50 –11:40am States) Mohammed Yousefhussien, GeneralElectricGlobal Research(United Session Chairs: VijayanAsari,UniversityofDayton(UnitedStates)and KEYNOTE: RemoteSensinginAgricultureI Wei-Chung Cheng,USFoodandDrugAdministration(UnitedStates) microscope forcolortissueslidesassessment, estimationofareferencehyperspectral Colorimetrical performance Arai, andTakahiko Horiuchi,ChibaUniversity(Japan) material objectsondisplaydevices(JIST-first),MidoriTanaka, Ryusuke Parameter estimationofPuRetalgorithmformanagingappearance and Technology (Norway) UniversityofScience Habib, PhilGreen,andPeterNussbaum,Norwegian Spectral reproduction:Drivers,usecases,andworkflow 2020, and Image Quality and System Performance XVII. 2020, andImageQualitySystemPerformance This sessionisjointlysponsoredby:HumanVisionandElectronicImaging Grand PeninsulaA 10:50 am–12:30pm Session Chair:KjellBrunnström,RISEAcreoAB(Sweden) GroupII Video QualityExperts Science attheRochesterInstituteofTechnology, NewYork. Aardt iscurrentlyaprofessorintheChesterF. CarlsonCenterforImaging 70 peer-reviewedpapersandmorethan90conferencecontributions. van NASA, Google,andUSDA,amongothers,haspublishedmorethan system state(physiology)assessment.HehasreceivedfundingfromNSF, betweenvegetationstructuraland whichvary thecoreofhisefforts, form Imaging spectroscopyandstructural(lidar)sensingofnaturalresources leader attheCouncilforScientificandIndustrialResearch,SouthAfrica. Katholieke UniversiteitLeuven,Belgium,andastintasresearchgroup and 2004,respectively).Thiswasfollowedbypost-doctoralworkatthe Polytechnic InstituteandStateUniversity, Blacksburg,Virginia (2000 ing (imagingspectroscopyandlightdetectionranging),attheVirginia (1996). HecompletedhisMSandPhDinforestry, focusedonremotesens- cialization) fromtheUniversityofStellenbosch,SouthAfrica andsilviculturespe- (biometry Jan vanAardtobtainedaBScinforestry Technology (UnitedStates) UAS andsatelliteremotesensing,JanvanAardt,RochesterInstituteof Managing cropsacrossspatialandtemporalscales-Therolesof World 2020. 2020, andImagingMultimediaAnalyticsinaWeb and Mobile This sessionisjointlysponsoredby:FoodandAgriculturalImagingSystems Cypress B 3D printers,IngeborgTastl HP 3Dcolorgamut–AreferencesystemforHP’s JetFusion580color 11:10 12:10

1 andAlexandraJu 2 ; Paul Lemailletand 1 HP Labs,Inc.and

, Tanzima J J oint oint COLOR-122 COLOR-121 COLOR-123 MAAP-120 S S FAIS-127 ession ession

2 HP 17

Joint Sessions JOINT SESSIONS

10:50 HVEI-128 Systems (United States) Quality evaluation of 3D objects in mixed reality for different lighting conditions, Jesús Gutiérrez, Toinon Vigier, and Patrick Le Callet, Université 3:30 – 5:30 pm de Nantes (France) Regency A This session is jointly sponsored by: Autonomous Vehicles and Machines 11:10 HVEI-129 2020, and Imaging Sensors and Systems 2020. A comparative study to demonstrate the growing divide between 2D and 3D gaze tracking quality, William Blakey1,2, Navid Hajimirza1, and Imaging sensors are at the heart of any self-driving car project. However, Naeem Ramzan2; 1Lumen Research Limited and 2University of the West of selecting the right technologies isn’t simple. Competitive products span a Scotland (United Kingdom) gamut of capabilities including traditional visible-light cameras, thermal cameras, lidar, and radar. Our session includes experts in all of these 11:30 HVEI-130 areas, and in emerging technologies, who will help us understand the Predicting single observer’s votes from objective measures using neural strengths, weaknesses, and future directions of each. Presentations by the networks, Lohic Fotio Tiotsop1, Tomas Mizdos2, Miroslav Uhrina2, Peter speakers listed below will be followed by a panel discussion. Pocta2, Marcus Barkowsky3, and Enrico Masala1; 1Politecnico di Torino (Italy), 2Zilina University (Slovakia), and 3Deggendorf Institute of Technology Introduction: David Cardinal, ExtremeTech.com, Moderator (DIT) (Germany) David Cardinal has had an extensive career in high-tech, including as

Joint Sessions a general manager at Sun Microsystems and co-founder and CTO of 11:50 HVEI-131 A simple model for test subject behavior in subjective experiments, FirstFloor Software and Calic o Commerce. More recently he operates a technology consulting business and is a technology journalist, writing for Zhi Li1, Ioannis Katsavounidis2, Christos Bampis1, and Lucjan Janowski3; publications including PC Magazine, Ars Technica, and ExtremeTech.com. 1Netflix, Inc. (United States),2 Facebook, Inc. (United States), and 3AGH University of Science and Technology (Poland) LiDAR for Self-driving Cars: Nikhil Naikal, VP of Software Engineering, 12:10 HVEI-132 Velodyne Characterization of user generated content for perceptually-optimized Nikhil Naikal is the VP of software engineering at Velodyne Lidar. He video compression: Challenges, observations, and perspectives, Suiyi joined the company through their acquisition of Mapper.ai where he Ling1,2, Yoann Baveye1,2, Patrick Le Callet2, Jim Skinner3, and Ioannis was the founding CEO. At Mapper.ai, Naikal recruited a skilled team of Katsavounidis3; 1CAPACITÉS (France), 2Université de Nantes (France), and scientists, engineers and designers inspired to build the next generation of 3Facebook, Inc. (United States) high precision machine maps that are crucial for the success of self-driving vehicles. Naikal developed his passion for self driving technology while working with Carnegie Mellon University’s Tartan Racing team that won the KEYNOTE: Remote Sensing in Agriculture II Joint Session DARPA Urban Challenge in 2007 and honed his expertise in high preci- sion navigation while working at Robert Bosch research and subsequently Session Chairs: Vijayan Asari, University of Dayton (United States) and Flyby Media, which was acquired by Apple in 2015. Naikal holds a Mohammed Yousefhussien, General Electric Global Research (United PhD in electrical engineering from UC Berkeley, and a MS in robotics from States) Carnegie Mellon University. 11:40 am – 12:30 pm Challenges in Designing Cameras for Self-driving Cars: Nicolas Cypress B Touchard, VP of Marketing, DXOMARK This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile Nicolas Touchard leads the development of new business opportunities World 2020. for DXOMARK, including the recent launch of their new Audio Quality Benchmark, and innovative imaging applications including automotive. FAIS-151 Starting in 2008 he led the creation of .com, now a reference Practical applications and trends for UAV remote sensing in agriculture, for scoring the image quality of DSLRs and . Prior to DxO, Kevin Lang, PrecisionHawk (United States) Touchard spent 15+ years at managing international R&D teams, Kevin Lang is general manager of PrecisionHawk’s agriculture business where he initiated and headed the company’s worldwide mobile imaging (Raleigh, North Carolina). PrecisionHawk is a commercial drone and data R&D program. company that uses aerial mapping, modeling, and agronomy platform Using Thermal Imaging to Help Cars See Better: Mike Walters, VP of specifically designed for precision agriculture. Lang advises clients on how Product Management for Thermal Cameras, FLIR Systems to capture value from aerial data collection, artificial intelligence, and advanced analytics in addition to delivering implementation programs. Abstract: The existing suite of sensors deployed on autonomous vehicles Lang holds a BS in mechanical engineering from Clemson University and today have proven to be insufficient for all conditions and roadway an MBA from Wake Forest University. scenarios. That’s why automakers and suppliers have begun to examine complementary sensor technology, including thermal imaging, or long- wave infrared (LWIR). This presentation will explore and show how thermal sensors detect a different part of the electromagnetic spectrum compared to PANEL: Sensors Technologies for Autonomous Vehicle Joint Session other existing sensors, and thus are very effective at detecting living things, Panel Moderator: David Cardinal, Cardinal Photo & Extremetech.com including pedestrians, and other important roadside objects in challenging (United States) conditions such as complete darkness, in cluttered city environments, in Panelists: Sanjai Kohli, Visible Sensors, Inc. (United States); Nikhil Naikal, direct sun glare, or in inclement weather such as fog or rain. Velodyne Lidar (United States); Greg Stanley, NXP Semiconductors (United States); Alberto Stochino, Perceptive Machines (United States); Nicolas Touchard, DXOMARK Image Labs (France); and Mike Walters, FLIR

18 #EI2020 electronicimaging.org electronicimaging.org #EI2020 electronicimaging.org Chanas, andFrédéricGuichard,DXOMARK (France) DXOMARK objectivevideoqualitymeasurements,EmilieBaudin,Laurent 3:30 XVII. 2020, andImageQualitySystemPerformance This sessionisjointlysponsoredby:HumanVisionandElectronicImaging Grand PeninsulaA 3:30 –5:10pm Session Chair: JonathanPhillips,GoogleInc.(UnitedStates) Radar’s Role: from StanfordUniversity. Mike residesinSanJoseandheholdsamasterselectricalengineering camera development,includingforautonomousautomotiveapplications. FLIR SystemsInc.Mikecurrentlyleadsallproductmanagementforthermal ous executivetechnologyrolesatHP, AgilentTechnologies, Flexandnow Mike Walters hasspentmorethan35yearsinSiliconValley, holding vari- JOINT SESSIONS Image QualityMetrics technology atApple. Perceptivein2017,Stochinodevelopedautonomous Before starting for NASAspacecraftatStanfordandtheAustralianNationalUniversity. MIT andCaltech.Healsobuiltinstrumentalrangingtimingtechnology at anhDinphysicsforhisworkontheLIGOobservatories observatories is bringingcuttingedgetechnologyfirstpioneeredingravitationalwave Stochino is thefounderandCEOofPerceptive,acompanythat Alberto Apple, PerceptiveisreinventingsensingforAutonomy2.0. sensors atLIGO,AIsiliconGoogle,andautonomoustechnology vehicles. Basedonexperiencedevelopingsomeoftheworldmostprecise approach isneededtoenablenext-generationautonomous effective of magnitudeabovethecapabilitytoday’s availablesensors.Amore Abstract: ThesensingrequirementsofLevel4and5autonomyareorders Perceptive Stochino,FounderandCEO, Auto SensorsfortheFuture:Alberto Fellow oftheIEEE. andisa He hasbeenrecognizedforhiscontributionsintheindustry of localization,communication,andsensing.MostrecentlyVisibleSensors. Sanjai Kohlihasbeeninvolvedincreatingmultiplecompaniesthearea Sensors, Inc. Tales fromtheAutomotiveSensorTrenches: SanjaiKohli,CEO,Visible systems forbothsafetyandemissionsrelatedautomotiveapplications. ment rolesatTier 1automotivesuppliers,predominatelydevelopingsensor Stanley livedinMichiganwhereheworkedelectronicproductdevelop- automated vehicleandelectricapplications.PriortojoiningNXP, At NXP, NXPtechnologiesastheyareintegratedinto Stanleysupports Greg StanleyisafieldapplicationsengineeratNXPSemiconductors. vehicles? Howwillradarimproveintheno-too-distantfuture? IR camera,ultrasonicandlidar. Whereisradarusedtoday, includingL4 fromvisiblecamera, description ofradarandthereasonsisdifferent space. Thebasicsofautomotiveradarwillbepresented,includinga there isstillroomforsignificantadvanceswithintheautomotiveradar ofmanyautomotivesafetysystems, Abstract: Whileradarisalreadypart Semiconductors

Greg Stanley, FieldApplicationsEngineer, NXP

J oint S IQSP-166 ession Dublin (Ireland) Kingdom) and Carlos Quiroga University Javeriana,Cali(Colombia)and Benitez Restrepo Edward Fry Wednesday, 29,2020 January using amixedrealitydeviceandIwill showhowwearecurrentlyusing allows theclinicianto“lookinside” the braintoseefunctionalareas curate stimulatorpositioning.InStanfordwehavedevelopedamethod that clinician needstoaccuratelystimulatespecificbrainnetworks,requiring ac- depression andavarietyofneuropsychiatricdiseases.To the beeffective non-invasive brainstimulationtechniquethatisusedincreasinglyfortreating stimulation treatmentofdepression.Transcranial MagneticStimulationisa about aprojectonmixedrealityneuronavigationfornon-invasivebrain projectswearepursuingatIMMERSandgointodetail talk aboutdifferent guidance fortheclinicianbothsimplerandsafer. InthispresentationIwill overlaid ontherelevantanatomyinsidepatient,makingnavigation and develop techniquesthatallowtovisualizemedicalimagingdatadirectly ity (IMMERS)inStanford,weareconnectingcliniciansandengineersto inside thepatient.AtIncubatorforMedicalMixedandExtendedReal - intuitive mannerpossible,bydirectlyprojectingitontotheactualorgans anatomyinthemost solve thisissuebyallowingtovisualizetheinternal imaging datatotheactuallocationinpatient’s body. Mixedrealitycan or well-trainedcliniciantotranslatethelocationofstructuresinmedical traditionally displayedonseparate2Dscreens,itneedsanintermediary anatomyofthehumanbody.the internal Sincemedicalimagingdatais Abstract: Medicalimagingisusedextensivelyworld-widetovisualize tion treatment,ChristophLeuze,StanfordUniversity(UnitedStates) Mixed realityguidedneuronavigationfornon-invasivebrainstimula and ApplicationsXXXI. ity 2020,ImagingSensorsandSystemsStereoscopicDisplays This sessionisjointlysponsoredby:TheEngineeringRealityofVirtualReal- Regency A 8:50 –9:30am and DietmarWueller, ImageEngineeringGmbH&Co.KG(Germany) Session Chairs: KevinMatherson,MicrosoftCorporation(UnitedStates) KEYNOTE: ImagingSystemsandProcessing age qualityandrelevantmetrics,SophieTriantaphillidou ofmegapixelsensorresolutionondisplayedim- Studies ontheeffects 1 Quality awarefeatureselectionforvideoobjecttracking, 3-D deepconvolutionalneuralnetwork,RogerNieto using No referencevideoqualityassessmentwithauthenticdistortions metrics onaudio-visualcontent,HelardBecerra objectivequality ofautoencoder-based Analyzing theperformance 3:50 Andrew Hines 4:30 (United States) 4:50 4:10 (Colombia), and Pontificia UniversityJaveriana,Cali(Colombia), 1 , andChuangHsinHung

2 2 ;

Huawei (China) 2 1 1 , JoseRuiz-Munoz , RogerFigueroaQuintero University ofBrasilia(Brazil)and 3 University ofFlorida(UnitedStates)

3 , and Hernan Benitez-Restrepo , andHernan 2 ; 1 University ofWestminster (United 2 The UniversityofTexas atAustin 1 , andAlanBovik 2 1 Universidad delValle , MylèneFarias 2 University College

1 , Hernan Dario , Hernan 1 , JanSmejkal Roger Nieto J 2 oint ; 1 Pontificia 1 1 S , and ; IQSP-167 IQSP-169 IQSP-170 IQSP-168

ession ISS-189 - 1 19 , 1 ,

Joint Sessions JOINT SESSIONS

this method to perform mixed reality-guided brain stimulation experiments. University Hannover (Germany) Christoph Leuze is a research scientist in the Incubator for Medical Mixed and Extended Reality at Stanford University where he focuses on techniques Visualization Facilities Joint Session for visualization of MRI data using virtual and augmented reality devices. He published BrainVR, a virtual reality tour through his brain and is closely Session Chairs: Margaret Dolinsky, Indiana University (United States) and working with clinicians on techniques to visualize and register medical Andrew Woods, Curtin University (Australia) imaging data to the real world using optical see-through augmented reality 3:30 – 4:10 pm devices such as the Microsoft Hololens and the Magic Leap One. Prior to Grand Peninsula joining Stanford, he worked on high-resolution brain MRI measurements This session is jointly sponsored by: The Engineering Reality of Virtual Real- at the Max Planck Institute for Human Cognitive and Brain Sciences in ity 2020, and Stereoscopic Displays and Applications XXXI. Leipzig, for which he was awarded the Otto Hahn medal by the Max Planck Society for outstanding young researchers. 3:30 SD&A-265 Immersive design engineering, Bjorn Sommer, Chang Lee, and Savina Toirrisi, Royal College of Art (United Kingdom) Augmented Reality in Built Environments Joint Session 3:50 SD&A-266 Session Chairs: Raja Bala, PARC (United States) and Matthew Shreve, Palo Joint Sessions Using a random dot stereogram as a test image for 3D demonstrations, Alto Research Center (United States) Andrew Woods, Wesley Lamont, and Joshua Hollick, Curtin University 10:30 am – 12:40 pm (Australia) Cypress B This session is jointly sponsored by: The Engineering Reality of Virtual Real- ity 2020, and Imaging and Multimedia Analytics in a Web and Mobile KEYNOTE: Visualization Facilities Joint Session World 2020. Session Chairs: Margaret Dolinsky, Indiana University (United States) and 10:30 IMAWM-220 Andrew Woods, Curtin University (Australia) Augmented reality assistants for enterprise, Matthew Shreve and Shiwali 4:10 – 5:10 pm Mohan, Palo Alto Research Center (United States) Grand Peninsula D 11:00 IMAWM-221 This session is jointly sponsored by: The Engineering Reality of Virtual Real- Extra FAT: A photorealistic dataset for 6D object pose estimation, ity 2020, and Stereoscopic Displays and Applications XXXI. 1 1 2 1 Jianhang Chen , Daniel Mas Montserrat , Qian Lin , Edward Delp , and The keynote will be co-presented by Derek Van Tonder and 1 1 2 Jan Allebach ; Purdue University and HP Labs, HP Inc. (United States) Andy McCutcheon. 11:20 IMAWM-222 ERVR-295 Space and media: Augmented reality in urban environments, Luisa Social holographics: Addressing the forgotten human factor, Derek Van Caldas, University of California, Berkeley (United States) Tonder and Andy McCutcheon, Euclideon Holographics (Australia) 12:00 ERVR-223 Abstract: With all the hype and excitement surrounding Virtual and Active shooter response training environment for a building evacua- Augmented Reality, many people forget that while powerful technology tion in a collaborative virtual environment, Sharad Sharma and Sri Teja can change the way we work, the human factor seems to have been left Bodempudi, Bowie State University (United States) out of the equation for many modern-day solutions. For example, most modern Virtual Reality HMDs completely isolate the user from their external 12:20 ERVR-224 environment, causing a wide variety of problems. “See-Through” technol- Identifying anomalous behavior in a building using HoloLens for emer- ogy is still in its infancy. In this submission we argue that the importance gency response, Sharad Sharma and Sri Teja Bodempudi, Bowie State of the social factor outweighs the headlong rush towards better and more University (United States) realistic graphics, particularly in the design, planning and related engineer- ing disciplines. Large-scale design projects are never the work of a single person, but modern Virtual and Augmented Reality systems forcibly channel Psychophysics and LED Flicker Artifacts Joint Session users into single-user simulations, with only very complex multi-user solutions Session Chair: Jeffrey Mulligan, NASA Ames Research Center (United slowly becoming available. In our presentation, we will present three States) different Holographic solutions to the problems of user isolation in Virtual Reality, and discuss the benefits and downsides of each new approach. 10:50 – 11:30 am With all the hype and excitement surrounding Virtual and Augmented Regency B Reality, many people forget that while powerful technology can change This session is jointly sponsored by: Autonomous Vehicles and Machines the way we work, the human factor seems to have been left out of the 2020, and Human Vision and Electronic Imaging 2020. equation for many modern-day solutions. For example, most modern Virtual Reality HMDs completely isolate the user from their external environment, 10:50 HVEI-233 causing a wide variety of problems. “See-Through” technology is still in its Predicting visible flicker in temporally changing images,Gyorgy Denes infancy. In this submission we argue that the importance of the social factor and Rafal Mantiuk, University of Cambridge (United Kingdom) outweighs the headlong rush towards better and more realistic graphics, particularly in the design, planning and related engineering disciplines. 11:10 HVEI-234 Large-scale design projects are never the work of a single person, but Psychophysics study on LED flicker artefacts for automotive digital mir- modern Virtual and Augmented Reality systems forcibly channel users into ror replacement systems, Nicolai Behmann and Holger Blume, Leibniz

20 #EI2020 electronicimaging.org electronicimaging.org #EI2020 electronicimaging.org recently named‘BestTechnology Company’in2019. Aerospace &DefenceforBrisbanebasedEuclideonHolographics, He istheauthoroftwobooksandcurrentlyGlobalSalesManager, unique patented‘HardIcedTea,’ whichwassubsequently soldin2013. pioneered thefirstnewalcoholbeveragescategoryin50yearswithhis names inmusicandfilmbeforemovingtoAustralia2001.In2007,he recognisable celebritybodyguards,workingwithsomeofthebiggest He dove-tailedhisspecialisedskill-settobecomeoneoftheworld’smost in1990. into commercialaviationasapilot,afterleavingthemilitary SpecialForcesCommandowhotransitioned Andy McCutcheonisaformer roboticsplatforms. of arangedifferent the “Wildcat”roboticssoftwareframework,whichfunctionsas“brains” of Robotics andAutonomousSystemsGrouptoproduceaWindowsport newImmersiveTrainingtionary andaCSIROData61 softwareplatform, with severalprojects,includingaSafeSitePtyLtddevelopingrevolu- ing VRtechnologytourbandevelopment.CurrentlyVan Tonder isinvolved ment to technical business development. In 2015 he joined Taylors—apply- 2012 hejoinedEuclideon,transitioningfromleadingsoftwaredevelop- an iOSgamecalled“RobotsCan’t Jump”writtenfromscratchinC++.In Brisbane. In2010,hefoundedBaysideGamestopursuedevelopmentof Studios, andalectureringameprogrammingatGriffithUniversity Studios, aseniorengineprogrammerwithTantalus MediaandthenSega development withI-Imagine,thenwasaseniordeveloperPandemic in BrisbaneAustralia.Van Tonder beganhiscareerinconsolegame B2B productsalesandprojectmanagementwithEuclideonHolographics Derek Van Tonder isseniorbusinessdevelopmentmanagerspecializingin and discussthebenefitsdownsidesofeachnewapproach. Reality,Holographic solutionstotheproblemsofuserisolationinVirtual becoming available.Inourpresentation,wewillpresentthreedifferent complexmulti-usersolutionsslowly single-user simulations,withonlyvery JOINT SESSIONS 21

Joint Sessions Where Industry and Academia Meet to discuss Imaging Across Applications

NOTES

22 #EI2020 electronicimaging.org Schedule

PAPER SCHEDULE Where Industry and Academia Meet BY DAY/TIME to discuss Imaging Across Applications

PAPER SCHEDULE BY DAY/TIME Monday, January 27, 2020 8:45 am Key AVM-001 LED flicker measurement: Regency B Challenges, considerations and EI2020 Theme: Automotive Imaging uwpdates from IEEE P2020 working group (Deegan) EI2020 Theme: AR/VR Future Technology

8:50 am EI2020 Theme: Frontiers in Computational Imaging 3DMP-002 Deadlift recognition and Grand application based on multiple Peninsula A EI2020 Virtual Track: 3D Imaging modalities using recurrent neural network (Chang) EI2020 Virtual Track: Deep Learning

COIMG-005 Plug-and-play amP for image Grand EI2020 Virtual Track: Medical Imaging recovery with Fourier-structured Peninsula operators (Schniter) B/C EI2020 Virtual Track: Remote Sensing HVEI-009 Stereoscopic 3D optic Grand flow distortions caused by Peninsula D EI 2020 Virtual Track: Security mismatches between image acquisition and display parameters (JIST-first) (Hwang)

IRIACV-013 Passive infrared markers for Regency A IQSP-018 A new dimension in geometric Regency B indoor robotic positioning and camera calibration (Wueller) navigation (Chen) IRIACV-015 Creation of a fusion Regency A 8:55 am image obtained in various electromagnetic ranges used MWSF-017 Watermarking to turn plastic Cypress A in industrial robotic systems Schedule packaging from waste to (Voronin) asset through improved optical tagging (Logan) MAAP-020 Capturing and 3D rendering of Regency C optical behavior: The physical 9:10 am approach to realism (Ritz)

3DMP-003 Learning a CNN on multiple Grand SD&A-011 Visual fatigue assessment based Grand sclerosis lesion segmentation Peninsula A on multitask learning (JIST-first) Peninsula D with self-supervision (Fenneteau) (Wang) COIMG-006 A splitting-based iterative Grand 9:50 am algorithm for GPU-accelerated Peninsula statistical dual-energy x-ray CT B/C AVM-019 Automotive image quality Regency B reconstruction (Li) concepts for the next SAE levels: Color separation HVEI-010 The impact of radial distortions Grand probability and contrast in VR headsets on perceived Peninsula D detection probability (Geese) surface slant (JIST-first) (Tong) COIMG-008 Integrating learned data Grand IRIACV-014 Improving multimodal Regency A and image models through Peninsula

localization through self- consensus equilibrium (Karl) B/C supervision (Relyea) IRIACV-016 Locating mechanical switches Regency A 9:30 am using RGB-D sensor mounted on a disaster response robot 3DMP-004 Action recognition using pose Grand (Kanda) estimation with an artificial 3D Peninsula A coordinates and CNN (Kim) SD&A-012 Depth sensitivity investigation Grand on multi-view glasses-free 3D Peninsula D COIMG-007 Proximal Newton Methods for Grand display (Zhang) x-ray imaging with non-smooth Peninsula regularization (Ge) B/C

electronicimaging.org #EI2020 23 SecuritySecurity Regency B Grand Peninsula B/C Harbour A/B Regency A Regency C Grand Peninsula D Cypress A Grand Peninsula A Regency B Grand Peninsula B/C Harbour A/B Regency A Regency C Grand Peninsula D Cypress A Grand Peninsula D

#EI2020 electronicimaging.org Remote Remote SensingSensing Medical Medical Computational imaging in infrared sensing of the atmosphere (Milstein) CNN-based classification of degraded images (Endo) Real-time small-object change detection from ground vehicles using a Siamese convolutional neural network (JIST-first) (Klomp) Caustics and translucency perception (Gigilashvili) Application of a high resolution autostereoscopic display for medical purposes (Higuchi) Prediction and fast estimation Prediction and fast estimation of contrast detection probability (Jenkin) Generalized tensor learning with applications to 4D-STEM image denoising (Zhang) Adaptive context encoding module for semantic segmentation (Wang) Detection and characterization of rumble strips in roadway video logs (Aykac) BTF image recovery based on U-Net and texture interpolation (Tada) HoloExtension - AI-based 2D backwards compatible super- multiview display technology (Naske) Estimating watermark synchronization signal using partial pixel least squares (Lyons) Object detection using an ideal observer (Kane) model Signal rich art: Improvements and extensions (Kamath) Quality assessment for 3D reconstruction of building interiors (Raman Kumar) Morpholo: A hologram Morpholo: A (Canessa) generator algorithm Deep Deep LearningLearning COIMG-046 IPAS-028 IRIACV-051 MAAP-033 SD&A-055 AVM-040 COIMG-045 IPAS-027 IRIACV-050 MAAP-032 SD&A-054 11:45 am MWSF-024 11:50 am AVM-041 11:20 am MWSF-023 11:30 am 3DMP-036 SD&A-053

3D3D Cypress A Grand Peninsula A Grand Peninsula B/C Harbour A/B Regency B Regency A Regency C Grand Peninsula A Regency B Grand Peninsula B/C Harbour A/B Regency A Regency C Cypress A

ComputationalComputational ominaga)

atanabe) AR/VR AR/VR -first) (W A joint reconstruction and lambda tomography regularization technique for energy-resolved x-ray imaging (Webber) Real-world fence removal from a single-image via deep neural network (Matsui) Demonstration of a virtual reality driving simulation platform (Wang) Rare-class extraction using cascaded pretrained networks applied to crane classification (Klomp) Appearance reproduction of material surface with strong specular reflection (T Describing and sampling the Describing and sampling LED flicker signal (Sumner) Learned priors for the joint ptycho-tomography reconstruction (Aslan) Pruning neural networks via gradient information (Molchanov) A review and quantitative evaluation of small face detectors in deep learning (Xionog) One-shot multi-angle measurement device for evaluating the sparkle impression (JIST Watermarking in deep neural Watermarking networks via error back- propagation (Wu) 3D shape estimation for smooth surfaces using grid-like structured light patterns (Wang) Reducing invertible embedding distortion matching using graph model (Wu) precision depth Variable encoding for 3D range geometry compression (Finley) Automotive Automotive COIMG-044 IPAS-026 IQSP-039 IRIACV-049 MAAP-031 AVM-038 COIMG-043 IPAS-025 IRIACV-048 MAAP-030 24 MWSF-022 11:10 am 3DMP-035 10:55 am 10:30 am MWSF-021 10:50 am 3DMP-034 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER

Schedule electronicimaging.org #EI2020 electronicimaging.org PAPER SCHEDULEBYDAY/TIME AVM-057 3:30 pm PLENARY-056 2:00 pm AVM-042 12:10 pm SD&A-065 SD&A-403 IRIACV-052 IPAS-029 COIMG-047 MWSF-075 MAAP-060 IRIACV-070 IQSP-066 IPAS-062 COIMG-058 Automotive Automotive Results (Sargent) The automateddrivewest: (Bouman) the firstpictureofablackhole Imaging theunseen:Taking probability (JIST-first)(Jenkin) index andcontrastdetection Comparison ofdetectability success (Paul) issues andtechniquesfor High framerate3D-challenges, modulator (Favalora) electroholographic optical Monolithic surface-emitting resolution withCTCloss(Bilkova) Perceptual licenseplatesuper- photos (Voronin) and removingonarchival approach fordefectdetection A deeplearning-based computational imaging(Sun) optimalsamplingfor Learning compression quality(Y scalable withrespectto JPEG steganalysisdetectors aging (Sidorov) wood causedby(accelerated) appearance ofpolychrome Changes inthevisual control (Tokola) and applicationfortraf from overheadcameraimagery Estimating vehiclefueleconomy sourcing framework(Irshad) enhanced imagesusingacrowd- Perceptual qualityassessmentof (Voronin) framework using aquaternion medical imagesegmentation An activecontourmodelfor nanoscopy (Huang) with live-celland3Dfluorescence Revealing subcellularstructures AR/VR AR/VR

ousfi) fic Computational

Peninsula D Grand Cypress A Peninsula D Grand Regency A A/B Harbour B/C Peninsula Grand Regency B Peninsula D Grand Regency A A/B Harbour B/C Peninsula Grand Regency B Regency C Peninsula A Grand 3D

MWSF-076 3:55 pm COIMG-059 3:50 pm MWSF-077 4:20 pm AVM-079 4:10 pm IRIACV-072 IPAS-064 HVEI-068 MAAP-061 IRIACV-071 IQSP-067 IPAS-063 Learning Deep (Hadar) convolutional neuralnetworks over noisychannelsusing spatial-domain steganography Detection ofmalicious estimation (Galvis) system for3Dobjectshape Single-shot codeddiffraction Hosseini) Morshedi computation (Darvish factor estimationforPRNU Semi-blind imageresampling (Yogamani) for autonomousdriving detection andimagerestoration VisibilityNet: Cameravisibility videos (Ullah) Crowd congestiondetectionin density maps(Jaber) featuresandcell- deep-learning lung cancersubtypingusing Pathology image-based assessment usingVMAF(Kondi) for perceptualvideoquality Improved temporalpooling (Takahashi)deep learning renewing oldobjectsusing Image processingmethodfor (Kapeller) materials positions fordifferent Optimization oflightsource Tailored photometricstereo: (Chubarau) conditions anddisplaysystems assessment forvariousviewing Perceptual imagequality (Zerva) spatiotemporal coherence compression efficiencyusing Improving 3Dmedicalimage Medical Remote Sensing Remote

Cypress A Regency A A/B Harbour Peninsula A Grand Regency B Cypress A Regency C Regency A Peninsula A Grand A/B Harbour B/C Peninsula Grand Security 25

Schedule SecuritySecurity Harbour A/B Regency A Regency C Grand Peninsula D Regency C Grand Peninsula A Cypress B Harbour A/B Regency A Grand Peninsula D Regency B Grand Peninsula B/C Grand Peninsula A Cypress B

#EI2020 electronicimaging.org Remote Remote SensingSensing Medical Medical Indirect time-of-flight CMOS image sensor using 4-tap charge-modulation and range-shifting multi-zone technique (Mars) Visible to near infrared reflectance hyperspectral images dataset for image sensors design (Clouet) Full-parallax 3D display using time-multiplexing projection technology (Omura) Quality of experience assessment of 360-degree video (Brunnström) in Small object bird detection infrared drone videos using mask R-CNN deep learning (Palaniappan) Introducing scene understanding using to person re-identification a spatio-temporal multi-camera model (Liu) A 4-tap global shutter pixel with enhanced IR sensitivity for VGA time-of-flight CMOS image sensors (Jung) Projection type 3D display using spinning screen (Hayakawa) Spectral shearing LADAR (Stafford) Open software framework for collaborative development of no reference image and video quality metrics (Pinson) High-quality multispectral image generation using conditional GANs (Soni) Use of retroreflective markers for object detection in harsh sensing conditions (Gotchev) VRUNet: Multitask learning model for intent prediction of vulnerable road users (Ranga) SpectraNet: A deep model for SpectraNet: measurement skin oxygenation (Bauer) from multi-spectral data Deep Deep LearningLearning ISS-104 MAAP-106 SD&A-100 HVEI-091 IMAWM-085 IPAS-095 ISS-103 SD&A-099 COIMG-111 HVEI-092 IMAWM-086 IPAS-096 9:30 am AVM-109 9:10 am COLOR-083

3D3D Regency B Grand Peninsula B/C Grand Peninsula A Cypress B Harbour A/B Regency C Grand Peninsula D Cypress A Regency B Grand Peninsula A Regency A Cypress A Regency B

ComputationalComputational

AR/VR AR/VR Computation and photography: became phone mobile the How a camera (Milanfar) The Video Quality Experts Group - Current activities and research (Brunnström) A new training model for object detection in aerial images (You) cascading algorithm Two-step for camera-based night fire detection (Park) Beyond color correction: Skin color estimation in the wild through deep learning (Kips) Dynamic zero-parallax-setting techniques for multi-view autostereoscopic display (Jiao) Quality assessment protocols Quality assessment protocols for omnidirectional video quality evaluation (Singla) Head-based tracking (Ullah) Technology in context: Solutions in context: Technology to foreign propaganda and disinformation (Stewart) A CNN-based correlation A CNN-based correlation predictor for PRNU-based image manipulation localization (Chakraborty) Single image haze removal using multiple scattering model for road scenes (Kim) Regaining sight of humanity on the roadway towards automation (López-González) Sun-glare detection using late Sun-glare detection and image fusion of CNN (Yahiaoui) processing operators Automotive Automotive COIMG-089 HVEI-090 IMAWM-084 IPAS-094 MAAP-082 SD&A-098 HVEI-069 IRIACV-074 26 MWSF-102 9:00 am Tuesday, January 28, 2020 Tuesday, 8:50 am AVM-088 4:50 pm AVM-081 4:45 pm MWSF-078 4:30 pm AVM-080 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER

Schedule electronicimaging.org #EI2020 electronicimaging.org IMAWM-114 10:30 am AVM-110 10:10 am AVM-108 9:50 am PAPER SCHEDULEBYDAY/TIME SD&A-101 MAAP-107 ISS-105 IPAS-097 IMAWM-087 HVEI-093 COIMG-112 MWSF-116 ISS-115 COIMG-113 Automotive Automotive (Savakis) directional changedetection convolutional architecturefor LambdaNet: Afully estimation (Ewaisha) for drivergazeandheadpose End-to-end multitasklearning neural networks(Lee) and shallowconvolutional using Siameserandomforests Multiple pedestriantracking multiplexing (Yamauchi) wavelength division Light fielddisplayusing Shahmirzadi) and watercolorpaints(Asadi A multispectraldatasetofoil (Shin) pixelapertures offset CMOS imagesensorwith the pixelheightinmonochrome depth extractionbydecreasing Improving thedisparityfor GoogLeNet (Yang) saliency modeland approach usingmultiscale A novelimagerecognition monitoring (Aspiras) pipeline right-of-wayautomated network architectureforoil/gas Deep RAM:neural dataset (FotioTiotsop) measures throughtheITS4S objective videoquality accuracy offullreference Investigating prediction scattering samples(Waller) microscopy withmultiple- 3D computationalphase (Zmudzinski) compression ghostartifacts H.264 videodatausing Detecting “deepfakes”in (Millet) imagesensors 3D-IC smart solutions (Spencer) turbulence andemerging Imaging throughdeep AR/VR AR/VR

Computational

Peninsula D Grand Regency C Regency A A/B Harbour Cypress B Peninsula A Grand B/C Peninsula Grand Regency B Cypress A Cypress B B/C Peninsula Grand Regency B Regency A 3D

AVM-124 10:50 am MAAP-396 10:40 am COIMG-126 11:10 am MWSF-117 10:55 am COIMG-125 SD&A-139 MAAP-120 ISS-143 IPAS-134 HVEI-129 SD&A-138 IPAS-133 HVEI-128 FAIS-127 Learning Deep computer visionaccuracy(Taylor) hyperparameters toimprove Automated optimizationofISP BRDF, andbeyond(Hardeberg) reproduction toappearance, From colorandspectral 3D ranging(Wagner) Intensity interferometry-based (Alattar) videos usingwatermarking problem ofdeepfakenews A systemformitigatingthe conditions (Watnik) highly attenuatingfog Holographic imagingthrough stereoscopic panorama(Wang) Flow mapguidedcorrectionto color 3Dprinters(Tastl) system forHP’s JetFusion580 HP 3Dcolorgamut–Areference trench capacitors(Fujihara) stage lateraloverflowintegration CMOS imagesensorwithtwo- linear responsesingleexposure An over120dBdynamicrange RGBW colorfilterarray(Jeong) for aperiodicwhite-dominant Color interpolationalgorithm tracking quality(Blakey) between 2Dand3Dgaze demonstrate thegrowingdivide A comparativestudyto reconstruction system(Kapeller) evaluation ofamulti-stereo3D Objective andsubjective adjustable blockspace(Yoon) distancewith Bhattacharyya Edge detectionusingthe lighting conditions(LeCallet) in mixedrealityfordifferent Quality evaluationof3Dobjects sensing (vanAardt) of UASandsatelliteremote and temporalscales-Theroles Managing cropsacrossspatial Medical Remote Sensing Remote

Regency C Regency B Peninsula D Grand Regency C Regency A A/B Harbour Peninsula A Grand B/C Peninsula Grand Cypress A Peninsula D Grand A/B Harbour Peninsula A Grand Cypress B B/C Peninsula Grand Security 27

Schedule SecuritySecurity Grand Peninsula D Regency A Grand Peninsula D Regency B Grand Peninsula B/C Regency C Grand Peninsula A Harbour A/B Regency C Grand Peninsula A Harbour A/B Grand Peninsula D

#EI2020 electronicimaging.org Remote Remote SensingSensing Medical Medical Processing legacy underwater stereophotography for new applications (Woods) 3D DiffuserCam: Computational microscopy with a lensless imager (Waller) Colorimetrical performance estimation of a reference hyperspectral microscope for color tissue slides assessment (Lemaillet) Characterization of user generated content for perceptually-optimized video compression: Challenges, observations and perspectives (Ling) An expandable image database for evaluation of full- reference image visual quality metrics (Egiazarian) Multifunctional stereoscopic machine vision system with multiple 3D outputs (Ezhov) Imaging in the autonomous vehicle revolution (Hicok) A simple model for test subject behavior in subjective experiments (Li) Per clip Lagrangian multiplier optimisation for HEVC (Ringis) in Event threshold modulation dynamic vision spiking imagers for data throughput reduction (Cubero) methods for Validation geometric camera calibration (Romanczyk) Parameter estimation of PuRet Parameter estimation managing algorithm for of material objects appearance (JIST-first) on display devices (Tanaka) Deep Deep LearningLearning SD&A-141 COIMG-152 COLOR-123 HVEI-132 IPAS-137 SD&A-142 2:00 pm PLENARY-153 HVEI-131 IPAS-136 ISS-145 12:10 pm AVM-150 COLOR-122

3D3D Regency A Grand Peninsula D Cypress B Cypress A Regency B Grand Peninsula B/C Cypress A Regency B Grand Peninsula B/C Regency C Grand Peninsula A Harbour A/B

ComputationalComputational

AR/VR AR/VR Spatial distance-based interpolation algorithm for computer generated 2D+Z images (Jiao) Multi-wavelength remote digital holography: Seeing the unseen by imaging off scattering surfaces and imaging through scattering media (Willomitzer) Constrained phase retrieval Constrained phase retrieval using a non-linear forward model for x-ray phase contrast tomography (Mohan) Spectral reproduction: Drivers, use cases, and workflow (Habib) Predicting single observer’s votes from objective measures using neural networks (Fotio Tiotsop) Computational color constancy under multiple light sources (Han) Planar microlenses for near infrared CMOS image sensors (Dilhan) Practical applications and sensing remote trends for UAV in agriculture (Lang) The effect of class definitions on the transferability of adversarial attacks against forensic CNNs (Zhao) Simulating tests to test simulation (Braun) Checking the integrity of Checking the signed thumbnail images with images (Berchtold) Using the dead leaves pattern for more than spatial frequency response measurements (Artmann) Automotive Automotive 28 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER SD&A-140 COIMG-147 COIMG-146 COLOR-121 HVEI-130 IPAS-135 ISS-144 11:50 am AVM-149 FAIS-151 11:45 am MWSF-119 11:40 am 11:20 am MWSF-118 11:30 am AVM-148

Schedule electronicimaging.org #EI2020 electronicimaging.org PAPER SCHEDULEBYDAY/TIME COIMG-157 3:50 pm COIMG-156 3:30 pm SD&A-154 MWSF-215 IQSP-166 IPAS-177 IMAWM-183 FAIS-171 COLOR-161 FAIS-172 COLOR-162 IPAS-178 Automotive Automotive acquisition (Preza) microscopy withreduceddata- structured illumination Recent advancesin3D radiation (Gursoy) imaging withsynchrotron Computational nanoscale (Routhier) display ofstereoscopiccontent and universalcapture comfortable for proportional, CubicSpace: Areliablemodel (Reinders) in cameradeviceidentification Score-based likelihoodratios quality measurements(Baudin) DXOMARK objectivevideo (Trongtirakul) and satelliteimages(JIST image enhancementofaerials Fractional contraststretchingfor products (Invited)(Fageth) more interestingprinted Actual usageofAItogenerate networks (Tsagkatakis) with convolutionalneural analysis ofmultispectralimages Fish freshnessestimationthough evaluation (Xiong) associated psychophysical image segmentationand Automated multicoloredfabric (Ananthanarayana) sensors andedgeprocessors detection withCMOSimage freshness classificationand basedfruit Deep learning (Choudhury) and WCGqualityassessment recent colormetricsforHDR with S-CIELABandother of ICtCpcolorrepresentation Comparing aspatialextension filtering (Gadgil) iterative adaptivesparse Image debandingusing AR/VR AR/VR

-first) Computational

Cypress A Peninsula A Grand A/B Harbour Cypress B Regency B Regency C B/C Peninsula Grand B/C Peninsula Grand Peninsula D Grand Regency C A/B Harbour Regency B 3D

COIMG-158 4:10 pm IMAWM-184 4:00 pm MWSF-216 3:55 pm IMAWM-185 4:20 pm SD&A-155 IQSP-167 MWSF-217 SD&A-400 IQSP-168 IPAS-179 FAIS-173 COLOR-163 Learning Deep microscopy (Singer) cryo-electron single-particle Method ofmomentsfor defect grading(Chen) forprinted mottle Deep learning onto photographs(Demaree) forcibly conveyinginformation Amethodfor watermarks: Camera unavoidablescene localization (Mao) network forfaciallandmark A local-globalaggregate autostereoscopic prints(Lin) based onDSLRcamerasfor A cameraarraysystem content (Becerra) quality metricsonaudio-visual autoencoder-based objective of Analyzing theperformance model identification(Fang) MRI scannermanufacturerand approachto A deeplearning displays (Hoffmeister) multiple viewerstereoscopic Challenges andsolutionsfor (Nieto) convolutional neuralnetwork using3-D deep distortions assessment withauthentic No referencevideoquality (Egiazarian) (CCDBM3D) algorithm Cube complex-domainBM3D domain imagedenoising: Hyperspectral complex- (Carpenter) color analysisoftomatoes imaging for Smartphone colorimetric (Finlayson) filter tomakeacameramore method forfindingacolor An improvedoptimisation Medical Remote Sensing Remote

Peninsula D Grand Peninsula A Grand Cypress A Cypress B Peninsula D Grand Peninsula A Grand A/B Harbour Regency B Regency C B/C Peninsula Grand Cypress B Cypress A Security 29

Schedule SecuritySecurity Cypress A Regency B Grand Peninsula B/C Cypress B Regency B Harbour A/B Cypress B Regency B Grand Peninsula B/C Regency C Harbour A/B Regency A

#EI2020 electronicimaging.org Remote Remote SensingSensing Medical Medical Computational pipeline and optimization for automatic multimodal reconstruction of marmoset brain histology (Lee) OEC-CNN: A simple method OEC-CNN: A simple method in for over- correction photographs (Chesnokov) Model comparison metrics require adaptive correction if parameters are discretized: Application to a transient neurotransmitter signal in PET data (Liu) New results for aperiodic, dispersed-dot halftoning (Liu) Camera vs smartphone: How electronic imaging changed the game (Guichard) Mixed reality guided neuronavigation for non- invasive brain stimulation treatment (Leuze) Digital vs physical: A watershed in document security (Lancaster) A low-cost approach to data collection and processing for autonomous vehicles with a realistic virtual environment (Fernandes) Gun source and muzzle head Gun source detection (Zhou) A survey on deep learning in food imaging applications (Jaber) Semi-supervised multi-task network for image aesthetic assessment (Xiang) A tool for semi-automatic ground truth annotation of traffic videos (Groh) Deep Deep LearningLearning COIMG-192 IPAS-182 COIMG-191 COLOR-195 IQSP-190 ISS-189 9:00 am MWSF-204 9:10 am AVM-201 5:00 pm IMAWM-187 5:10 pm FAIS-176 5:20 pm IMAWM-188 January 29, 2020 Wednesday, 8:50 am AVM-200

3D3D Cypress A Grand Peninsula B/C Regency C Regency B Harbour A/B Grand Peninsula A Regency B Harbour A/B Grand Peninsula A Cypress B Grand Peninsula B/C Regency C

ComputationalComputational

AR/VR AR/VR Teaching color and color Teaching science: The experience of an international Master course (Rossi) technology imaging High-speed for online monitoring of food safety and quality attributes: Research trends and challenges (Yoon) Non-blind image deconvolution based on “ringing” removal using convolutional neural network (Kudo) Studies on the effects of megapixel sensor resolution on displayed image quality and relevant metrics (Triantaphillidou) Color restoration of multispectral images: Near- infrared (NIR) filter-to-color (RGB) image (Agaian) Quality aware feature selection (Nieto) tracking object video for Random spray Retinex extensions considering ROI -first) and eye movements (JIST (Tanaka) activity Cattle identification and recognition by surveillance camera (Guan) 3D and 4D computational imaging of molecular orientation with multiview polarized fluorescence microscopy (Chandler) The blessing and the curse of the noise behind facial landmark annotations (Xiang) Motion vector based robust video hash (Liu) Computational imaging Computational electron in transmission microscopy: Atomic electron contrast tomography and phase imaging (Ophus) Automotive Automotive COLOR-165 FAIS-175 IPAS-181 IQSP-170 IPAS-180 IQSP-169 COLOR-164 FAIS-174 30 4:50 pm COIMG-160 4:40 pm IMAWM-186 4:45 pm MWSF-218 4:30 pm COIMG-159 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER

Schedule electronicimaging.org #EI2020 electronicimaging.org AVM-203 9:50 am AVM-202 9:30 am IMAWM-211 HVEI-208 COLOR-196 ISS-212 HVEI-209 COLOR-197 COIMG-193 MOBMU-205 HVEI-210 COLOR-198 COIMG-194 MOBMU-206 PAPER SCHEDULEBYDAY/TIME Automotive Automotive nighttime trafficscenes(Unger) selection forobjectdetectionin A studyontrainingdata (Iacomussi) driver assistancesystems Metrology impactofadvanced (Jain) Health surveillance lightness perception(Rudd) in and scrambled-Gelbeffects accounts forthestaircase-Gelb Neural edgeintegrationmodel (Zhao) 3D surface alignment andassessmenton Data bearinghalftoneimage screening (Farrell) systems fororalcancer Soft-prototyping imaging composition (JPI-first)(Pappas) on theperceptionofcolor Influence oftexturestructure (Reed) watermark for useinabinary optimized pairofspotcolors measurements toselectan visibility Using watermark therapy (Medrano) ratio estimationforproton more accuratestoppingpower Model-based approachto in aVFXenvironment(Hasche) modification transforms(LMTs) Strategies ofusingACESlook contrast sensitivity(Mulligan) methods forassessmentof Evaluation oftablet-based channel (Ulichney) Rendering dataintheblue gated PET(Li) reconstruction forrespiratory regularized image based Deep learning technologies (Hasche) room usingfilmcompositing 15:1-content foraconference Creating highresolution360° AR/VR AR/VR

Computational

Peninsula A Grand Regency C Peninsula A Grand Regency C B/C Peninsula Grand Regency B A/B Sandpebble Cypress B Peninsula A Grand Regency C B/C Peninsula Grand Regency B A/B Sandpebble Regency A 3D

MOBMU-207 ISS-213 IQSP-214 IMAWM-220 10:30 am MOBMU-232 ISS-226 IQSP-239 HVEI-233 COLOR-235 COIMG-247 10:50 am SD&A-243 MWSF-398 ISS-225 Learning Deep (Berchtold) polychrome 2Dbarcode JAB code-Aversatile extension (Riza) camera lineardynamicrange image processingtechniquefor minimalistic multi-exposure Calibration empowered representations (Choudhury) Wide ColorGamut(WCG) Dynamic Range(HDR)and quality metricsinrecentHigh Comparing commonstillimage enterprise (Shreve) Augmented realityassistantsfor (Creutzburg) basis forsecurityawareness and personalitytypesasa engineering -Personalitytraits The humanfactorandsocial linear dynamicrange(Riza) over90dBscene recovery robust lowcontrastimage camera-based CAOS smart scenes (JIST-first)(Fry) power spectrafromnatural transfer functionsandnoise Validation ofmodulation (Denes) temporally changingimages Predicting visibleflickerin about thecolorred(Ichihara) in feelings Individual differences (Niezgoda) microstructure synthesis regularization fordigital incorporating physics-based Adversarial training (Holliman) in complexenvironments display ofvisualentropyglyphs Evaluating thestereoscopic documents (Hodgson) systems forsecure Smartphone (Piadyk) a lightfieldbasedapparatus scattering acquisitionthrough Anisotropic subsurface Medical Remote Sensing Remote

Regency A A/B Harbour Regency A A/B Harbour Regency B Regency C B/C Peninsula Grand Peninsula D Grand A/B Sandpebble Cypress A Regency A Cypress B A/B Sandpebble Security 31

Schedule SecuritySecurity Sandpebble A/B Grand Peninsula D Cypress B Sandpebble A/B Grand Peninsula D Cypress A Regency B Grand Peninsula B/C Regency C Harbour A/B Regency A Regency C Harbour A/B Regency A

#EI2020 electronicimaging.org Remote Remote SensingSensing Medical Medical The single image stereoscopic auto-pseudogram – Classification and theory (Benoit) Visual quality in VR head Visual quality in VR head mounted device: Lessons learned professional making headsets (Mendiburu) Void detection and fiber extraction for statistical characterization of fiber- reinforced polymers (Aguilar Herrera) Psychophysical evaluation of grey scale functions performance (Baah) Correcting misleading image quality measurements (Koren) Expanding dynamic range in a single-shot image through a sparse grid of low exposure pixels (Eisemann) Towards sector-specific security operation (Creutzburg) Active shooter response training environment for a building evacuation in a collaborative virtual environment (Sharma) LiDAR-camera fusion for 3D object detection (Bhanushali) High-entropy optically variable device characterization – Facilitating multimodal authentication and capture of deep learning data (Lindstrand) Daltonization by spectral Daltonization filtering (Green) Camera system performanceCamera system natural scenes derived from (van Zwanenberg) Characterization of camera shake (Dietz) critical Investigation of risks for infrastructures due to the exposure of SCADA systems the and industrial controls on Internet based on the search engine Shodan (Creutzburg) Deep Deep LearningLearning SD&A-246 SD&A-245 COIMG-250 COLOR-238 IQSP-242 ISS-229 MOBMU-254 ERVR-223 11:50 am AVM-257 12:00 pm 11:45 am MWSF-219 COLOR-237 IQSP-241 ISS-228 MOBMU-253

3D3D Cypress B Regency A Sandpebble A/B Grand Peninsula D Cypress B Cypress A Regency B Grand Peninsula B/C Cypress A Grand Peninsula B/C Regency C Regency B Harbour A/B

V ComputationalComputational

AR/VR AR/VR Measuring IT security, Measuring IT security, compliance, and digital sovereignty within small and medium-sized IT enterprises (Johannsen) Evaluating user experience of 180 and 360 degree images (Banchi) Physical object security title TBA (Sharma) Multi-resolution data fusion for super resolution imaging of biological materials (Reid) before and after cataract surgery: A study of color constancy and discrimination (McCann) Psychophysics study on LED flicker artefacts for automotive digital mirror replacement systems (Behmann) Application of ISO standard methods to optical design for image capture (Burns) - A HDR spherical TunnelCam camera array for structural integrity assessments of dam interiors (Meyer) Space and media: Augmented reality in urban environments (Caldas) Multi-sensor fusion in dynamic environments using evidential grid mapping (Godaliyadda) Crystallographic symmetry for data augmentation in detecting dendrite cores (Fu) Embedding data in the blue Embedding channel* (Ulichney) A photorealistic Extra FAT: dataset for 6D object pose estimation (Chen) Automotive Automotive MOBMU-252 SD&A-244 MWSF-399 COIMG-249 COLOR-236 HVEI-234 IQSP-240 ISS-227 32 11:30 am AVM-255 11:20 am IMAWM-222 11:10 am COIMG-248 10:55 am MWSF-397 11:00 am IMAWM-221 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER

Schedule electronicimaging.org #EI2020 electronicimaging.org PAPER SCHEDULEBYDAY/TIME AVM-262 3:30 pm PLENARY-261 2:00 pm ISS-231 12:30 pm COLOR-260 12:20 pm AVM-258 12:10 pm MOBMU-402 ISS-230 COLOR-259 COIMG-251 HVEI-267 COLOR-279 COIMG-263 ERVR-224 Automotive Automotive Deep imageprocessing(Koltun) spatial computing(Lanman) computational displaysfor Quality screentime:Leveraging systems (Leñero-Bardallo) control ofspacenavigation Sun trackersensorforattitude devices (Cwierz) reality vision testingusingvirtual computing technicsforcolor Application ofspectral (Feller) autonomous vehiclecontrol Active stereovisionforprecise materials (JIST and deployableapproachfor intelligent, economical,scalable, microcomputer clusteringusedfor (SSamSS) -Microcomputerand System Monitoring Surveillance Situational StrategicAwareness first) (Kim) submicron imagesensors(JIST- Deep imagedemosaicingfor simulations (Díaz-Barrancas) textures appliedtolighting realitythroughspectral virtual Visual fidelityimprovementin electron microscopy(Voyles) inhighresolution deep learning structure optimization,and Applications ofdenoising, distribution (Van Zuijlen) A studyofpostureandcolor inpaintedfaces: differences Conventions andtemporal (McCann) causes increaseddarkness Increases inscatteredlight materials (Li) classifying similardiffuse Mueller matriximagingfor emergency response(Sharma) in abuildingusingHoloLensfor Identifying anomalousbehavior AR/VR AR/VR -first) (Maldonado)

Computational

Peninsula D Grand A/B Sandpebble Regency A Regency C B/C Peninsula Grand Regency B Regency B Regency A Cypress B Regency C Peninsula A Grand Regency C B/C Peninsula Grand 3D

3:50 pm HVEI-268 COLOR-280 COIMG-264 IQSP-284 IMAWM-269 SD&A-266 MOBMU-276 ISS-273 IQSP-285 IMAWM-270 SD&A-265 MWSF-289 MOBMU-275 ISS-272 Learning Deep human gait(JPI-pending)(Pelah) through genderrecognitionfrom perception: Acomparativestudy Biological andbiomimetic Do youseewhatIsee?(Green) (Aguilar) behavior inmaterialsdata Modeling multivariatetail 360-degree videos(Janatra) objective qualityassessmentof Subjective andviewport-based with amobiledevice(Shankar) Identification ofutilityimages demonstrations (Woods) as atestimagefor3D Using arandomdotstereogram devices (Creutzburg) home consumer marketsmart and app-controlledIoT-based checklist ofWi-Ficonnected New methodologyand (Kokado) separation path interference sub-clock shiftingformulti- pulse-TOF depthimagingwith Single-shot multi-frequency encoded 360°video(Farias) tile decodingtimeofHEVC- Statistical characterizationof analysis duringfloods(Ullah) deep featuresforroadpassibility PSO andgeneticmodelingof (Sommer) Immersive designengineering steganography (Butora) modulation forside-informed cost Minimum perturbation lightbulbs(Creutzburg) smart IoT-based consumermarket connected andapp-controlled investigation ofWi-Fi Security andprivacy (Inoue) accelerated carrierresponse charge-modulation pixelswith flight imagesensorusing4-tap basedtime-of- A short-pulse Medical Remote Sensing Remote

Regency C B/C Peninsula Grand Cypress B Peninsula D Grand A/B Sandpebble Regency A A/B Harbour Cypress B Peninsula A Grand Peninsula D Grand Cypress A A/B Sandpebble Regency A A/B Harbour Security 33

Schedule SecuritySecurity Cypress B Sandpebble A/B Regency C Cypress B Harbour A/B Sandpebble A/B Cypress A Regency B Regency C Cypress B Harbour A/B Sandpebble A/B Regency B

#EI2020 electronicimaging.org Remote Remote SensingSensing Medical Medical International biobanking interface service - Health sciences in the digital age (Creutzburg) Shazam for food: LearningShazam for diet (Invited) (Zhu) with visual data On the improvement of 2D quality metrics for the images assessment of 360-deg (Larabi) Conception and implementation of professional laboratory exercises in the field of open source intelligence (OSINT) (Creutzburg) Does computer vision need color science? (Allebach) Visual processing of dietary data: A nutrition science perspective (Eicher-Miller) The cone model: Recognizing gaze uncertainty in virtual environments (Jogeshwar) Mobile head tracking for eCommerce and beyond (Cicek) Image analytics for food safety (Zhao) Colors challanges in navigating Colors challanges vehicles (Wueller) autonomous Object tracking continuity through track and trace method (Williams) Analyzing the decoding rate of circular coding in a noisy transmission channel (Sun) Progress on the AUTOSAR adaptive platform for intelligent vehicles (Derrick) Deep Deep LearningLearning MOBMU-304 IMAWM-300 IQSP-287 MOBMU-278 COLOR-283 IMAWM-301 IQSP-288 MOBMU-303 IMAWM-302 COLOR-282 5:10 pm AVM-299 4:45 pm MWSF-292 4:50 pm AVM-298

3D3D Sandpebble A/B Cypress A Regency B Grand Peninsula B/C Cypress A Regency B Grand Peninsula B/C Regency C Grand Peninsula D Cypress B Harbour A/B Regency A

ComputationalComputational t home

AR/VR AR/VR Metal artifact reduction in dual-energy CT with synthesized monochromatic basis for baggage screening (Devadithya) A spectrum-adaptive for decomposition method effective atomic number estimation using dual energy CT (Manerikar) Replacing test charts with pictures (Triantaphillidou) Social holographics: Addressing the forgotten human Tonder) factor (Van A deep neural network-based indoor positioning algorithm by cascade of image and WiFi (Cui) Complexity optimization for the upcoming versatile video coding standard (Larabi) A high-linearity time-of-flight image sensor using a time- domain feedback technique (Kim) implementation and Conception of a course for professional training and education in the field of IoT and smar security (Creutzburg) Generative text steganography based on adaptive arithmetic (Wu) network LSTM and coding Federated semantic mapping and localization for autonomous driving (Yogamani) Synchronizing embedding Synchronizing changes in side-informed (Boroumand) steganography End-to-end deep path planning and automatic emergency braking camera cocoon-based solution (Abdou) Automotive Automotive COIMG-294 COIMG-293 COLOR-281 ERVR-295 IMAWM-271 IQSP-286 ISS-274 MOBMU-277 34 4:30 pm AVM-297 4:20 pm MWSF-291 3:55 pm MWSF-290 4:10 pm AVM-296 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER

Schedule electronicimaging.org #EI2020 electronicimaging.org PAPER SCHEDULEBYDAY/TIME COIMG-305 5:30 pm IPAS-313 IPAS-312 IPAS-311 IPAS-310 IMAWM-309 COIMG-307 COIMG-306 IQSP-320 IQSP-319 IQSP-318 IQSP-317 IQSP-316 IQSP-315 IQSP-314 Automotive Automotive (Li) unsupervised 3Dfiberdetection Connected-tube MPPmodelfor (Imbriaco) visual placerecognition descriptor aggregationfor Multiscale convolutional review(Ullah) networks: Ashort Generative adversarial (Dornaika) adaptive lossregression embeddingwith supervised Elastic graph-basedsemi- textures (Omer) data reductiononimage of CNNandoptimallinear Comparing trainingvariability mobile devices(Pan) Real-time whiteboard coding on sampled sources(Galvis) wavefields fromnonunifor Reconstruction of2Dseismic prior (Schiffers) based media withalearning Imaging throughscattering in mobileimagesensor(Cha) motion correctionperformance Quantification methodforvideo parameter (Egiazarian) adaptive selectionofits of 2DDCT-basedfilterand Prediction ofperformance boosting inimages(Jiang) Human preferenceonchroma endoscopes (Wang) characteristicsof performance Evaluation ofoptical feedback (Yang) based onuserpreference ISPtuning framework Effective optimization (Kim) automatic imagequality inference algorithmfor DNN-based ISPparameter quality defects(Zhang) analyzing thepresenceofprint A comprehensivesystemfor AR/VR AR/VR

mly Computational

Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia 3D

HVEI-401 7:00 pm IRIACV-325 IQSP-323 IQSP-322 IQSP-321 MOBMU-336 MOBMU-335 MOBMU-334 MOBMU-333 MOBMU-332 MOBMU-331 ISS-330 ISS-329 ISS-328 ISS-327 IRIACV-326 Learning Deep (Olshausen) Perception asinference algorithms (Peng) GPU systemsforvisualSLam An evaluationofembedded II(Hu) and resolution-Part Relation betweenimagequality I(Hu) and resolution-Part Relation betweenimagequality (Zhang) for imagequalityassessment Region ofinterestextraction (Creutzburg) box Secure remoteservice app (Inupakutika) cloud architecturesformHealth analysisofmobile Performance systems forhealthresearch(Vo) MessageSpace. Messaging review 2020(Creutzburg) challenges -Abibliographic Cybersecurity andforensic projected images(Choi) projector: Stabilizationof An implementationofdrone- environment (Creutzburg) monitoring oftheprocess prototype forautomated systems -Creationofa for cyberattacksonWindows AI-based anomalydetection captures (Eberhart) integrationofTDCI Non-uniform Gouvello) for imagesensordesign(de A comprehensivesimulator From photonstodigitalvalues: (Lee) simulation ofcameramodule methodology usingco- CIS bandnoiseprediction (Dietz) unchipped manuallenses foruseof Camera support Systems (Peng) methods onNVIDIAJetson An evaluationofvisualSLam Medical Remote Sensing Remote

Restaurant Offsite Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Sequoia Security 35

Schedule SecuritySecurity Regency A Grand Peninsula A Harbour A/B Regency C Grand Peninsula B/C Regency C Regency A Harbour A/B Regency A Grand Peninsula A Harbour A/B Grand Peninsula B/C Regency C Grand Peninsula B/C

#EI2020 electronicimaging.org Remote Remote SensingSensing y) Medical Medical -first) (Fr Face perception as a multisensory process (Likova) Prediction of Lee filter performance SAR for Sentinel-1 images (Egiazarian) A gaze-contingent system for foveated multiresolution visualization of vector and volumetric data (Joshi) Developing an inkjet printer IV: IV: Developing an inkjet printer for Printer mechanism control best print quality (Kenzhebalin) type virtual image Transparent array display using small mirror (Temochi) Scene-and-process-dependent spatial image quality metrics (JIST Multisensory contributions to associations face-name learning (Murray) Depth map quality evaluation for photographic applications (Thomas) Leaving the windows open: Indeterminate situations through composite 360-degree photography (Williams) Programming paradigm for streaming reconfigurable architectures (Nousias) Background subtraction in Background diffraction using x-ray images (Agam) deep CNN Using acoustic information to diagnose the health of a printer (Chen) Estimation of the background illumination in optical reflectance microscopy (Hirakawa) Designing a VR arena: Integrating virtual environments and physical spaces for social, sensorial data-driven virtual experiences (West) Deep Deep LearningLearning HVEI-366 IQSP-371 VDA-374 COLOR-349 ERVR-340 IQSP-348 HVEI-365 IQSP-370 ERVR-361 11:10 am COIMG-356 9:50 am COIMG-344 10:10 am COLOR-353 10:50 am COIMG-355 ERVR-360

3D3D Regency A Grand Peninsula A Harbour A/B Grand Peninsula B/C Regency C Regency A Harbour A/B Grand Peninsula B/C Regency C Regency A Harbour A/B Grand Peninsula B/C Regency C

3

ComputationalComputational

AR/VR AR/VR erification of long-range MTF Multisensory interactions and plasticity – Shooting hidden assumptions, revealing postdictive aspects (Shimojo) V testing through intermediary optics (Schwartz) Developing an inkjet printer III: Multibit CMY halftones to hardware-ready bits (Hu) Enhancing lifeguard training through virtual reality (Miller) Measuring camera Shannon information capacity with a Siemens star image (Koren) Developing an inkjet printer I: Developing printer I: inkjet an RGB image to CMY ink amounts -- Image preprocessing (Wang) and Using virtual reality for spinal cord injury rehabilitation (Woods) scene- Noise power spectrum dependency in simulated image capture systems (Fry) Developing an inkjet printer II: CMY ink amounts to multibit CMY halftones (Choi) Heads-up LiDAR imaging with sensor fusion (Cai) Deep learning method for height estimation of sorghum in the field using LiDAR (Waliman) ProPaCoL-Net: A novel recursive stereo SR net with progressive parallax coherency learning (Kim) 2D label free microscopy 2D label free machine using analysis imaging learning (Hu) Automotive Automotive HVEI-354 IQSP-346 COLOR-352 ERVR-339 IQSP-347 COLOR-350 ERVR-337 IQSP-345 COLOR-351 ERVR-338 36 9:30 am COIMG-343 9:10 am COIMG-342 Thursday, January 30, 2020 Thursday, 8:50 am COIMG-341 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER

Schedule electronicimaging.org #EI2020 electronicimaging.org ERVR-364 COIMG-359 12:10 pm ERVR-363 COIMG-358 11:50 am COIMG-357 11:30 am PAPER SCHEDULEBYDAY/TIME VDA-375 IQSP-372 HVEI-367 ERVR-362 HVEI-369 VDA-376 IQSP-373 HVEI-368 Automotive Automotive (Schulze) on Androidmobiledevices augmented realityapplications CalAR: AC++enginefor mobile imaging(Siddiqui) Statistical inversionmethodsin (Schulze) analytics inaugmentedreality Interactive multi-user3Dvisual considerations (Goma) Algorithm/hardware co-design Computational imaging: analysis (Polania) independent component images usingconstrained estimation frommobileselfie Skin chromophoreandmelanin classification models(Park) analysisofimage performance A visualizationsystemfor pathology (Lemaillet) viewers usedindigital Evaluating whole-slideimaging illusions(Stiles) multisensory vision loss:Useofnovel perception inducedbypartial Changes inauditory-visual (Niu) physiological signals(JIST subjective feelingsand realitybasedon in virtual User experienceevaluation (Saron) and electrophysiologicalstudy development: Abehavioral integration inautismspectrum variability inmultisensory Inter- andintra-individual many-core architectures (Lessley) hash tableformulti-coreand HashFight: Aplatform-portable (Yang) predictor and printuniformity Ink qualityrulerexperiments individuals (Jiang) processing inearlydeaf temporal Multisensory AR/VR AR/VR

-first) Computational

B/C Peninsula Grand B/C Peninsula Grand Regency C A/B Harbour Peninsula A Grand Regency A B/C Peninsula Grand Peninsula A Grand Regency A Peninsula A Grand Regency A Regency C A/B Harbour 3D

COIMG-378 2:20 pm COIMG-377 2:00 pm COIMG-390 3:30 pm COIMG-379 2:40 pm VDA-387 HVEI-393 HVEI-385 ERVR-382 HVEI-384 ERVR-381 VDA-386 HVEI-383 ERVR-380 Learning Deep architectures (Nuanes) renderingwithefficient depth reconstructionand Sky segmentationforenhanced single image(Savakis) for depthestimationfroma Efficient multilevelarchitecture imaging (Chen) light fieldforPDAFofmobile phase imagesandtwo-view On thedistinctionbetween sensor data(Zhang) deblurring aidedbyinertial A datasetfordeepimage repositories (Haber) software librariesviatheircode analyzing thesuitabilityof A visualizationtoolfor (Spehar) for fractal-scalingcharacteristics preferences tactile andauditory aesthetics:Visual, Multisensory signals (MacNeilage) motion dependsonmotor environment duringhead Perception ofastablevisual video 6DoFVR(Bonatto) view synthesizerforimmersive RaViS: Real-timeaccelerated (Green) processing invisualcortex andaudiovisual Auditory (Shreve) holographic objectlabeler WARHOL: Wearable data visualization(Joshi) Augmenting cognitionthrough (McCourt) stimulus onsetasynchrony ofaudio-visual motion: Effect captureofvisual Auditory situation (Suto) ability intrafficaccident improve driver’s riskprediction immersive educational system to Development andevaluationof Medical Remote Sensing Remote

Regency C Peninsula A Grand B/C Peninsula Grand Peninsula A Grand Regency A B/C Peninsula Grand Peninsula A Grand Regency A B/C Peninsula Grand Regency C Peninsula A Grand Regency A B/C Peninsula Grand Security 37

Schedule SecuritySecurity

#EI2020 electronicimaging.org Remote Remote SensingSensing Medical Medical Deep Deep LearningLearning

3D3D Grand Peninsula A Regency C Grand Peninsula B/C Grand Peninsula A Regency C Grand Peninsula B/C

ComputationalComputational

AR/VR AR/VR Visualization of search results of Visualization of search results large document sets (Anderson) An accelerated Minkowski summation rule for multisensory cue combination (Tyler) Human-computer interface based on tongue and lips movements and its application for speech therapy system (Bilkova) Introducing Vis+Tact(TM) iPhone Introducing Vis+Tact(TM) app (Mahoney) Senscape: Modeling and Senscape: Modeling and presentation of uncertainty in fused sensor data live image streams (Dietz) Indoor layout estimation by 2D Indoor layout fusion (Li) LiDAR and camera Automotive Automotive VDA-388 HVEI-395 VDA-389 HVEI-394 38 4:10 pm COIMG-392 3:50 pm COIMG-391 PAPER SCHEDULE BY DAY/TIME SCHEDULE PAPER

Schedule Your research is making important contributions. Consider the Journal of Electronic Imaging as your journal of choice to publish this important work.

Karen Egiazarian Tampere University of Technology, Finland Editor-in-Chief Benefits of publishing The Journal of Electronic Imaging publishes papers in all in the Journal of Electronic Imaging: technology areas that make up the field of electronic imaging and are normally considered in the design, engineering, • Wide availability to readers via the and applications of electronic imaging systems. SPIE Digital Library.

• Rapid publication; The Journal of Electronic Imaging covers research and each article is published when it is ready. applications in all areas of electronic imaging science and technology including image acquisition, data storage, display • Open access for articles immediately and communication of data, visualization, processing, hard with voluntary payment copy output, and multimedia systems. of $960 per article. • Coverage by Web of Science, Journal Citation Reports, and other relevant databases.

www.spie.org/jei Where Industry and Academia Meet to discuss Imaging Across Applications

Conference Chairs: William Puech, 3D Measurement and Data Processing 2020 Laboratory d’Informatique de Robotique et de Microelectronique de Montpellier (France); Conference overview Robert Sitnik, Warsaw University of Technology Scientific and technological advances during the last decade in the fields of image (Poland) acquisition, processing, telecommunications, and computer graphics have contributed to the emergence of new multimedia, especially 3D digital data. Nowadays, the acquisi- Program Committee: Silvia Biasotti, Consiglio Florent Dupont, tion, processing, transmission, and visualization of 3D objects are a part of possible and Nazionale delle Ricerche (Italy); University Claude Bernard Lyon 1 (France); realistic functionalities over the internet. Confirmed 3D processing techniques exist and a Frédéric Payan, University of Nice Sophia large scientific community works hard on open problems and new challenges, including Antipolis - I3S Laboratory, CNRS (France); 3D data processing, transmission, fast access to huge 3D databases, or content security Stefano Tubaro, Politecnico di Milano (Italy) management.

The emergence of 3D media is directly related to the emergence of 3D acquisition technolo- gies. Indeed, recent advances in 3D scanner acquisition and 3D graphics rendering tech- nologies boost the creation of 3D model archives for several application domains. These include archaeology, cultural heritage, computer assisted design (CAD), medicine, face recognition, video games, and bioinformatics. New devices such as time-of-flight cameras open challenging new perspectives on 3D scene analysis and reconstruction.

Three-dimensional objects are more complex to handle than other multimedia data, such as audio signals, images, or videos. Indeed, only a unique and simple 2D grid representation is associated to a 2D image. All the 2D acquisition devices generate this same representa- tion (digital cameras, scanners, 2D medical systems). Unfortunately (for the users), but fortu- nately (for scientists), there exist different 3D representations for a 3D object. For example, an object can be represented on a 3D grid () or in 3D Euclidian space. In the latter, the object can be expressed by a single equation (like algebraic implicit surfaces), by a set of facets representing its boundary surface, or by a set of mathematical surfaces. One can easily imagine the numerous open problems related to these different representations and their processing, a new challenge for the image processing community. 3DMP

40 #EI2020 electronicimaging.org

3D MEASUREMENT AND DATA PROCESSING 2020

Monday, January 27, 2020 11:50 3DMP Q&A Session Discussion

3D/4D NN-based Data Processing 12:30 – 2:00 pm Lunch

Session Chair: Tyler Bell, University of Iowa (United States) PLENARY: Frontiers in Computational Imaging 8:45 – 10:10 am Grand Peninsula A Session Chairs: Radka Tezaur, Intel Corporation (United States) and Jonathan Phillips, Google Inc. (United States) 8:45 Conference Welcome 2:00 – 3:10 pm Grand Peninsula Ballroom D 8:50 3DMP-002 Imaging the Unseen: Taking the First Picture of a Black Hole, Deadlift recognition and application based on multiple modalities using Katie Bouman, assistant professor, Computing and Mathematical recurrent neural network, Shih-Wei Sun1, Ting-Chen Mou2, and Pao-Chi Sciences Department, California Institute of Technology (United States) Chang2; 1Taipei National University of the Arts and 2National Central University (Taiwan) For abstract and speaker biography, see page 7

9:10 3DMP-003 Learning a CNN on multiple sclerosis lesion segmentation with self- 3:10 – 3:30 pm Coffee Break supervision, Alexandre Fenneteau1, Pascal Bourdon1,2,3, David Helbert1,2,3, and Christophe Habas4; 1University of Poitiers, 2I3M, Common Laboratory 5:00 – 6:00 pm All-Conference Welcome Reception CNRS-Siemens, University and Hospital of Poitiers, 3XLIM Laboratory, and 4Quinze-Vingts Hospital (France)

9:30 3DMP-004 Action recognition using pose estimation with an artificial 3D coordi- nates and CNN, Jisu Kim and Deokwoo Lee, Keimyung University (Republic of Korea)

9:50 3DMP Q&A Session Discussion

10:10 – 10:50 am Coffee Break

3D/4D Measurement and Processing

Session Chair: Tyler Bell, University of Iowa (United States) 3DMP 10:50 am – 12:10 pm Grand Peninsula A 10:50 3DMP-034 Variable precision depth encoding for 3D range geometry compression, Matthew Finley and Tyler Bell, University of Iowa (United States)

11:10 3DMP-035 3D shape estimation for smooth surfaces using grid-like structured light patterns, Yin Wang and Jan Allebach, Purdue University (United States)

11:30 3DMP-036 Quality assessment for 3D reconstruction of building interiors, Umamaheswaran Raman Kumar, Inge Coudron, Steven Puttemans, and Patrick Vandewalle, Katholieke Universiteit Leuven (Belgium)

electronicimaging.org #EI2020 41 Where Industry and Academia Meet to discuss Imaging Across Applications

Conference Chairs: Peter van Beek, Intel Autonomous Vehicles and Machines 2020 Corporation (United States); Patrick Denny, Valeo Vision Systems (Ireland); and Robin Jenkin, Conference overview NVIDIA Corporation (United States) Advancements in sensing, computing, imaging processing, and computer vision technolo- gies are enabling unprecedented growth and interest in autonomous vehicles and intelligent Program Committee: Umit Batur, Rivian machines, from self-driving cars to unmanned drones and personal service robots. These Automotive (United States); Zhigang Fan, Apple Ching Hung, new capabilities have the potential to fundamentally change the way people live, work, Inc. (United States); NVIDIA Corporation (United States); Dave Jasinski, commute, and connect with each other and will undoubtedly provoke entirely new applica- ON Semiconductor (United States); Darnell tions and commercial opportunities for generations to come. Moore, Texas Instruments (United States); Bo Mu, Quanergy, Inc. (United States); Binu Nair, Successfully launched in 2017, Autonomous Vehicles and Machines (AVM) considers a United Technologies Research Center (United broad range of topics as it relates to equipping vehicles and machines with the capac- States); Dietrich Paulus, Universität Koblenz- ity to perceive dynamic environments, inform human participants, demonstrate situational Landau (Germany); Pavan Shastry, Continental awareness, and make unsupervised decisions on self-navigating. The conference seeks (Germany); Luc Vincent, Lyft (United States); high-quality papers featuring novel research in areas intersecting sensing, imaging, vision, Weibao Wang, Xmotors.ai (United States); Buyue and perception with applications including, but not limited to, autonomous cars, ADAS (ad- Zhang, Apple Inc. (United States); and Yi Zhang, Argo AI, LLC (United States) vanced driver assistance system), drones, robots, and industrial automation. AVM welcomes both academic researchers and industrial experts to join the discussion. In addition to the main technical program, AVM will include interactive and open forum sessions between AVM speakers, committee members, and conference participants. Conference Sponsor

Awards Best Paper Award and Best Student Paper Award AVM

42 #EI2020 electronicimaging.org

AUTONOMOUS VEHICLES AND MACHINES 2020

Monday, January 27, 2020 11:30 AVM-040 Prediction and fast estimation of contrast detection probability, Robin Jenkin, NVIDIA Corporation (United States) KEYNOTE: Automotive Camera Image Quality Joint Session 11:50 AVM-041 Session Chair: Luke Cui, Amazon (United States) Object detection using an ideal observer model, Paul Kane and Orit 8:45 – 9:30 am Skorka, ON Semiconductor (United States) Regency B 12:10 AVM-042 This session is jointly sponsored by: Autonomous Vehicles and Comparison of detectability index and contrast detection probability Machines 2020, and Image Quality and System Performance XVII. (JIST-first),Robin Jenkin, NVIDIA Corporation (United States)

8:45 12:30 – 2:00 pm Lunch Conference Welcome

8:50 AVM-001 LED flicker measurement: Challenges, considerations, and updates PLENARY: Frontiers in Computational Imaging from IEEE P2020 working group, Brian Deegan, senior expert, Valeo Session Chairs: Radka Tezaur, Intel Corporation (United States) and Vision Systems (Ireland) Jonathan Phillips, Google Inc. (United States) Biographies and/or abstracts for all keynotes are found on pages 2:00 – 3:10 pm 9–14 Grand Peninsula Ballroom D Imaging the Unseen: Taking the First Picture of a Black Hole, Katie Bouman, assistant professor, Computing and Mathematical Automotive Camera Image Quality Joint Session Sciences Department, California Institute of Technology (United States) Session Chair: Luke Cui, Amazon (United States) For abstract and speaker biography, see page 7 9:30 – 10:10 am Regency B This session is jointly sponsored by: Autonomous Vehicles and Machines 3:10 – 3:30 pm Coffee Break 2020, and Image Quality and System Performance XVII. 9:30 IQSP-018 A new dimension in geometric camera calibration, Dietmar Wueller, KEYNOTE: Visibility Image Engineering GmbH & Co. KG (Germany) Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 9:50 AVM-019 3:30 – 4:10 pm Automotive image quality concepts for the next SAE levels: Color Regency B separation probability and contrast detection probability, Marc Geese, Continental AG (Germany) AVM-057 The automated drive west: Results, Sara Sargent, engineering project 10:10 – 10:50 am Coffee Break manager, VSI Labs (United States) Biographies and/or abstracts for all keynotes are found on pages Predicting Camera Detection Performance Joint Session 9–14 Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 10:50 am – 12:30 pm

Regency B Visibility AVM This session is jointly sponsored by: Autonomous Vehicles and Machines Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 2020, Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII. 4:10 – 5:10 pm 10:50 AVM-038 Regency B Describing and sampling the LED flicker signal,Robert Sumner, Imatest, 4:10 AVM-079 LLC (United States) VisibilityNet: Camera visibility detection and image restoration for au- tonomous driving, Hazem Rashed, Senthil Yogamani, and Michal Uricar, 11:10 IQSP-039 Valeo Group (Egypt) Demonstration of a virtual reality driving simulation platform, Mingming Wang and Susan Farnand, Rochester Institute of Technology (United States) 4:30 AVM-080 Sun-glare detection using late fusion of CNN and image processing operators, Lucie Yahiaoui and Senthil Yogamani, Valeo Vision Systems (Ireland)

electronicimaging.org #EI2020 43 Autonomous Vehicles and Machines 2020

4:50 AVM-081 Single image haze removal using multiple scattering model for road KEYNOTE: Quality Metrics scenes, Minsub Kim, Soonyoung Hong, and Moon Gi Kang, Yonsei University (Republic of Korea) Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 10:50 – 11:30 am 5:00 – 6:00 pm All-Conference Welcome Reception Regency B AVM-124 Tuesday, January 28, 2020 Automated optimization of ISP hyperparameters to improve com- puter vision accuracy, Doug Taylor, Avinash Sharma, Karl St. Arnaud, 7:30 – 8:45 am Women in Electronic Imaging Breakfast; and Dave Tokic, Algolux (Canada) pre-registration required Biographies and/or abstracts for all keynotes are found on pages 9–14

KEYNOTE: Human Interaction

Session Chair: Robin Jenkin, NVIDIA Corporation (United States) Quality Metrics 8:50 – 9:30 am Session Chair: Robin Jenkin, NVIDIA Corporation (United States) Regency B 11:30 am – 12:30 pm AVM-088 Regency B Regaining sight of humanity on the roadway towards automation, 11:30 AVM-148 Mónica López-González, La Petite Noiseuse Productions (United States) Using the dead leaves pattern for more than spatial frequency response Biographies and/or abstracts for all keynotes are found on pages measurements, Uwe Artmann, Image Engineering GmbH & Co. KG 9–14 (Germany)

11:50 AVM-149 Simulating tests to test simulation, Patrick Müller, Matthias Lehmann, and Human Interaction Alexander Braun, Düsseldorf University of Applied Sciences (Germany)

Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 12:10 AVM-150 Validation methods for geometric camera calibration, Paul Romanczyk, 9:30 – 10:30 am Imatest, LLC (United States) Regency B 9:30 AVM-109 12:30 – 2:00 pm Lunch VRUNet: Multitask learning model for intent prediction of vulnerable road users, Adithya Pravarun Reddy Ranga1, Filippo Giruzzi2, Jagdish Bhanushali1, Emilie Wirbel3, Patrick Pérez4, Tuan-Hung VU4, and Xavier PLENARY: Automotive Imaging Perrotton3; 1Valeo NA Inc. (United States), 2MINES Paristech (France), Session Chairs: Radka Tezaur, Intel Corporation (United States) and 3Valeo France (France), and 4Valeo.ai (France) Jonathan Phillips, Google Inc. (United States) 9:50 AVM-108 2:00 – 3:10 pm Multiple pedestrian tracking using Siamese random forests and shallow Grand Peninsula Ballroom D convolutional neural networks, Jimi Lee, Jaeyeal Nam, and ByoungChul Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior Ko, Keimyung University (Republic of Korea) vice president, hardware development, NVIDIA Corporation (United 10:10 AVM-110 States) End-to-end multitask learning for driver gaze and head pose estima- 1 1 1 AVM tion, Marwa El Shawarby , Mahmoud Ewaisha , Hazem Abbas , and For abstract and speaker biography, see page 7 Ibrahim Sobh2; 1Ain Shams University and 2Valeo Group (Egypt) 3:10 – 3:30 pm Coffee Break 10:00 am – 7:30 pm Industry Exhibition - Tuesday

10:10 – 10:50 am Coffee Break

44 #EI2020 electronicimaging.org Autonomous Vehicles and Machines 2020

9:10 AVM-201 PANEL: Sensors Technologies for Autonomous Vehicles Joint Session A low-cost approach to data collection and processing for autonomous vehicles with a realistic virtual environment, Victor Fernandes1, Verônica Panel Moderator: David Cardinal, Cardinal Photo & Extremetech.com Silva2, and Thais Rêgo1; 1Federal University of Paraiba and 2Ufersa (Brazil) (United States) Panelists: Sanjai Kohli, Visible Sensors, Inc. (United States); 9:30 AVM-202 Nikhil Naikal, Velodyne Lidar (United States); Greg Stanley, NXP Metrology impact of advanced driver assistance systems, Paola Semiconductors (United States); Alberto Stochino, Perceptive Machines Iacomussi, INRIM (Italy) (United States); Nicolas Touchard, DXOMARK (France); and Mike Walters, FLIR Systems (United States) 9:50 AVM-203 A study on training data selection for object detection in nighttime traf- 3:30 – 5:30 pm fic scenes,Astrid Unger1,2, Margrit Gelautz1, Florian Seitner2, and Michael Regency A Hödlmoser2; 1TU Wien and 2Emotion3D (Austria) This session is jointly sponsored by: Autonomous Vehicles and Machines 2020, and Imaging Sensors and Systems 2020. 10:00 am – 3:30 pm Industry Exhibition - Wednesday 10:10 – 10:50 am Coffee Break Imaging sensors are at the heart of any self-driving car project. However, selecting the right technologies isn’t simple. Competitive products span a gamut of capabilities including traditional visible-light Psychophysics and LED Flicker Artifacts Joint Session cameras, thermal cameras, lidar, and radar. This session includes ex- Session Chair: Jeffrey Mulligan, NASA Ames Research Center (United perts in all of these areas, and in emerging technologies, who will help States) attendees understand the strengths, weaknesses, and future directions of each. Presentations are followed by a panel discussion. Introduction, 10:50 – 11:30 am David Cardinal, consultant and technology journalist (United States); Regency B LiDAR for Self-driving Cars, Nikhil Naikal, VP of software engineer- This session is jointly sponsored by: Autonomous Vehicles and Machines ing, Velodyne Lidar (United States); Challenges in Designing Cameras 2020, and Human Vision and Electronic Imaging 2020. for Self-driving Cars, Nicolas Touchard, VP of marketing, DXOMARK 10:50 HVEI-233 (France); Using Thermal Imaging to Help Cars See Better, Mike Predicting visible flicker in temporally changing images,Gyorgy Denes Walters, VP of product management for thermal cameras, FLIR Systems, and Rafal Mantiuk, University of Cambridge (United Kingdom) Inc. (United States); Radar’s Role, Greg Stanley, field applications 11:10 HVEI-234 engineer, NXP Semiconductors (the Netherlands); Tales from the Psychophysics study on LED flicker artefacts for automotive digital mir- Automotive Sensor Trenches, Sanjai Kohli, CEO, Visible Sensors, Inc. ror replacement systems, Nicolai Behmann and Holger Blume, Leibniz (United States); Auto Sensors for the Future, Alberto Stochino, founder University Hannover (Germany) and CEO, Perceptive (United States)

Multi-Sensor Biographies and/or abstracts are found on pages 15–21 Session Chair: Robin Jenkin, NVIDIA Corporation (United States)

5:30 – 7:30 pm Symposium Demonstration Session 11:30 am – 12:30 pm Regency B Wednesday, January 29, 2020 11:30 AVM-255 Multi-sensor fusion in dynamic environments using evidential grid map- ping, Dilshan Godaliyadda, Vijay Pothukuchi, and JuneChul Roh, Texas Data Collection and Generation Instruments (United States)

Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 11:50 AVM-257 LiDAR-camera fusion for 3D object detection, Darshan Ramesh 8:50 – 10:10 am Bhanushali, Robert Relyea, Karan Manghi, Abhishek Vashist, Clark Regency B Hochgraf, Amlan Ganguly, Michael Kuhl, Andres Kwasinski, and Ray AVM 8:50 AVM-200 Ptucha, Rochester Institute of Technology (United States) A tool for semi-automatic ground truth annotation of traffic videos, 12:10 AVM-258 Florian Groh, Margrit Gelautz, and Dominik Schörkhuber, TU Wien Active stereo vision for precise autonomous vehicle control, Song Zhang (Austria) and Michael Feller, Purdue University (United States)

12:30 – 2:00 pm Lunch

electronicimaging.org #EI2020 45 Autonomous Vehicles and Machines 2020

PLENARY: VR/AR Future Technology

Session Chairs: Radka Tezaur, Intel Corporation (United States) and Jonathan Phillips, Google Inc. (United States) 2:00 – 3:10 pm Grand Peninsula Ballroom D Quality Screen Time: Leveraging Computational Displays for Spatial Computing, Douglas Lanman, director, Display Systems Research, Facebook Reality Labs (United States)

For abstract and speaker biography, see page 7

3:10 – 3:30 pm Coffee Break

KEYNOTE: Image Processing

Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 3:30 – 4:10 pm Regency B AVM-262 Deep image processing, Vladlen Koltun, chief scientist for Intelligent Systems, Intel Labs (United States) Biographies and/or abstracts for all keynotes are found on pages 9–14

Image Processing

Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 4:10 – 5:30 pm Regency B 4:10 AVM-296 End-to-end deep path planning and automatic emergency braking camera cocoon-based solution, Mohammed Abdou and Eslam Bakr, Valeo Group (Egypt)

4:30 AVM-298 Progress on the AUTOSAR adaptive platform for intelligent vehicles, Keith Derrick, AUTOSAR (Germany)

4:50 AVM-299 Object tracking continuity through track and trace method, Haney Williams and Steven Simske, Colorado State University (United States) AVM 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Professionals Research

46 #EI2020 electronicimaging.org COLOR Where Industry and Academia Meet to discuss Imaging Across Applications

Conference Chairs: Reiner Eschbach, Color Imaging XXV: Displaying, Processing, Norwegian University of Science and Technology (Norway) and Monroe Community Hardcopy, and Applications College (United States); Gabriel G. Marcu, Apple Inc. (United States); and Alessandro Conference overview Rizzi, Università degli Studi di Milano (Italy) Color imaging has historically been treated as a constant phenomenon well described by three independent parameters. Recent advances in computational resources and in the Program Committee: Jan P. Allebach, Purdue understanding of the human aspects are leading to new approaches that extend the purely University (United States); Vien Cheung, metrological view towards a perceptual view of color in documents and displays. Part of University of Leeds (United Kingdom); Scott J. this perceptual view is the incorporation of spatial aspects, adaptive color processing based Daly, Dolby Laboratories, Inc. (United States); on image content, and the automation of color tasks, to name a few. This dynamic nature Philip J. Green, Norwegian University of Science and Technology (Norway); Yasuyo G. Ichihara, applies to all output modalities, e.g., hardcopy devices, but to an even larger extent to soft- Kogakuin University (Japan); Choon-Woo Kim, copy displays. lnha University (Republic of Korea); Michael A. Kriss, MAK Consultants (United States); Fritz Spatially adaptive gamut and tone mapping, dynamic contrast, and color management Lebowsky, Consultant (France); John J. McCann, continue to support the unprecedented development of the display hardware spreading from McCann Imaging (United States); Nathan mobile displays to large size screens and emerging technologies. This conference provides Moroney, HP Inc. (United States); Carinna E. an opportunity for presenting, as well as getting acquainted, with the most recent develop- Parraman, University of the West of England ments in color imaging researches, technologies, and applications. Focus of the conference (United Kingdom); Marius Pedersen, Norwegian is on color basic research and testing, color image input, dynamic color image output and University of Science and Technology (Norway); Shoji Tominaga, Chiba University (Japan); rendering, color image automation, emphasizing color in context and color in images, and Sophie Triantaphillidou, University of Westminster reproduction of images across local and remote devices. (United Kingdom); and Stephen Westland, University of Leeds (United Kingdom) In addition, the conference covers software, media, and systems related to color. Special attention is given to applications and requirements created by and for multidisciplinary fields involving color and/or vision. COLOR

electronicimaging.org #EI2020 47

COLOR IMAGING XXV: DISPLAYING, PROCESSING, HARDCOPY, AND APPLICATIONS

Tuesday, January 28, 2020 Color and Appearance Reproduction Joint Session

7:30 – 8:45 am Women in Electronic Imaging Breakfast; Session Chair: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France) pre-registration required 10:40 am – 12:30 pm Regency C Skin and Deep Learning Joint Session This session is jointly sponsored by: Color Imaging XXV: Displaying, Session Chair: Gabriel Marcu, Apple Inc. (United States) Processing, Hardcopy, and Applications, and Material Appearance 2020. 8:45 – 9:30 am Regency C 10:40 MAAP-396 This session is jointly sponsored by: Color Imaging XXV: Displaying, From color and spectral reproduction to appearance, BRDF, and beyond, Jon Yngve Hardeberg, Norwegian University of Science and Processing, Hardcopy, and Applications, and Material Appearance Technology (NTNU) (Norway) 2020. 8:45 11:10 MAAP-120 Conference Welcome HP 3D color gamut – A reference system for HP’s Jet Fusion 580 color 3D printers, Ingeborg Tastl1 and Alexandra Ju2; 1HP Labs, HP Inc. and 2HP 8:50 MAAP-082 Inc. (United States) Beyond color correction: Skin color estimation in the wild through deep learning, Robin Kips, Quoc Tran, Emmanuel Malherbe, and Matthieu 11:30 COLOR-121 Perrot, L’Oréal Research and Innovation (France) Spectral reproduction: Drivers, use cases, and workflow, Tanzima Habib, Phil Green, and Peter Nussbaum, Norwegian University of Science 9:10 COLOR-083 and Technology (Norway) SpectraNet: A deep model for skin oxygenation measurement from multi-spectral data, Ahmed Mohammed, Mohib Ullah, and Jacob Bauer, 11:50 COLOR-122 Norwegian University of Science and Technology (Norway) Parameter estimation of PuRet algorithm for managing appearance of material objects on display devices (JIST-first),Midori Tanaka, Ryusuke Arai, and Takahiko Horiuchi, Chiba University (Japan)

Spectral Dataset Joint Session 12:10 COLOR-123 Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States) Colorimetrical performance estimation of a reference hyperspectral microscope for color tissue slides assessment, Paul Lemaillet and Wei- 9:30 – 10:10 am Chung Cheng, US Food and Drug Administration (United States) Regency C This session is jointly sponsored by: Color Imaging XXV: Displaying, 12:30 – 2:00 pm Lunch Processing, Hardcopy, and Applications, and Material Appearance 2020. PLENARY: Automotive Imaging 9:30 MAAP-106 Visible to near infrared reflectance hyperspectral images dataset for Session Chairs: Radka Tezaur, Intel Corporation (United States), and image sensors design, Axel Clouet1, Jérôme Vaillant1, and Célia Viola2; Jonathan Phillips, Google Inc. (United States) 1CEA-LETI and 2CEA-LITEN (France) 2:00 – 3:10 pm 9:50 MAAP-107 Grand Peninsula Ballroom D A multispectral dataset of oil and watercolor paints, Vahid Babaei1, Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior Azadeh Asadi Shahmirzadi2, and Hans-Peter Seidel1; 1Max-Planck-Institut vice president, hardware development, NVIDIA Corporation (United für Informatik and 2Consultant (Germany) States)

10:00 am – 7:30 pm Industry Exhibition - Tuesday For abstract and speaker biography, see page 7

10:10 – 10:40 am Coffee Break 3:10 – 3:30 pm Coffee Break COLOR

48 #EI2020 electronicimaging.org Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications

9:50 COLOR-198 Color Understanding Rendering data in the blue channel, Robert Ulichney and Matthew Session Chair: Alessandro Rizzi, Università degli Studi di Milano (Italy) Gaubatz, HP Labs, HP Inc. (United States)

3:30 – 5:10 pm 10:00 am – 3:30 pm Industry Exhibition - Wednesday Regency C 3:30 COLOR-161 10:10 – 10:50 am Coffee Break Automated multicolored fabric image segmentation and associated psychophysical evaluation, Nian Xiong, North Carolina State University Color and Human Vision (United States) Session Chair: Gabriel Marcu, Apple Inc. (United States) 3:50 COLOR-162 Comparing a spatial extension of ICtCp color representation with 10:50 am – 12:10 pm S-CIELAB and other recent color metrics for HDR and WCG quality Regency C assessment, Anustup Choudhury and Scott Daly, Dolby Laboratories, Inc. 10:50 COLOR-235 (United States) Individual differences in feelings about the color red, Yasuyo Ichihara, Kogakuin University (Japan) 4:10 COLOR-163 An improved optimisation method for finding a color filter to make a 11:10 COLOR-236 camera more colorimetric, Graham Finlayson and Yuteng Zhu, University Colors before and after cataract surgery: A study of color constancy of East Anglia (United Kingdom) and discrimination, John McCann, McCann Imaging (United States)

4:30 COLOR-164 11:30 COLOR-237 Random spray Retinex extensions considering ROI and eye movements Daltonization by spectral filtering,Phil Green and Peter Nussbaum, (JIST-first),Midori Tanaka1, Matteo Lanaro2, Takahiko Horiuchi1, and Norwegian University of Science and Technology (Norway) Alessandro Rizzi2; 1Chiba University (Japan) and 2Università degli Studi di 11:50 COLOR-238 Milano (Italy) Psychophysical evaluation of grey scale functions performance, Kwame 4:50 COLOR-165 Baah, University of the Arts London (United Kingdom) Teaching color and color science: The experience of an international Master course, Maurizio Rossi1, Alice Plutino2, Andrea Siniscalco1, and Alessandro Rizzi2; 1Politecnico di Milano and 2Università degli Studi di Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications Milano (Italy) Interactive Papers Oral Previews Session Chair: Gabriel Marcu, Apple Inc. (United States) 5:30 – 7:30 pm Symposium Demonstration Session 12:10 – 12:30 pm Wednesday, January 29, 2020 Regency C In this session interactive poster authors will each provide a brief oral overview of their poster presentation, which will be presented in the Color Color Halftoning Imaging XXV: Displaying, Processing, Hardcopy, and Applications 2020 Interactive Papers Session at 5:30 pm on Wednesday. Session Chair: Alessandro Rizzi, Università degli Studi di Milano (Italy) 12:10 COLOR-259 8:50 – 10:10 am Visual fidelity improvement in virtual reality through spectral textures Regency C applied to lighting simulations, Francisco Díaz-Barrancas, Halina Cwierz, 8:50 COLOR-195 Pedro José Pardo, Ángel Luis Pérez, and María Isabel Suero, University of New results for aperiodic, dispersed-dot halftoning, Jiayin Liu1, Altyngul Extremadura (Spain) Jumabayeva1, Yujian Xu1, Yin Wang1, Tal Frank2, Shani Gat2, Orel Bat 12:20 COLOR-260 2 2 3 1 1 Mor , Ben-Shoshan Yotam , Robert Ulichney , and Jan Allebach ; Purdue Application of spectral computing technics for color vision testing us- University (United States), 2HP Indigo (Israel), and 3HP Labs, HP Inc. (United ing virtual reality devices, Halina Cwierz, Francisco Díaz-Barrancas, States) Pedro José Pardo, Ángel Luis Pérez, and María Isabel Suero, University of 9:10 COLOR-196 Extremadura (Spain) Data bearing halftone image alignment and assessment on 3D surface, Ziyi Zhao1, Yujian Xu1, Robert Ulichney2, Matthew Gaubatz2, Stephen 12:30 – 2:00 pm Lunch Pollard3, and Jan Allebach1; 1Purdue University (United States), 2HP Labs, HP Inc. (United States), and 3HP Inc. UK Ltd. (United Kingdom)

9:30 COLOR-197 Using watermark visibility measurements to select an optimized pair of spot colors for use in a binary watermark, Alastair Reed1, Vlado Kitanovski2, Kristyn Falkenstern1, and Marius Pedersen2; 1Digimarc Corporation (United States) and 2Norwegian University of Science and COLOR Technology (Norway)

electronicimaging.org #EI2020 49 Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications

Thursday, January 30, 2020 PLENARY: VR/AR Future Technology

Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) Inkjet Printer Development and Diagnostic 2:00 – 3:10 pm Session Chair: Sophie Triantaphillidou, University of Westminster Grand Peninsula Ballroom D (United Kingdom) Quality Screen Time: Leveraging Computational Displays for Spatial 8:50 – 10:30 am Computing, Douglas Lanman, director, Display Systems Research, Regency C Facebook Reality Labs (United States) 8:50 COLOR-350 For abstract and speaker biography, see page 7 Developing an inkjet printer I: RGB image to CMY ink amounts -- Image preprocessing and color management, Yin Wang1, Baekdu Choi1, Daulet Kenzhebalin1, Sige Hu1, George Chiu1, Zillion Lin2, Davi He2, and Jan 3:10 – 3:30 pm Coffee Break Allebach1; 1Purdue University (United States) and 2Sunvalleytek International Inc. (China) Dark Side of Color 9:10 COLOR-351 Session Chair: Alessandro Rizzi, Università degli Studi di Milano (Italy) Developing an inkjet printer II: CMY ink amounts to multibit CMY half- tones, Baekdu Choi1, Daulet Kenzhebalin1, Sige Hu1, George Chiu1, Davi 3:30 – 5:10 pm He2, Zillion Lin2, and Jan Allebach1; 1Purdue University and 2Sunvalleytek Regency C International Inc. (United States) 3:30 COLOR-279 Increases in scattered light causes increased darkness, John McCann, 9:30 COLOR-352 McCann Imaging (United States) Developing an inkjet printer III: Multibit CMY halftones to hardware- ready bits, Sige Hu1, Daulet Kenzhebalin1, George Chiu1, Zillion Lin2, 3:50 COLOR-280 Davi He2, and Jan Allebach1; 1Purdue University (United States) and Do you see what I see?, Phil Green, Norwegian University of Science and 2Sunvalleytek International Inc. (China) Technology (Norway) 9:50 COLOR-349 4:10 COLOR-281 Developing an inkjet printer IV: Printer mechanism control for best print Replacing test charts with pictures, Sophie Triantaphillidou, Edward Fry, quality, Daulet Kenzhebalin1, Baekdu Choi1, Sige Hu1, George Chiu1, and Oliver van Zwanenberg, University of Westminster (United Kingdom) Zillion Lin2, Davi He2, and Jan Allebach1; 1Purdue University (United States) and 2Sunvalley Group (China) 4:30 COLOR-282 Colors challanges in navigating autonomous vehicles, Dietmar Wueller, 10:10 COLOR-353 Image Engineering GmbH & Co. KG (Germany) Using acoustic information to diagnose the health of a printer, Chin- Ning Chen1, Katy Ferguson2, Anton Wiranata2, Mark Shaw2, Wan-Eih 4:50 COLOR-283 Huang1, George Chiu1, Patricia Davis1, and Jan Allebach1; 1Purdue Does computer vision need color science?, Jan Allebach, Purdue University and 2HP Inc. (United States) University (United States) 10:30 – 11:00 am Coffee Break Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications Interactive Posters Session 5:30 – 7:00 pm Sequoia The Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications Conference works to be presented at the EI 2020 Symposium Interactive Posters Session are listed in the Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications Interactive Papers Oral Previews session just before Wednesday lunch.

5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Professionals Research COLOR

50 #EI2020 electronicimaging.org COIMG

Where Industry and Academia Meet to discuss Imaging Across Applications

Conference Chairs: Charles A. Bouman, Purdue Computational Imaging XVIII University (United States); Gregery T. Buzzard, Purdue University (United States); and Robert COIMG Conference overview Stevenson, University of Notre Dame (United More than ever before, computers and computation are critical to the image formation States) process. Across diverse applications and fields, remarkably similar imaging problems ap- pear, requiring sophisticated mathematical, statistical, and algorithmic tools. This conference Program Committee: Clem Karl, Boston Eric Miller, focuses on imaging as a marriage of computation with physical devices. It emphasizes the University (United States); Tufts University (United States); Joseph A. O’Sullivan, interplay between mathematical theory, physical models, and computational algorithms that Washington University in St. Louis (United enable effective current and future imaging systems. Contributions to the conference are States); Hector J. Santos-Villalobos, Oak Ridge solicited on topics ranging from fundamental theoretical advances to detailed system-level National Laboratory (United States); and Ken D. implementations and case studies. Sauer, University of Notre Dame (United States)

Special Sessions Computational Microscopy Special Session This year Computational Imaging hosts four special sessions on Algorithm/Hardware Organizers: Singanallur V. Venkatakrishnan, Co-Design for Computational Imaging, Computational Imaging Applications to Materials Oak Ridge National Laboratory (United States), and Ulugbek S. Kamilov, Washington University Characterization, Recent Progress in Computational Microscopy, and Optically-Coherent in St. Louis (United States) and Interferometric Imaging, presented by researchers from academia, national laborato- ries, and industry. Optically-Coherent and Interferometric Imaging Special Session Organizer: Casey Pellizzari, United States Air Force Academy (United States)

Computational Imaging Applications to Materials Characterization Special Session Organizers: Jeff Simmons, Air Force Research Laboratory (United States), and Stephen Niezgoda, The Ohio State University (United States)

Algorithm/Hardware Co-Design for Computational Imaging Special Session Organizers: Sergio Goma, Qualcomm Technologies Inc. (United States), and Hasib Saddiqui, Qualcomm Technologies Inc. (United States)

Conference Sponsor

electronicimaging.org #EI2020 51 ; 1 , Adrian 1 COIMG-059 COIMG-059 COIMG-047 COIMG-047 COIMG-058

He Sun Tampere University Tampere 2 , and Henry Arguello 2 #EI2020 electronicimaging.org

California Institute of Technology and California Institute of Technology 1 ; 1 , Karen Egiazarian 1 uctures with live-cell and 3D fluorescence 12:30 – 2:00 pm Lunch 3:10 – 3:30 pm Coffee Break , Laura Galvis 1 5:00 – 6:00 pm All-Conference Welcome Reception 5:00 – 6:00 pm All-Conference Welcome

, and Katherine Bouman 2 PANEL: The Future of Computational Imaging PANEL: Panel Moderator: Charles Bouman, Purdue University (United States) 4:10 – 4:50 pm Grand Peninsula B/C Panelists TBA. PLENARY: Frontiers in Computational Imaging Frontiers in Computational PLENARY: and Intel Corporation (United States), Session Chairs: Radka Tezaur, Inc. (United States) Jonathan Phillips, Google 2:00 – 3:10 pm Grand Peninsula Ballroom D Katie the First Picture of a Black Hole, Imaging the Unseen: Taking Computing and Mathematical Sciences Bouman, assistant professor, (United States) Department, California Institute of Technology see page 7 For abstract and speaker biography, Universidad Industrial de Santander (Colombia) and Universidad Industrial de Santander (Colombia) Harvard Medical School (United States) (Finland) 3:50 Scientific Imaging II Los Alamos National Laboratory (United Session Chair: Brendt Wohlberg, States) 3:30 – 4:10 pm Grand Peninsula B/C 3:30 Revealing subcellular str Fang Huang, Purdue University (United States) nanoscopy, Single-shot coded diffraction system for 3D object shape estimation, Samuel Pinilla 1 12:10 Learning imaging, optimal sampling for computational 2 Dalca - COIMG-005 COIMG-005 COIMG-007 COIMG-007 COIMG-044 COIMG-044 COIMG-043 COIMG-043 COIMG-006 COIMG-006 COIMG-008 COIMG-008 COIMG-046 COIMG-046 COIMG-045 COIMG-045 University of

1 ; 1

, and Anru Zhang 2

University of Chicago (United States) 2

, Rebecca Willett 1

10:10 – 10:50 am Coffee Break

8:50 9:30 11:10 Scientific Imaging I University (United States) Tufts Session Chair: Eric Miller, 52 Computational imaging in infrared sensing of the atmosphere, Ryan Rachlin, Corrie Smeaton, Charles Wynn, Adam Milstein, Yaron MIT Philip Chapnik, Steven Leman, and William Blackwell, Sullenberger, Lincoln Laboratory (United States) A joint reconstruction regularization technique and lambda tomography Eric Quinto, and Eric for energy-resolved x-ray imaging, James Webber, University (United States) Tufts Miller, Generalized tensor learning image de- with applications to 4D-STEM noising, Rungang Han 10:50 am – 12:30 pm Grand Peninsula B/C 10:50 Learned priors for the joint ptycho-tomography reconstruction, Selin Aslan, Argonne National Laboratory (United States) Integrating learned equi- data and image models through consensus Usman Ghani, Boston University Clem Karl and Muhammad librium, W. (United States) Proximal Newton Methods for x-ray imaging with non-smooth regulari Proximal Newton Methods for x-ray imaging Plug-and-play amP for image recovery with Fourier-structured opera- The Ohio State Rizwan Ahmad, and Philip Schniter, tors, Subrata Sarkar, University (United States) statistical A splitting-based iterative algorithm for GPU-accelerated Tanmay dual-energy x-ray CT reconstruction, Fangda Li, Ankit Manerikar, (United States) Kak, Purdue University Prakash, and Avinash Plug and Play Approaches (United States) Clem Karl, Boston University Session Chair: W. 8:45 – 10:10 am Grand Peninsula B/C 8:45 Conference Welcome 9:10 and Joseph O’Sullivan, Ge, Umberto Villa, Ulugbek Kamilov, zation, Tao University in St. Louis (United States) Washington COMPUTATIONAL IMAGING XVII IMAGING COMPUTATIONAL January 27, 2020 Monday,

9:50 Wisconsin-Madison and 11:50 11:30

COIMG Computational Imaging XVIII

Tuesday, January 28, 2020 Optically-Coherent and Interferometric Imaging II Session Chair: Casey Pellizzari, United States Air Force Academy (United 7:30 – 8:45 am Women in Electronic Imaging Breakfast; COIMG States) pre-registration required 10:50 – 11:30 am Grand Peninsula B/C KEYNOTE: Computation and Photography Optically-coherent and interferometric imaging sensors provide a means Session Chair: Charles Bouman, Purdue University (United States) to measure both the amplitude and phase of incoming light. These sensors depend on computational-based methods to convert real-valued intensity 8:50 – 9:30 am measurements into amplitude and phase information for image reconstruc- Grand Peninsula B/C tion. Additionally, computational methods have helped overcome many of COIMG-089 the practical issues associated with these sensors as well as enabled new Computation and photography: How the mobile phone became imaging modalities. This session explores the coupling between optically- a camera, Peyman Milanfar, principal scientist/director, Google Research (United States) coherent and interferometric sensors and the computational methods that enable and extend their use. Example topic areas include both coherent Biographies and/or abstracts for all keynotes are found on pages and incoherent holography, coherent lidar, microscopy, metrology, and 9–14 astronomy. 10:50 COIMG-125 Optically-Coherent and Interferometric Imaging I Holographic imaging through highly attenuating fog conditions, Abbie Watnik1, Samuel Park1, James Lindle2, and Paul Lebow3; 1United States Session Chair: Casey Pellizzari, United States Air Force Academy (United Naval Research Laboratory, 2DCS Corporation, and 3Alaire Technologies States) (United States)

9:30 – 10:30 am 11:10 COIMG-126 Grand Peninsula B/C Intensity interferometry-based 3D ranging, Fabian Wagner1, Florian Optically-coherent and interferometric imaging sensors provide a means Schiffers1, Florian Willomitzer1, Oliver Cossairt1, and Andreas Velten2; to measure both the amplitude and phase of incoming light. These sensors 1Northwestern University and 2University of Wisconsin-Madison (United depend on computational-based methods to convert real-valued intensity States) measurements into amplitude and phase information for image reconstruc- tion. Additionally, computational methods have helped overcome many of Phase Coherent Imaging the practical issues associated with these sensors as well as enabled new imaging modalities. This session explores the coupling between optically- Session Chair: Charles Bouman, Purdue University (United States) coherent and interferometric sensors and the computational methods that 11:30 am – 12:10 pm enable and extend their use. Example topic areas include both coherent Grand Peninsula B/C and incoherent holography, coherent lidar, microscopy, metrology, and 11:30 COIMG-146 astronomy. Constrained phase retrieval using a non-linear forward model for x-ray 9:30 COIMG-111 phase contrast tomography, K. Aditya Mohan, Jean-Baptiste Forien, and Spectral shearing LADAR, Jason Stafford1, David Rabb1, Kyle Watson2, Jefferson Cuadra, Lawrence Livermore National Laboratory (United States) Brett Spivey2, and Ryan Galloway3; 1United States Air Force Research Laboratory, 2JASR Systems, and 3Montana State University (United States) 11:50 COIMG-147 Multi-wavelength remote digital holography: Seeing the unseen by 9:50 COIMG-112 imaging off scattering surfaces and imaging through scattering media, 3D computational phase microscopy with multiple-scattering samples, Florian Willomitzer1, Prasanna Rangarajan2, Fengqiang Li1, Muralidhar Laura Waller1, Shwetadwip Chowdhury1, Michael Chen1, Yonghuan David Madabhushi Balaji2, and Oliver Cossairt1; 1Northwestern University and Ren1, Regina Eckert1, Michael Kellman1, and Eemrah Bostan2; 1University 2Southern Methodist University (United States) of California, Berkeley (United States) and 2University of Amsterdam (the Netherlands) Recent Progress in Computational Microscopy I 10:10 COIMG-113 Imaging through deep turbulence and emerging solutions, Mark Session Chair: Singanallur Venkatakrishnan, Oak Ridge National Spencer1, Casey Pellizzari2, and Charles Bouman3; 1Air Force Research Laboratory (United States) 2 3 Laboratory, United States Air Force Academy, and Purdue University 12:10 – 12:30 pm (United States) Grand Peninsula B/C Microscopy is currently experiencing an exciting era of new methodologi- 10:00 am – 7:30 pm Industry Exhibition - Tuesday cal developments with computation at its core. The recent progress in com- 10:10 – 10:50 am Coffee Break pressive imaging, numerical physical models, regularization techniques, large-scale optimization methods, and machine learning are leading to a faster, quantitative, and reliable microscopic imaging. Though many computational methods are being developed independently for different

electronicimaging.org #EI2020 53 Computational Imaging XVIII COIMG modalities, their combination may be seen as example of a new paradigm 4:50 COIMG-160 of rapid, comprehensive, and information-rich computational microscopy. 3D and 4D computational imaging of molecular orientation with mul- 1 2 This session will explore cross-cutting themes in several modalities such tiview polarized fluorescence microscopy, Talon Chandler , Min Guo , 3 2 1 1 as optical, neutron, x-ray, and electron microscopy and will attempt to Rudolf Oldenbourg , Hari Shroff , and Patrick La Riviere ; The University of Chicago, 2National Institutes of Health, and 3Marine Biological Laboratory promote transfer of ideas between investigators in these different areas. (United States) 12:10 COIMG-152 3D DiffuserCam: Computational microscopy with a lensless imager, Laura Waller, University of California, Berkeley (United States) DISCUSSION: Tuesday Tech Mixer

Hosts: Charles Bouman, Purdue University (United States); Gregery 12:30 – 2:00 pm Lunch Buzzard, Purdue University (United States); and Robert Stevenson, University of Notre Dame (United States) PLENARY: Automotive Imaging 5:10 – 5:40 pm Grand Peninsula B/C Session Chairs: Radka Tezaur, Intel Corporation (United States), and Computational Imaging Conference Tuesday wrap-up discussion and Jonathan Phillips, Google Inc. (United States) refreshments. 2:00 – 3:10 pm Grand Peninsula Ballroom D 5:30 – 7:30 pm Symposium Demonstration Session Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior vice president, hardware development, NVIDIA Corporation (United Wednesday, January 29, 2020 States) For abstract and speaker biography, see page 7 Medical Imaging

Session Chair: Evan Morris, Yale University (United States) 3:10 – 3:30 pm Coffee Break 8:50 – 10:10 am Grand Peninsula B/C Recent Progress in Computational Microscopy II 8:50 COIMG-191 Session Chair: Singanallur Venkatakrishnan, Oak Ridge National Model comparison metrics require adaptive correction if parameters Laboratory (United States) are discretized: Application to a transient neurotransmitter signal in PET data, 3:30 – 5:10 pm Heather Liu and Evan Morris, Yale University (United States) Grand Peninsula B/C 9:10 COIMG-192 Microscopy is currently experiencing an exciting era of new methodologi- Computational pipeline and optimization for automatic multimodal cal developments with computation at its core. The recent progress in com- reconstruction of marmoset brain histology, Brian Lee1, Meng Lin2, pressive imaging, numerical physical models, regularization techniques, Junichi Hata2, Partha Mitra3, and Michael Miller1; 1Johns Hopkins University large-scale optimization methods, and machine learning are leading to (United States), 2RIKEN Brain Science Institute (Japan), and 3Cold Spring a faster, quantitative, and reliable microscopic imaging. Though many Harbor Laboratory (United States) computational methods are being developed independently for different 9:30 COIMG-193 modalities, their combination may be seen as example of a new paradigm Model-based approach to more accurate stopping power ratio estima- of rapid, comprehensive, and information-rich computational microscopy. tion for proton therapy, Maria Medrano1, Jeffrey Williamson2, Bruce This session will explore cross-cutting themes in several modalities such Whiting3, David Politte4, Shuanyue Zhang1, Tyler Webb1, Tianyu Zhao4, as optical, neutron, x-ray, and electron microscopy and will attempt to Ruirui Liu4, Mariela Porras-Chaverri2, Tao Ge1, Rui Liao1, and Joseph promote transfer of ideas between investigators in these different areas. O’Sullivan1; 1Washington University in St. Louis (United States), 2University of Costa Rica (Costa Rica), 3University of Pittsburg (United States), and 3:30 COIMG-156 4Washington University School of Medicine (United States) Computational nanoscale imaging with synchrotron radiation, Doga Gursoy, Argonne National Laboratory (United States) 9:50 COIMG-194 Deep learning based regularized image reconstruction for respiratory 3:50 COIMG-157 gated PET, Tiantian Li1, Mengxi Zhang1, Wenyuan Qi2, Evren Asma2, and Recent advances in 3D structured illumination microscopy with reduced Jinyi Qi1; 1University of California, Davis and 2Canon Medical Research data-acquisition, Chrysanthe Preza, The University of Memphis (United (United States), Inc. (United States) States)

4:10 COIMG-158 10:00 am – 3:30 pm Industry Exhibition - Wednesday Method of moments for single-particle cryo-electron microscopy, Amit 10:10 – 10:50 am Coffee Break Singer, Princeton University (United States)

4:30 COIMG-159 Computational imaging in transmission electron microscopy: Atomic electron tomography and phase contrast imaging, Colin Ophus, Lawrence Berkeley National Laboratory (United States)

54 #EI2020 electronicimaging.org Computational Imaging XVIII

Computational Imaging Applications to Materials Characterization PLENARY: VR/AR Future Technology Session Chair: Jeffrey Simmons, Air Force Research Laboratory COIMG (United States) Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) 10:50 am – 12:30 pm Grand Peninsula B/C 2:00 – 3:10 pm Grand Peninsula Ballroom D Materials science, like physics, focuses on forward modeling almost exclu- sively for analysis. This creates opportunities for imaging scientists to make Quality Screen Time: Leveraging Computational Displays for Spatial significant advances by introducing modern, inversion-based methods for Computing, Douglas Lanman, director, Display Systems Research, Facebook Reality Labs (United States) analysis of microscope imagery. Materials Science emerged as a true ``sci- entific’’ discipline, with the development of microscopy because it allowed For abstract and speaker biography, see page 7 the materials scientist to observe the ``microstructure,’’ that is, the texture produced by the processes used for preparing the material. For this reason, 3:10 – 3:30 pm Coffee Break materials science and microscopy have always been intimately linked, with the major connection being microstructure as a means of control- Materials Imaging ling properties. Until quite recently materials characterization was largely ``photons-on-film.’’ With the digital transition of microscopy from film to Session Chair: David Castañón, Boston University (United States) data file, microscopy became a computational imaging problem. With the 3:30 – 4:10 pm automation of data collection, it became imperative to develop algorithms Grand Peninsula B/C requiring less human interaction. This session highlights recent advances in 3:30 COIMG-263 materials science as a direct consequence of cross-disciplinary approaches Mueller matrix imaging for classifying similar diffuse materials, Lisa Li, between computational imaging and materials science. This session cov- Meredith Kupinski, Madellyn Brown, and Russell Chipman, The University ers, but is not limited to, forward modeling of material-probe-detector inter- of Arizona (United States) actions, segmentation, anomaly detection, data fusion, denoising, learning 3:50 COIMG-264 approaches, detection and tracking, and super-resolution. Modeling multivariate tail behavior in materials data, Lucas Costa, 10:50 COIMG-247 Tomas Comer, Daniel Greiwe, Camilo Aguilar Herrera, and Mary Comer, Adversarial training incorporating physics-based regularization for Purdue University (United States) digital microstructure synthesis, Stephen Niezgoda, The Ohio State University (United States) Security Imaging 11:10 COIMG-248 Crystallographic symmetry for data augmentation in detecting dendrite Session Chair: David Castañón, Boston University (United States) cores, Lan Fu1, Hongkai Yu2, Megna Shah3, Jeffrey Simmons3, and Song 4:10 – 4:50 pm Wang1; 1University of South Carolina, 2University of Texas, and 3Air Force Grand Peninsula B/C Research Laboratory (United States) 4:10 COIMG-293 11:30 COIMG-249 A spectrum-adaptive decomposition method for effective atomic number Multi-resolution data fusion for super resolution imaging of biologi- estimation using dual energy CT, Ankit Manerikar, Fangda Li, Tanmay cal materials, Emma Reid1, Cheri Hampton2, Asif Mehmood2, Gregery Prakash, and Avinash Kak, Purdue University (United States) Buzzard1, Lawrence Drummy2, and Charles Bouman1; 1Purdue University and 2Air Force Research Laboratory (United States) 4:30 COIMG-294 Metal artifact reduction in dual-energy CT with synthesized monochro- 11:50 COIMG-250 matic basis for baggage screening, Sandamali Devadithya and David Void detection and fiber extraction for statistical characterization of Castañón, Boston University (United States) fiber-reinforced polymers, Camilo Aguilar Herrera and Mary Comer, Purdue University (United States) DISCUSSION: Wednesday Tech Mixer 12:10 COIMG-251 Applications of denoising, structure optimization, and deep learning Hosts: Charles Bouman, Purdue University (United States); Gregery in high resolution electron microscopy, Chenyu Zhang and Paul Voyles, Buzzard, Purdue University (United States); and Robert Stevenson, University of Wisconsin-Madison (United States) University of Notre Dame (United States) 4:50 – 5:30 pm 12:30 – 2:00 pm Lunch Grand Peninsula B/C Computational Imaging Conference Wednesday wrap-up discussion and refreshments.

electronicimaging.org #EI2020 55 Computational Imaging XVIII COIMG Computational Imaging XVIII Interactive Posters Session Algorithm/Hardware Co-Design for Computational Imaging

5:30 – 7:00 pm Session Chair: Sergio Goma, Qualcomm Technologies, Inc. (United States) Sequoia 10:50 am – 12:30 pm The following works will be presented at the EI 2020 Symposium Grand Peninsula B/C Interactive Posters Session. The aim of this session is to take computational imaging concepts a COIMG-305 step further and to set a stepping stone towards an optimal, technology Connected-tube MPP model for unsupervised 3D fiber detection, Tianyu dependent implementation of computational imaging: algorithm-hardware Li, Mary Comer, and Michael Sangid, Purdue University (United States) co-design. Complex algorithms thrive on clean data sets therefore sensors COIMG-306 that are designed in conjunction with supporting algorithms can offer Imaging through scattering media with a learning based prior, Florian significantly improved results. This session is soliciting original contributions Schiffers, Lionel Fiske, Pablo Ruiz, Aggelos K. Katsaggelos, and Oliver that relate to the joint design of sensors and/or technology in conjunction Cossairt, Northwestern University (United States) with algorithms.

COIMG-307 10:50 COIMG-355 Reconstruction of 2D seismic wavefields from nonuniformly sampled Estimation of the background illumination in optical reflectance sources, Laura Galvis1, Juan Ramirez1, Edwin Vargas1, Ofelia Villarreal2, microscopy, Charles Brookshire1, Michael Uchic2, Victoria Kramb1, Tyler William Agudelo3, and Henry Arguello1; 1Universidad Industrial de Lesthaeghe3, and Keigo Hirakawa1; 1University of Dayton, 2Air Force Santander, 2Cooperativa de Tecnólogos e Ingenieros de la Industria Research Laboratory, and 3University of Dayton Research Institute (United del Petróleo y Afines, TIP, and 3Instituto Colombiano del Petróleo, ICP, States) Ecopetrol S.A. (Colombia) 11:10 COIMG-357 Skin chromophore and melanin estimation from mobile im- 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session ages using constrained independent component analysis, Raja Bala1, 5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Luisa Polania2, Ankur Purwar3, Paul Matts4, and Martin Maltz5; 1Palo Alto Research Center (United States), 2Target Corporation (United States), Professionals Research 3Procter & Gamble (Singapore), 4Procter & Gamble (United Kingdom), and Thursday, January 30, 2020 5Xerox Corporation (United States) 11:30 COIMG-358 Computational imaging: Algorithm/hardware co-design considerations, Deep Learning in Computational Imaging Sergio Goma, Qualcomm Technologies, Inc. (United States) Session Chair: Gregery Buzzard, Purdue University (United States) 11:50 COIMG-359 8:50 – 10:10 am Statistical inversion methods in mobile imaging, Hasib Siddiqui, Grand Peninsula B/C Qualcomm Technologies, Inc. (United States) 8:50 COIMG-341 2D label free microscopy imaging analysis using machine learning, 12:10 – 2:00 pm Lunch Han Hu1, Yang Lei2, Daisy Xin2, Viktor Shkolnikov2, Steven Barcelo2, Jan Allebach1, and Edward Delp1; 1Purdue University and 2HP Labs, HP Inc. Computer Vision I (United States) Session Chair: Robert Stevenson, University of Notre Dame (United States) 9:10 COIMG-342 ProPaCoL-Net: A novel recursive stereo SR net with progressive parallax 2:00 – 3:00 pm coherency learning, Jeonghun Kim and Munchurl Kim, Korea Advanced Grand Peninsula B/C Institute of Science and Technology (Republic of Korea) 2:00 COIMG-377 Efficient multilevel architecture for depth estimation from a single 9:30 COIMG-343 image, Nilesh Pandey, Bruno Artacho, and Andreas Savakis, Rochester Deep learning method for height estimation of sorghum in the field us- Institute of Technology (United States) ing LiDAR, Matthew Waliman and Avideh Zakhor, University of California, Berkeley (United States) 2:20 COIMG-378 Sky segmentation for enhanced depth reconstruction and Bokeh 9:50 COIMG-344 rendering with efficient architectures, Tyler Nuanes1,2, Matt Elsey2, Radek Background subtraction in diffraction x-ray images using deep CNN, Grzeszczuk2, and John Shen1; 1Carnegie Mellon University and 2Light Gady Agam, Illinois Institute of Technology (United States) (United States)

10:10 – 10:50 am Coffee Break 2:40 COIMG-379 A dataset for deep image deblurring aided by inertial sensor data, Shuang Zhang, Ada Zhen, and Robert Stevenson, University of Notre Dame (United States)

3:00 – 3:30 pm Coffee Break

56 #EI2020 electronicimaging.org Computational Imaging XVIII

Computer Vision II

Session Chair: Robert Stevenson, University of Notre Dame (United States) COIMG 3:30 – 4:30 pm Grand Peninsula B/C 3:30 COIMG-390 On the distinction between phase images and two-view light field for PDAF of mobile imaging, Chi-Jui (Jerry) Ho and Homer Chen, National Taiwan University (Taiwan)

3:50 COIMG-391 Indoor layout estimation by 2D LiDAR and camera fusion, Jieyu Li and Robert Stevenson, University of Notre Dame (United States)

4:10 COIMG-392 Senscape: Modeling and presentation of uncertainty in fused sensor data live image streams, Henry Dietz and Paul Eberhart, University of Kentucky (United States)

electronicimaging.org #EI2020 57 Where Industry and Academia Meet ERVR to discuss Imaging Across Applications

Conference Chairs: Margaret Dolinsky, The Engineering Reality of Virtual Reality 2020 Indiana University (United States), and lan E. McDowall, Intuitive Surgical / Fakespace Conference overview Labs (United States) Virtual and augmented reality systems are evolving. In addition to research, the trend toward content building continues and practitioners find that technologies and disciplines must be Program Committee: Dirk Reiners, University of tailored and integrated for specific visualization and interactive applications. This confer- Arkansas at Little Rock (United States); Jürgen Schulze, ence serves as a forum where advances and practical advice toward both creative activity University of California, San Diego (United States); and Andrew Woods, Curtin

ERVR and scientific investigation are presented and discussed. Research results can be presented University (Australia) and applications can be demonstrated.

Highlights Early Wednesday morning ERVR will join in the Imaging Sensors and Systems Conference keynote, “Mixed-reality guided neuronavigation for non-invasive brain simulation treatment.” Mid-morning on Wednesday, ERVR is co-hosting a Joint Session on urban and enterprise applications of augmented reality with the Imaging and Multimedia Analytics in a Web and Mobile World 2020 Conference. Wednesday afternoon, ERVR is co-hosting the “Visualization Facilities” Joint Session with Stereoscopic Displays and Applications XXXI Conference.

On Thursday, the core ERVR conference sessions kick off with sessions exploring applications of augmented reality, immersive and virtual reality environments, and LiDAR sensor fusion.

58 #EI2020 electronicimaging.org

THE ENGINEERING REALITY OF VIRTUAL REALITY 2020

Wednesday, January 29, 2020 12:40 – 2:00 pm Lunch

PLENARY: VR/AR Future Technology

KEYNOTE: Imaging Systems and Processing Joint Session Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) Session Chairs: Kevin Matherson, Microsoft Corporation (United States) and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany) 2:00 – 3:10 pm Grand Peninsula Ballroom D ERVR 8:50 – 9:30 am Regency A Quality Screen Time: Leveraging Computational Displays for Spatial Computing, Douglas Lanman, director, Display Systems Research, This session is jointly sponsored by: The Engineering Reality of Virtual Facebook Reality Labs (United States) Reality 2020, Imaging Sensors and Systems 2020, and Stereoscopic Displays and Applications XXXI. For abstract and speaker biography, see page 7 ISS-189 Mixed reality guided neuronavigation for non-invasive brain stimu- 3:10 – 3:30 pm Coffee Break lation treatment, Christoph Leuze, research scientist in the Incubator for Medical Mixed and Extended Reality, Stanford University (United States) Visualization Facilities Joint Session Biographies and/or abstracts for all keynotes are found on pages Session Chairs: Margaret Dolinsky, Indiana University (United States) and 9–14 Andrew Woods, Curtin University (Australia) 3:30 – 4:10 pm Grand Peninsula D 10:00 am – 3:30 pm Industry Exhibition - Wednesday This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Stereoscopic Displays and Applications XXXI. 10:10 – 10:30 am Coffee Break 3:30 SD&A-265 Immersive design engineering, Bjorn Sommer, Chang Lee, and Savina Toirrisi, Royal College of Art (United Kingdom) Augmented Reality in Built Environments Joint Session Session Chairs: Raja Bala, PARC (United States) and Matthew Shreve, Palo 3:50 SD&A-266 Alto Research Center (United States) Using a random dot stereogram as a test image for 3D demonstrations, Andrew Woods, Wesley Lamont, and Joshua Hollick, Curtin University 10:30 am – 12:40 pm (Australia) Cypress B This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, and Imaging and Multimedia Analytics in a Web and KEYNOTE: Visualization Facilities Joint Session Mobile World 2020. 10:30 IMAWM-220 Session Chairs: Margaret Dolinsky, Indiana University (United States) Augmented reality assistants for enterprise, Matthew Shreve and Shiwali and Andrew Woods, Curtin University (Australia) Mohan, Palo Alto Research Center (United States) 4:10 – 5:10 pm Grand Peninsula D 11:00 IMAWM-221 Extra FAT: A photorealistic dataset for 6D object pose estimation, This session is jointly sponsored by: The Engineering Reality of Virtual Jianhang Chen1, Daniel Mas Montserrat1, Qian Lin2, Edward Delp1, and Reality 2020, and Stereoscopic Displays and Applications XXXI. Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States) ERVR-295 Social holographics: Addressing the forgotten human factor, Derek 11:20 IMAWM-222 Van Tonder, business development manager, and Andy McCutcheon, Space and media: Augmented reality in urban environments, Luisa global sales manager for Aerospace & Defence, Euclideon Caldas, University of California, Berkeley (United States) Holographics (Australia) 12:00 ERVR-223 Biographies and/or abstracts for all keynotes are found on pages Active shooter response training environment for a building evacua- 9–14 tion in a collaborative virtual environment, Sharad Sharma and Sri Teja Bodempudi, Bowie State University (United States)

12:20 ERVR-224 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session Identifying anomalous behavior in a building using HoloLens for emer- gency response, Sharad Sharma and Sri Teja Bodempudi, Bowie State 5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young University (United States) Professionals Research

electronicimaging.org #EI2020 59 The Engineering Reality of Virtual Reality 2020

Thursday, January 30, 2020 12:10 ERVR-364 CalAR: A C++ engine for augmented reality applications on Android mobile devices, Menghe Zhang, Karen Lucknavalai, Weichen Liu, and Flourishing Virtual & Augmented Worlds Jürgen Schulze, University of California, San Diego (United States)

Session Chairs: Margaret Dolinsky, Indiana University (United States) and 12:30 – 2:00 pm Lunch Ian McDowall, Intuitive Surgical / Fakespace Labs (United States) 8:45 – 10:10 am Developing Virtual Reality Regency A ERVR Session Chairs: Margaret Dolinsky, Indiana University (United States) and 8:45 Ian McDowall, Intuitive Surgical / Fakespace Labs (United States) Conference Welcome 2:00 – 3:00 pm 8:50 ERVR-337 Regency A Using virtual reality for spinal cord injury rehabilitation, Marina 2:00 ERVR-380 Ciccarelli, Susan Morris, Michael Wiebrands, and Andrew Woods, Curtin Development and evaluation of immersive educational system to University (Australia) improve driver’s risk prediction ability in traffic accident situation,Hiroto 1 2 2 2 9:10 ERVR-338 Suto , Xingguo Zhang , Xun Shen , Pongsathorn Raksincharoensak , and 1 1 2 Heads-up LiDAR imaging with sensor fusion, Yang Cai, CMU (United Norimichi Tsumura ; Chiba University and Tokyo University of Agriculture

States) and Technology (Japan) 2:20 ERVR-381 9:30 ERVR-339 WARHOL: Wearable holographic object labeler, Enhancing lifeguard training through virtual reality, Lucas Wright1, Lara Matthew Shreve, Bob Chunko2, Kelsey Benjamin3, Emmanuelle Hernandez-Morales4, Jack Miller5, Price, Les Nelson, Raja Bala, Jin Sun, and Srichiran Kumar, Palo Alto Melynda Hoover5, and Eliot Winer5; 1Hamilton College, 2University of Research Center (United States) 3 4 Colorado, Prairie View A&M University, University of Puerto Rico, and 2:40 ERVR-382 5 Iowa State University (United States) RaViS: Real-time accelerated view synthesizer for immersive video 6DoF VR, 9:50 ERVR-340 Daniele Bonatto, Sarah Fachada, and Gauthier Lafruit, Université Libre Transparent type virtual image display using small mirror array, Akane de Bruxelles (Belgium) Temochi and Tomohiro Yendo, Nagaoka University of Technology (Japan)

10:10 – 10:50 am Coffee Break

Experiencing Virtual Reality

Session Chairs: Margaret Dolinsky, Indiana University (United States) and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States) 10:50 am – 12:30 pm Regency A 10:50 ERVR-360 Designing a VR arena: Integrating virtual environments and physi- cal spaces for social, sensorial data-driven virtual experiences, Ruth West1, Eitan Mendelowitz2, Zach Thomas1, Christopher Poovey1, and Luke Hillard1; 1University of North Texas and 2Mount Holyoke College (United States)

11:10 ERVR-361 Leaving the windows open: Indeterminate situations through composite 360-degree photography, Peter Williams1 and Sala Wong2; 1California State University, Sacramento and 2Indiana State University (United States)

11:30 ERVR-362 User experience evaluation in virtual reality based on subjective feel- ings and physiological signals (JIST-first),YunFang Niu1, Danli Wang1, ZiWei Wang1, Fang Sun2, Kang Yue1, and Nan Zheng1; 1Institute of Automation, Chinese Academy of Sciences and 2Liaoning Normal University (China)

11:50 ERVR-363 Interactive multi-user 3D visual analytics in augmented reality, Wanze Xie1, Yining Liang1, Janet Johnson1, Andrea Mower2, Samuel Burns2, Colleen Chelini2, Paul D’Alessandro2, Nadir Weibel1, and Jürgen Schulze1; 1University of California, San Diego and 2PwC (United States)

60 #EI2020 electronicimaging.org Where Industry and Academia Meet FAIS to discuss Imaging Across Applications

Workshop Chairs: Mustafa Jaber, Food and Agricultural Imaging Systems 2020 NantOmics, LLC (United States); Grigorios Tsagkatakis, Institute of Computer Science, Workshop overview FORTH (Greece); and Mohammed Guaranteeing food security, understanding the impact of climate change in agriculture, Yousefhussien, General Electric Global quantifying the impact of extreme weather events on food production, and automating Research (United States) the process of food quality control are a few topics where modern imaging technologies can provide much needed solutions. This workshop welcomes contributions on innovative imaging systems, computer vision, machine/deep learning research, and augmented real- ity focusing on applications in food and agriculture. Workshop topics consider how novel imaging technologies can address issues related to the impact of climate change, handling and fusion of remote sensing and in-situ data, crop yield prediction, intelligent farming, and livestock management among others. Topics related to food and beverage industry that include food recognition, calorie estimation, food waste management (among others) are included.

Highlights The workshop hosts two guest speakers, Dr. Jan van Aardt, professor, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology (United States), and Kevin Lang, general manager, PrecisionHawk (United States). FAIS

Jan van Aardt obtained a BSc in forestry (biometry and silviculture specialization) from the University of Stellenbosch, Stellenbosch, South Africa (1996). He completed his MS and PhD in forestry, focused on remote sensing (imaging spectroscopy and light detection and ranging), at the Virginia Polytechnic Institute and State University, Blacksburg, Virginia (2000 and 2004, respectively). This was followed by post-doctoral work at the Katholieke Universiteit Leuven, Belgium, and a stint as research group leader at the Council for Scientific and Industrial Research, South Africa. Imaging spectroscopy and structural (lidar) sensing of natural resources form the core of his efforts, which vary between vegetation structural and system state (physiology) assessment. He has received funding from NSF, NASA, Google, and USDA, among others, and has published more than 70 peer-reviewed papers and more than 90 conference contributions. VanAardt is currently a professor in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology, New York.

Dr. van Aardt is speaking on, “Managing crops across spatial and temporal scales - the roles of UAS and satellite remote sensing.”

Kevin Lang is general manager of PrecisionHawk’s agriculture business (Raleigh, North Carolina). PrecisionHawk is a commercial drone and data company that uses aerial map- ping, modeling, and agronomy platform specifically designed for precision agriculture. Mr. Lang advises clients on how to capture value from aerial data collection, artificial intel- ligence and advanced analytics in addition to delivering implementation programs. Lang holds a BS in mechanical engineering from Clemson University and an MBA from Wake Forest University.

Kevin Lang is speaking on, “Practical applications and trends for UAV remote sensing in agriculture.”

electronicimaging.org #EI2020 61

FOOD AND AGRICULTURAL IMAGING SYSTEMS 2020

Tuesday, January 28, 2020 IMAWM-114 LambdaNet: A fully convolutional architecture for directional change 7:30 – 8:45 am Women in Electronic Imaging Breakfast; detection, Bryan Blakeslee and Andreas Savakis, Rochester Institute of pre-registration required Technology (United States)

Drone Imaging I Joint Session KEYNOTE: Remote Sensing in Agriculture I Joint Session Session Chairs: Andreas Savakis, Rochester Institute of Technology (United States) and Grigorios Tsagkatakis, Foundation for Research and Technology Session Chairs: Vijayan Asari, University of Dayton (United States) and (FORTH) (Greece) Mohammed Yousefhussien, General Electric Global Research (United States) 8:45 – 10:10 am Cypress B 10:50 – 11:40 am This session is jointly sponsored by: Food and Agricultural Imaging Systems Cypress B 2020, and Imaging and Multimedia Analytics in a Web and Mobile This session is jointly sponsored by: Food and Agricultural Imaging FAIS World 2020. Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020. 8:45 Conference Welcome FAIS-127 Managing crops across spatial and temporal scales - The roles of 8:50 IMAWM-084 UAS and satellite remote sensing, Jan van Aardt, professor, Chester F. A new training model for object detection in aerial images, Geng Yang1, Carlson Center for Imaging Science, Rochester Institute of Technology Yu Geng2, Qin Li1, Jane You3, and Mingpeng Cai1; 1Shenzhen Institute of (United States) Information Technology (China), 2Shenzhen Shangda Xinzhi Information Biographies and/or abstracts for all keynotes are found on pages Technology Co., Ltd. (China), and 3The Hong Kong Polytechnic University 9–14 (Hong Kong)

9:10 IMAWM-085 Small object bird detection in infrared drone videos using mask R-CNN deep learning, Yasmin Kassim1, Michael Byrne1, Cristy Burch2, Kevin Mote2, Jason Hardin2, and Kannappan Palaniappan1; 1University of 2 Missouri and Texas Parks and Wildlife (United States) KEYNOTE: Remote Sensing in Agriculture II Joint Session 9:30 IMAWM-086 Session Chairs: Vijayan Asari, University of Dayton (United States) and High-quality multispectral image generation using conditional GANs, Mohammed Yousefhussien, General Electric Global Research (United Ayush Soni, Alexander Loui, Scott Brown, and Carl Salvaggio, Rochester States) Institute of Technology (United States) 11:40 am – 12:30 pm 9:50 IMAWM-087 Cypress B Deep Ram: Deep neural network architecture for oil/gas pipeline right- This session is jointly sponsored by: Food and Agricultural Imaging of-way automated monitoring, Ruixu Liu, Theus Aspiras, and Vijayan Systems 2020, and Imaging and Multimedia Analytics in a Web and Asari, University of Dayton (United States) Mobile World 2020. FAIS-151 10:00 am – 7:30 pm Industry Exhibition - Tuesday Practical applications and trends for UAV remote sensing in agricul- 10:10 – 10:30 am Coffee Break ture, Kevin Lang, general manager, Agriculture, PrecisionHawk (United States) Biographies and/or abstracts for all keynotes are found on pages Drone Imaging II Joint Session 9–14 Session Chairs: Vijayan Asari, University of Dayton (United States) and Grigorios Tsagkatakis, Foundation for Research and Technology (FORTH) (Greece) 12:30 – 2:00 pm Lunch 10:30 – 10:50 am Cypress B This session is jointly sponsored by: Food and Agricultural Imaging Systems 2020, and Imaging and Multimedia Analytics in a Web and Mobile World 2020.

62 #EI2020 electronicimaging.org Food and Agricultural Imaging Systems 2020

Wednesday, January 29, 2020 PLENARY: Automotive Imaging

Session Chairs: Radka Tezaur, Intel Corporation (United States), and 10:00 am – 3:30 pm Industry Exhibition - Wednesday Jonathan Phillips, Google Inc. (United States) 10:10 – 11:00 am Coffee Break 2:00 – 3:10 pm Grand Peninsula Ballroom D 12:30 – 2:00 pm Lunch Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior vice president of hardware development, NVIDIA Corporation (United States) PLENARY: VR/AR Future Technology For abstract and speaker biography, see page 7 Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) 3:10 – 3:30 pm Coffee Break 2:00 – 3:10 pm Grand Peninsula Ballroom D Food and Agricultural Imaging Quality Screen Time: Leveraging Computational Displays for Spatial Session Chairs: Mustafa Jaber, NantVision Inc. (United States); Grigorios Computing, Douglas Lanman, director, Display Systems Research, Tsagkatakis, Foundation for Research and Technology (FORTH) (Greece); Facebook Reality Labs (United States) and Mohammed Yousefhussien, General Electric Global Research (United For abstract and speaker biography, see page 7 States) FAIS 3:30 – 5:30 pm 3:10 – 3:30 pm Coffee Break Regency B 3:30 FAIS-171 Food and Computer Vision Fish freshness estimation though analysis of multispectral images with convolutional neural networks, Grigorios Tsagkatakis1, Savas Session Chair: Mustafa Jaber, NantVision Inc. (United States) Nikolidakis2, Eleni Petra3, Argyris Kapantagakis2, Kriton Grigorakis2, 4:30 – 5:30 pm 4 4 1 1 George Katselis , Nikos Vlahos , and Panagiotis Tsakalides ; Foundation Cypress B 2 for Research and Technology (FORTH), Hellenic Centre for Marine This is a shared listing for a related Imaging and Multimedia Analytics in a Research, 3Athena Research & Innovation Center, and 4University of Patras Web and Mobile World 2020 Conference sesssion. Refer to the IMAWM (Greece) Conference program for details. 3:50 FAIS-172 Deep learning based fruit freshness classification and detec- 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session tion with CMOS image sensors and edge processors, Tejaswini 5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Ananthanarayana1,2, Ray Ptucha1, and Sean Kelly2; 1Rochester Institute of Technology and 2ON Semiconductor (United States) Professionals Research

4:10 FAIS-173 Smartphone imaging for color analysis of tomatoes, Katherine Carpenter and Susan Farnand, Rochester Institute of Technology (United States)

4:30 FAIS-174 Cattle identification and activity recognition by surveillance camera, Haike Guan, Naoki Motohashi, Takashi Maki, and Toshifumi Yamaai, Ricoh Company, Ltd. (Japan)

4:50 FAIS-175 High-speed imaging technology for online monitoring of food safety and quality attributes: Research trends and challenges, Seung-Chul Yoon, US Department of Agriculture-Agricultural Research Service (United States)

5:10 FAIS-176 A survey on deep learning in food imaging applications, Mustafa Jaber1, Grigorios Tsagkatakis2, and Mohammed Yousefhussien3; 1NantOmics (United States), 2Foundation for Research and Technology (FORTH) (Greece), and 3General Electric Global Research (United States)

5:30 – 7:30 pm Symposium Demonstration Session

electronicimaging.org #EI2020 63 Where Industry and Academia Meet HVEI to discuss Imaging Across Applications

Conference Chairs: Damon Chandler, Shizuoka Human Vision and Electronic Imaging 2020 University (Japan); Mark McCourt, North Dakota State University (United States); and Jeffrey Conference overview Mulligan, NASA Ames Research Center (United The conference on Human Vision and Electronic Imaging explores the role of human percep- States) tion and cognition in the design, analysis, and use of electronic media systems. It brings together researchers, technologists, and artists, from all over the world, for a rich and lively Program Committee: Albert Ahumada, NASA Kjell exchange of ideas. We believe that understanding the human observer is fundamental to Ames Research Center (United States); Brunnström, Acreo AB (Sweden); Claus- the advancement of electronic media systems, and that advances in these systems and Christian Carbon, University of Bamberg applications drive new research into the perception and cognition of the human observer. (Germany); Scott Daly, Dolby Laboratories, Inc. Every year, we introduce new topics through our Special Sessions, centered on areas driv- (United States); Huib de Ridder, Technische ing innovation at the intersection of perception and emerging media technologies. The HVEI Universiteit Delft (the Netherlands); Ulrich website (https://jbmulligan.github.io/HVEI/) includes additional information and updates. Engelke, Commonwealth Scientific and Industrial Research Organisation (Australia); Award Elena Fedorovskaya, Rochester Institute of Technology (United States); James Ferwerda, Best Paper Award Rochester Institute of Technology (United States); Jennifer Gille, Oculus VR (United States); Sergio HVEI Events Goma, Qualcomm Technologies, Inc. (United Daily End-of-Day Discussions States); Hari Kalva, Florida Atlantic University Wednesday evening HVEI Banquet and Talk (United States); Stanley Klein, University of California, Berkeley (United States); Patrick Le Callet, Université de Nantes (France); Lora Likova, Smith-Kettlewell Eye Research Institute (United States); Mónica López-González, La Petite Noiseuse Productions (United States); Laura McNamara, Sandia National Conference Sponsors Laboratories (United States); Thrasyvoulos Pappas, Northwestern University (United States); Adar Pelah, University of York (United Kingdom); Eliezer Peli, Schepens Eye Research Institute (United States); Sylvia Pont, Technische Universiteit Delft (the Netherlands); Judith Redi, Exact (the Netherlands); Hawley Rising, Consultant (United States); Bernice Rogowitz, Visual Perspectives (United States); Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne (Switzerland); Christopher Tyler, Smith- Kettlewell Eye Research Institute (United States); Andrew Watson, Apple Inc. (United States); and Michael Webster, University of Nevada, Reno (United States)

64 #EI2020 electronicimaging.org

HUMAN VISION AND ELECTRONIC IMAGING 2020

Monday, January 27, 2020 11:50 AVM-041 Object detection using an ideal observer model, Paul Kane and Orit Skorka, ON Semiconductor (United States)

Human Factors in Stereoscopic Displays Joint Session 12:10 AVM-042 Comparison of detectability index and contrast detection probability Session Chairs: Nicolas Holliman, University of Newcastle (United (JIST-first),Robin Jenkin, NVIDIA Corporation (United States) Kingdom), and Jeffrey Mulligan, NASA Ames Research Center (United States) 12:30 – 2:00 pm Lunch 8:45 – 10:10 am Grand Peninsula D This session is jointly sponsored by: Human Vision and Electronic Imaging PLENARY: Frontiers in Computational Imaging 2020, and Stereoscopic Displays and Applications XXXI. Session Chairs: Radka Tezaur, Intel Corporation (United States), and 8:45 Jonathan Phillips, Google Inc. (United States) Conference Welcome 2:00 – 3:10 pm 8:50 HVEI-009 Grand Peninsula Ballroom D HVEI Stereoscopic 3D optic flow distortions caused by mismatches between Imaging the Unseen: Taking the First Picture of a Black Hole, Katie image acquisition and display parameters (JIST-first),Alex Hwang and Bouman, assistant professor, Computing and Mathematical Sciences Eli Peli, Harvard Medical School (United States) Department, California Institute of Technology (United States) 9:10 HVEI-010 For abstract and speaker biography, see page 7 The impact of radial distortions in VR headsets on perceived surface slant (JIST-first),Jonathan Tong, Robert Allison, and Laurie Wilcox, York University (Canada) 3:10 – 3:30 pm Coffee Break

9:30 SD&A-011 Visual fatigue assessment based on multitask learning (JIST-first),Danli Perceptual Image Quality Joint Session Wang, Chinese Academy of Sciences (China) Session Chairs: Mohamed Chaker Larabi, Université de Poitiers (France), 9:50 SD&A-012 and Jeffrey Mulligan, NASA Ames Research Center (United States) Depth sensitivity investigation on multi-view glasses-free 3D display, Di 3:30 – 4:50 pm Zhang1, Xinzhu Sang2, and Peng Wang2; 1Communication University of Grand Peninsula A China and 2Beijing University of Posts and Telecommunications (China) This session is jointly sponsored by: Human Vision and Electronic Imaging 10:10 – 10:50 am Coffee Break 2020, and Image Quality and System Performance XVII. 3:30 IQSP-066 Perceptual quality assessment of enhanced images using a crowd- Predicting Camera Detection Performance Joint Session sourcing framework, Muhammad Irshad1, Alessandro Silva1,2, Sana Alamgeer1, and Mylène Farias1; 1University of Brasilia and 2IFG (Brazil) Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 10:50 am – 12:30 pm 3:50 IQSP-067 Perceptual image quality assessment for various viewing conditions Regency B and display systems, Andrei Chubarau1, Tara Akhavan2, Hyunjin Yoo2, This session is jointly sponsored by: Autonomous Vehicles and Machines Rafal Mantiuk3, and James Clark1; 1McGill University (Canada), 2IRYStec 2020, Human Vision and Electronic Imaging 2020, and Image Quality Software Inc. (Canada), and 3University of Cambridge (United Kingdom) and System Performance XVII. 10:50 AVM-038 4:10 HVEI-068 Describing and sampling the LED flicker signal,Robert Sumner, Imatest, Improved temporal pooling for perceptual video quality assessment LLC (United States) using VMAF, Sophia Batsi and Lisimachos Kondi, University of Ioannina (Greece) 11:10 IQSP-039 Demonstration of a virtual reality driving simulation platform, Mingming 4:30 HVEI-069 Wang and Susan Farnand, Rochester Institute of Technology (United States) Quality assessment protocols for omnidirectional video quality evalua- tion, Ashutosh Singla, Stephan Fremerey, Werner Robitza, and Alexander 11:30 AVM-040 Raake, Technische Universität Ilmenau (Germany) Prediction and fast estimation of contrast detection probability, Robin Jenkin, NVIDIA Corporation (United States) 5:00 – 6:00 pm All-Conference Welcome Reception

electronicimaging.org #EI2020 65 Human Vision and Electronic Imaging 2020

Tuesday, January 28, 2020 11:30 HVEI-130 Predicting single observer’s votes from objective measures using neural networks, Lohic Fotio Tiotsop1, Tomas Mizdos2, Miroslav Uhrina2, Peter 7:30 – 8:45 am Women in Electronic Imaging Breakfast; Pocta2, Marcus Barkowsky3, and Enrico Masala1; 1Politecnico di Torino pre-registration required (Italy), 2Zilina University (Slovakia), and 3Deggendorf Institute of Technology (DIT) (Germany) Video Quality Experts Group I Joint Session 11:50 HVEI-131 Session Chairs: Kjell Brunnström, RISE Acreo AB (Sweden), and Jeffrey A simple model for test subject behavior in subjective experiments, Mulligan, NASA Ames Research Center (United States) Zhi Li1, Ioannis Katsavounidis2, Christos Bampis1, and Lucjan Janowski3; 1Netflix, Inc. (United States),2 Facebook, Inc. (United States), and 3AGH 8:50 – 10:10 am University of Science and Technology (Poland) Grand Peninsula A This session is jointly sponsored by: Human Vision and Electronic Imaging 12:10 HVEI-132 2020, and Image Quality and System Performance XVII. Characterization of user generated content for perceptually-optimized video compression: Challenges, observations, and perspectives, Suiyi 8:50 HVEI-090 Ling1,2, Yoann Baveye1,2, Patrick Le Callet2, Jim Skinner3, and Ioannis The Video Quality Experts Group - Current activities and research, Katsavounidis3; 1CAPACITÉS (France), 2Université de Nantes (France), and Kjell Brunnström1,2 and Margaret Pinson3; 1RISE Acreo AB (Sweden), 2Mid 3Facebook, Inc. (United States) Sweden University (Sweden), and 3National Telecommunications and HVEI Information Administration, Institute for Telecommunications Sciences (United 12:30 – 2:00 pm Lunch States)

9:10 HVEI-091 Quality of experience assessment of 360-degree video, Anouk van PLENARY: Automotive Imaging Kasteren1,2, Kjell Brunnström1,3, John Hedlund1, and Chris Snijders2; 1RISE Session Chairs: Radka Tezaur, Intel Corporation (United States), and Research Institutes of Sweden AB (Sweden), 2University of Technology Jonathan Phillips, Google Inc. (United States) Eindhoven (the Netherlands), and 3Mid Sweden University (Sweden) 2:00 – 3:10 pm 9:30 HVEI-092 Grand Peninsula Ballroom D Open software framework for collaborative development of no reference image and video quality metrics, Margaret Pinson1, Philip Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior Corriveau2, Mikolaj Leszczuk3, and Michael Colligan4; 1US Department vice president, hardware development, NVIDIA Corporation (United of Commerce (United States), 2Intel Corporation (United States), States) 3AGH University of Science and Technology (Poland), and 4Spirent For abstract and speaker biography, see page 7 Communications (United States)

9:50 HVEI-093 3:10 – 3:30 pm Coffee Break Investigating prediction accuracy of full reference objective video qual- ity measures through the ITS4S dataset, Antonio Servetti, Enrico Masala, Image Quality Metrics oint ession and Lohic Fotio Tiotsop, Politecnico di Torino (Italy) J S Session Chair: Jonathan Phillips, Google Inc. (United States) 10:00 am – 7:30 pm Industry Exhibition - Tuesday 3:30 – 5:10 pm 10:10 – 10:50 am Coffee Break Grand Peninsula A This session is jointly sponsored by: Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII. Video Quality Experts Group II Joint Session 3:30 IQSP-166 Session Chair: Kjell Brunnström, RISE Acreo AB (Sweden) DXOMARK objective video quality measurements, Emilie Baudin, Laurent 10:50 am – 12:30 pm Chanas, and Frédéric Guichard, DXOMARK (France) Grand Peninsula A 3:50 IQSP-167 This session is jointly sponsored by: Human Vision and Electronic Imaging Analyzing the performance of autoencoder-based objective quality 2020, and Image Quality and System Performance XVII. metrics on audio-visual content, Helard Becerra1, Mylène Farias1, and 2 1 2 10:50 HVEI-128 Andrew Hines ; University of Brasilia (Brazil) and University College Quality evaluation of 3D objects in mixed reality for different lighting Dublin (Ireland) conditions, Jesús Gutiérrez, Toinon Vigier, and Patrick Le Callet, Université 4:10 IQSP-168 de Nantes (France) No reference video quality assessment with authentic distortions using 1 11:10 HVEI-129 3-D deep convolutional neural network, Roger Nieto , Hernan Dario A comparative study to demonstrate the growing divide between 2D Benitez Restrepo1, Roger Figueroa Quintero1, and Alan Bovik2; 1Pontificia and 3D gaze tracking quality, William Blakey1,2, Navid Hajimirza1, and University Javeriana, Cali (Colombia) and 2The University of Texas at Austin Naeem Ramzan2; 1Lumen Research Limited and 2University of the West of (United States) Scotland (United Kingdom)

66 #EI2020 electronicimaging.org Human Vision and Electronic Imaging 2020

4:30 IQSP-169 11:10 HVEI-234 Quality aware feature selection for video object tracking, Roger Nieto1, Psychophysics study on LED flicker artefacts for automotive digital mir- Carlos Quiroga2, Jose Ruiz-Munoz3, and Hernan Benitez-Restrepo1; ror replacement systems, Nicolai Behmann and Holger Blume, Leibniz 1Pontificia University Javeriana, Cali (Colombia),2 Universidad del Valle University Hannover (Germany) (Colombia), and 3University of Florida (United States) 12:30 – 2:00 pm Lunch 4:50 IQSP-170 Studies on the effects of megapixel sensor resolution on displayed im- age quality and relevant metrics, Sophie Triantaphillidou1, Jan Smejkal1, PLENARY: VR/AR Future Technology Edward Fry1, and Chuang Hsin Hung2; 1University of Westminster (United Kingdom) and 2Huawei (China) Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States)

DISCUSSION: HVEI Tuesday Wrap-up Q&A 2:00 – 3:10 pm Grand Peninsula Ballroom D Session Chairs: Damon Chandler, Shizuoka University (Japan); Mark Quality Screen Time: Leveraging Computational Displays for Spatial McCourt, North Dakota State University (United States); and Jeffrey Computing, Douglas Lanman, director, Display Systems Research, Mulligan, NASA Ames Research Center (United States) Facebook Reality Labs (United States) 5:10 – 5:40 pm For abstract and speaker biography, see page 7 Grand Peninsula A

5:30 – 7:30 pm Symposium Demonstration Session

3:10 – 3:30 pm Coffee Break HVEI

Wednesday, January 29, 2020 Faces in Art / Human Feature Use

Session Chair: Mark McCourt, North Dakota State University (United Image Processing and Perception States)

Session Chair: Damon Chandler, Shizuoka University (Japan) 3:30 – 4:10 pm Grand Peninsula A 9:10 – 10:10 am Grand Peninsula A 3:30 HVEI-267 Conventions and temporal differences in painted faces: A study of pos- 9:10 HVEI-208 ture and color distribution, Mitchell van Zuijlen, Sylvia Pont, and Maarten Neural edge integration model accounts for the staircase-Gelb and Wijntjes, Delft University of Technology (the Netherlands) scrambled-Gelb effects in lightness perception, Michael Rudd, University of Washington (United States) 3:50 HVEI-268 Biological and biomimetic perception: A comparative study through 9:30 HVEI-209 gender recognition from human gait (JPI-pending), Viswadeep Sarangi1, Influence of texture structure on the perception of color composition (JPI- Adar Pelah1, William Hahn2, and Elan Barenholtz2; 1University of York 1 2 3 4 first),Jing Wang , Jana Zujovic , June Choi , Basabdutta Chakraborty , (United Kingdom) and 2Florida Atlantic University (United States) Rene van Egmond5, Huib de Ridder5, and Thrasyvoulos Pappas1; 1Northwestern University (United States), 2Google, Inc. (United States), 3Accenture (United States), 4Amway (United States), and 5Delft University of DISCUSSION: HVEI Wednesday Wrap-up Q&A Technology (the Netherlands) Session Chairs: Damon Chandler, Shizuoka University (Japan); Mark 9:50 HVEI-210 McCourt, North Dakota State University (United States); and Jeffrey Evaluation of tablet-based methods for assessment of contrast sensitiv- Mulligan, NASA Ames Research Center (United States) ity, Jeffrey Mulligan, NASA Ames Research Center (United States) 4:10 – 5:00 pm Grand Peninsula A 10:00 am – 3:30 pm Industry Exhibition - Wednesday

10:10 – 10:50 am Coffee Break 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Psychophysics and LED Flicker Artifacts Joint Session Professionals Research Session Chair: Jeffrey Mulligan, NASA Ames Research Center (United States) 2020 Friends of HVEI Banquet 10:50 – 11:30 am 7:00 – 10:00 pm Regency B Offsite Restaurant This session is jointly sponsored by: Autonomous Vehicles and Machines This annual event brings the HVEI community together for great food 2020, and Human Vision and Electronic Imaging 2020. and convivial conversation. The presenter is Prof. Bruno Olshausen (UC 10:50 HVEI-233 Berkeley), speaking on “Perception as inference.” See the Keynotes section Predicting visible flicker in temporally changing images,Gyorgy Denes for details. Registration required, online or at the registration desk. Location and Rafal Mantiuk, University of Cambridge (United Kingdom) will be provided with registration.

electronicimaging.org #EI2020 67 Human Vision and Electronic Imaging 2020

Thursday, January 30, 2020 Multisensory and Crossmodal Interactions II Session Chair: Lora Likova, Smith-Kettlewell Eye Research Institute (United States) KEYNOTE: Multisensory and Crossmodal Interactions 2:00 – 3:00 pm Session Chair: Lora Likova, Smith-Kettlewell Eye Research Institute Grand Peninsula A (United States) 2:00 HVEI-383 9:10 – 10:10 am Auditory capture of visual motion: Effect of audio-visual stimulus onset Grand Peninsula A asynchrony, Mark McCourt, Emily Boehm, and Ganesh Padmanabhan, HVEI-354 North Dakota State University (United States) Multisensory interactions and plasticity – Shooting hidden assump- 2:20 HVEI-384 tions, revealing postdictive aspects, Shinsuke Shimojo, professor and Auditory and audiovisual processing in visual cortex, Jessica Green, principle investigator, California Institute of Technology (United States) University of South Carolina (United States) Biographies and/or abstracts for all keynotes are found on pages 9–14 2:40 HVEI-385 Perception of a stable visual environment during head motion depends on motor signals, Paul MacNeilage, University of Nevada, Reno (United States) HVEI 10:10 – 10:50 am Coffee Break 3:00 – 3:30 pm Coffee Break Multisensory and Crossmodal Interactions I Multisensory and Crossmodal Interactions III Session Chair: Mark McCourt, North Dakota State University (United States) Session Chair: Mark McCourt, North Dakota State University (United States) 10:50 am – 12:30 pm Grand Peninsula A 3:30 – 5:00 pm 10:50 HVEI-365 Grand Peninsula A Multisensory contributions to learning face-name associations, Carolyn 3:30 HVEI-393 Murray, Sarah May Tarlow, and Ladan Shams, University of California, Los Multisensory aesthetics: Visual, tactile and auditory preferences for Angeles (United States) fractal-scaling characteristics, Branka Spehar, University of New South Wales (Australia) 11:10 HVEI-366 Face perception as a multisensory process, Lora Likova, Smith-Kettlewell 3:50 HVEI-394 Eye Research Institute (United States) Introducing Vis+Tact(TM) iPhone app, Jeannette Mahoney, Albert Einstein College of Medicine (United States) 11:30 HVEI-367 Changes in auditory-visual perception induced by partial vision loss: 4:10 HVEI-395 Use of novel multisensory illusions, Noelle Stiles1,2, Armand Tanguay2,3, An accelerated Minkowski summation rule for multisensory cue com- Ishani Ganguly2, Carmel Levitan4, and Shinsuke Shimojo2; 1Keck School bination, Christopher Tyler, Smith-Kettlewell Eye Research Institute (United of Medicine, University of Southern California, 2California Institute of States) Technology, 3University of Southern California, and 4Occidental College (United States) 4:30 Multisensory Discussion 11:50 HVEI-368 Multisensory temporal processing in early deaf individuals, Fang Jiang, University of Nevada, Reno (United States)

12:10 HVEI-369 Inter- and intra-individual variability in multisensory integration in autism spectrum development: A behavioral and electrophysiological study, Clifford Saron1, Yukari Takarae2, Iman Mohammadrezazadeh3, and Susan Rivera1; 1University of California, Davis, 2University of California, San Diego, and 3HRL Laboratories (United States)

12:30 – 2:00 pm Lunch

68 #EI2020 electronicimaging.org IPAS Where Industry and Academia Meet to discuss Imaging Across Applications

Conference Chairs: Sos S. Agaian,CSI City Image Processing: Algorithms and Systems XVIII University of New York and The Graduate Center (CUNY) (United States); Karen 0. Conference overview Egiazarian, Tampere University (Finland); and Image Processing: Algorithms and Systems continues the tradition of the past conference Atanas P. Gotchev, Tampere University (Finland) Nonlinear Image Processing and Pattern Analysis in exploring new image processing al- gorithms. It also reverberates the growing call for integration of the theoretical research on Program Committee: Gözde Bozdagi Akar, Junior image processing algorithms with the more applied research on image processing systems. Middle East Technical University (Turkey); Barrera, Universidad de São Paulo (Brazil); Jenny Benois-Pineau, Bordeaux University (France); Specifically, the conference aims at highlighting the importance of the interaction between Giacomo Boracchi, Politecnico di Milano (Italy); linear, nonlinear, and transform-based approaches for creating sophisticated algorithms and Reiner Creutzburg, Technische Hochschule building modern imaging systems for new and emerging applications. Brandenburg (Germany); Alessandro Foi, Tampere University of Technology (Finland); Paul Award D. Gader, University of Florida (United States); Best Paper John C. Handley, University of Rochester (United States); Vladimir V. Lukin, National Aerospace University (Ukraine); Vladimir Marchuk, Don State Technical University (Russian Federation); Alessandro Neri, Radiolabs (Italy); Marek R. Ogiela, AGH University of Science and Technology (Poland); Ljiljana Platisa, Universiteit Gent (Belgium); Françoise Prêteux, Ecole des Ponts ParisTech (France); Giovanni Ramponi, University degli Studi di Trieste (Italy); Ivan W. Selesnick, Polytechnic Institute of New York University (United States); and Damir Sersic, University of Zagreb (Croatia) IPAS

electronicimaging.org #EI2020 69

IMAGE PROCESSING: ALGORITHMS AND SYSTEMS XVII

Monday, January 27, 2020 Medical Image Processing Session Chairs: Sos Agaian, CSI City University of New York and The 10:10 – 10:45 am Coffee Break Graduate Center (CUNY) (United States), and Atanas Gotchev, Tampere University (Finland) Image Processing with Machine Learning 3:30 – 4:30 pm Session Chairs: Sos Agaian, CSI City University of New York and The Harbour A/B Graduate Center (CUNY) (United States); Karen Egiazarian, Tampere 3:30 IPAS-062 University (Finland); and Atanas Gotchev, Tampere University (Finland) An active contour model for medical image segmentation using a 1,2 1,2 10:45 am – 12:30 pm quaternion framework, Viacheslav Voronin , Evgeny Semenishchev , 2 2 3 1 Harbour A/B Alexander Zelenskii , Marina Zhdanova , and Sos Agaian ; Don State Technical University (Russian Federation), 2Moscow State University of 10:45 Technology “STANKIN” (Russian Federation), and 3CSI City University of Conference Welcome New York and The Graduate Center (CUNY) (United States)

10:50 IPAS-025 3:50 IPAS-063 Pruning neural networks via gradient information, Pavlo Molchanov, Improving 3D medical image compression efficiency using spatiotem- NVIDIA Corporation (United States) poral coherence, Matina Zerva, Michalis Vrigkas, Lisimachos Kondi, and

11:10 IPAS-026 Christophoros Nikou, University of Ioannina (Greece) Real-world fence removal from a single-image via deep neural 4:10 IPAS-064 network, Takuro Matsui, Takuro Yamaguchi, and Masaaki Ikehara, Keio Pathology image-based lung cancer subtyping using deep-learning University (Japan) features and cell-density maps, Mustafa Jaber1, Christopher Szeto2, Bing 2 3 2 2 11:30 IPAS-027 Song , Liudmila Beziaeva , Stephen Benz , and Shahrooz Rabizadeh ; 1 2 3 IPAS Adaptive context encoding module for semantic segmentation, NantOmics, ImmunityBio, and NantHealth (United States) Congcong Wang1, Faouzi Alaya Cheikh1, Azeddine Beghdadi2, and Ole Jakob Elle3,4; 1Norwegian University of Science and Technology (Norway), 5:00 – 6:00 pm All-Conference Welcome Reception 2Universite Paris 13 (France), 3Oslo University Hospital (Norway), and 4University of Oslo (Norway) Tuesday, January 28, 2020 11:50 IPAS-028 CNN-based classification of degraded images,Kazuki Endo1, Masayuki 7:30 – 8:45 am Women in Electronic Imaging Breakfast; Tanaka1,2, and Masatoshi Okutomi1; 1Tokyo Institute of Technology and pre-registration required 2National Institute of Advanced Industrial Science and Technology (Japan)

12:10 IPAS-029 Scene Understanding A deep learning-based approach for defect detection and removing on Session Chairs: Sos Agaian, CSI City University of New York and The 1 1,2 1 archival photos, Roman Sizyakin , Viacheslav Voronin , Nikolay Gapon , Graduate Center (CUNY) (United States), and Karen Egiazarian, Tampere 1,2 2 1 Evgeny Semenishchev , and Alexander Zelenskii ; Don State Technical University (Finland) University and 2Moscow State University of Technology “STANKIN” (Russian Federation) 8:50 – 10:10 am Harbour A/B 12:30 – 2:00 pm Lunch 8:50 IPAS-094 Two-step cascading algorithm for camera-based night fire detection, Minji Park, Donghyun Son, and ByoungChul Ko, Keimyung University PLENARY: Frontiers in Computational Imaging (Republic of Korea)

Session Chairs: Radka Tezaur, Intel Corporation (United States), and 9:10 IPAS-095 Jonathan Phillips, Google Inc. (United States) Introducing scene understanding to person re-identification using a spa- 2:00 – 3:10 pm tio-temporal multi-camera model, Xin Liu, Herman Groot, Egor Bondarev, Grand Peninsula Ballroom D and Peter de With, Eindhoven University of Technology (the Netherlands)

Imaging the Unseen: Taking the First Picture of a Black Hole, Katie 9:30 IPAS-096 Bouman, assistant professor, Computing and Mathematical Sciences Use of retroreflective markers for object detection in harsh sensing Department, California Institute of Technology (United States) conditions, Laura Goncalves Ribeiro, Olli Suominen, Sari Peltonen, and For abstract and speaker biography, see page 7 Atanas Gotchev, Tampere University (Finland)

70 #EI2020 electronicimaging.org 12:10 Aerospace University(Ukraine) 11:50 electronicimaging.org #EI2020 electronicimaging.org visual qualitymetrics,MykolaPonomarenko An expandableimagedatabaseforevaluationoffull-reference François Pitié,andAnilKokaram,Trinity College(Ireland) Per clipLagrangianmultiplieroptimisationforHEVC, of Korea) Han, SoonyoungHong,andMoonGiKang,Yonsei University(Republic Computational colorconstancyundermultiplelightsources, Yonsei University(RepublicofKorea) color filterarray,KyeonghoonJeong,JonghyunKim,andMoonGiKang, Color interpolationalgorithmforaperiodicwhite-dominantRGBW space, JihoYoon andChulheeLee,Yonsei University(RepublicofKorea) distancewithadjustableblock Edge detectionusingtheBhattacharyya 10:50 Harbour A/B 10:50 am–12:30pm University (Finland);andAtanasGotchev, Tampere University(Finland) Graduate Center(CUNY)(UnitedStates);KarenEgiazarian,Tampere Session Chairs:SosAgaian,CSICityUniversityofNewYork andThe Image andVideoProcessing and GoogLeNet,GuoanYang, Xi’anJiaotongUniversity(China) A novelimagerecognitionapproachusingmultiscalesaliencymodel 9:50 Image Processing:AlgorithmsandSystemsXVIII 11:10 Lukin 11:30 For abstractandspeakerbiography, seepage7 States) vice president,hardwaredevelopment,NVIDIACorporation(United Imaging intheAutonomousVehicle Hicok,senior Revolution,Gary Grand PeninsulaBallroomD 2:00 –3:10pm Jonathan Phillips,GoogleInc.(UnitedStates) Session Chairs:RadkaTezaur, IntelCorporation(UnitedStates),and PLENARY: AutomotiveImaging 2 , andKarenEgiazarian

10:00 am – 7:30 pm Industry Exhibition-Tuesday10:00 am–7:30pmIndustry 10:10 – 10:50 am Coffee Break 10:10 –10:50amCoffee 3:10 – 3:30 pm Coffee Break 3:10 –3:30pmCoffee 12:30 –2:00pmLunch 1 ;

1 Tampere University(Finland)and

1 , OlegIeremeiev Daniel Ringis,

2 Jaeduk , Vladimir 2 National IPAS-137 IPAS-136 IPAS-133 IPAS-097 IPAS-134 IPAS-135

3:50 4:30 Color restorationofmultispectralimages:Near Tampere University(Finland) Ponomarenko, KarenEgiazarian,IgorShevkunov, andPeterKocsis, domain BM3D(CCDBM3D)algorithm,VladimirKatkovnik,Mykola Hyperspectral complex-domainimagedenoising:Cubecomplex- States) Gadgil, QingSong,andGuan-MingSu,DolbyLaboratories(United Image debandingusingiterativeadaptivesparsefiltering, ellite images(JIST-first),ThaweesakTrongtirakul Fractional contraststretchingforimageenhancementofaerialsandsat 3:30 Harbour A/B 3:30 –5:30pm Atanas Gotchev, Tampere University(Finland) Session Chairs:KarenEgiazarian,Tampere University(Finland),and Image FilteringandEnhancement graphs, ZhaoGao correctioninphoto- OEC-CNN: Asimplemethodforover-exposure Masaaki Ikehara,KeioUniversity(Japan) convolutional neuralnetwork,Takahiro Kudo,Takanori Fujisawa,and Non-blind imagedeconvolutionbasedon“ringing”removalusing 1 5:10 4:50 4:10 and SosAgaian (CUNY) (UnitedStates) Sos Agaian (United States) (Thailand) and and color (RGB)image,ThaweesakTrongtirakul Loughborough Universityand 2 CSI CityUniversityofNewYork andTheGraduate Center(CUNY)

5:30 –7:30pmSymposiumDemonstrationSession 2 ; 1

King Mongkut’s UniversityofTechnology Thonburi(Thailand) 2 CSI CityUniversityofNewYork andTheGraduateCenter 2 ; 1 1 King Mongkut’s UniversityofTechnology Thonburi , EranEdirisinghe

2 ARM Limited(UnitedKingdom) 1 , andViacheslavChesnokov

1 , Werapon Chiracharit 1 , Werapon Chiracharit -infrared (NIR)filter Neeraj

2 1 IPAS-178 IPAS-180 IPAS-177 IPAS-182 IPAS-181 IPAS-179 ; , and -to- 1 71 , -

IPAS Image Processing: Algorithms and Systems XVIII

Wednesday, January 29, 2020

10:00 am – 3:30 pm Industry Exhibition - Wednesday

10:10 – 11:00 am Coffee Break

12:30 – 2:00 pm Lunch

PLENARY: VR/AR Future Technology

Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) 2:00 – 3:10 pm Grand Peninsula Ballroom D Quality Screen Time: Leveraging Computational Displays for Spatial Computing, Douglas Lanman, director, Display Systems Research, Facebook Reality Labs (United States) For abstract and speaker biography, see page 7

3:10 – 3:30 pm Coffee Break

Image Processing: Algorithms and Systems XVIII Interactive Papers Session 5:30 – 7:00 pm Sequoia IPAS The following works will be presented at the EI 2020 Symposium Interactive Papers Session. IPAS-310 Comparing training variability of CNN and optimal linear data reduction on image textures, Khalid Omer, Luca Caucci, and Meredith Kupinski, The University of Arizona (United States)

IPAS-311 Elastic graph-based semi-supervised embedding with adaptive loss re- gression, Fadi Dornaika and Youssof El Traboulsi, University of the Basque Country (Spain)

IPAS-312 Generative adversarial networks: A short review, Habib Ullah1, Sultan Daud Khan1, Mohib Ullah2, and Faouzi Alaya Cheikh2; 1University of Ha’il (Saudi Arabia) and 2Norwegian University of Science and Technology (Norway)

IPAS-313 Multiscale convolutional descriptor aggregation for visual place recogni- tion, Raffaele Imbriaco, Egor Bondarev, and Peter de With, Eindhoven University of Technology (the Netherlands)

5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Professionals Research

72 #EI2020 electronicimaging.org Where Industry and Academia Meet to discuss Imaging Across Applications IQSP

Conference Chairs: Nicolas Bonnier, Apple Inc. Image Quality and System Performance XVII (United States); and Stuart Perry, University of Technology Sydney (Australia) Conference overview We live in a visual world. The perceived quality of images is of crucial importance in Program Committee: Alan Bovik, University industrial, medical, and entertainment application environments. Developments in camera of Texas at Austin (United States); Peter sensors, image processing, 3D imaging, display technology, and digital printing are ena- Burns, Burns Digital Imaging (United States); Brian Cooper, bling new or enhanced possibilities for creating and conveying visual content that informs or Lexmark International, Inc. (United States); Luke Cui, Amazon (United entertains. Wireless networks and mobile devices expand the ways to share imagery and States); Mylène Farias, University of Brasilia autonomous vehicles bring image processing into new aspects of society. (Brazil); Susan Farnand, Rochester Institute of Technology (United States); Frans Gaykema, The power of imaging rests directly on the visual quality of the images and the performance Océ Technologies B.V. (the Netherlands); Jukka of the systems that produce them. As the images are generally intended to be viewed by Häkkinen, University of Helsinki (Finland); Dirk humans, a deep understanding of human visual perception is key to the effective assessment Hertel, E Ink Corporation (United States); Robin of image quality. Jenkin, NVIDIA Corporation (United States); Elaine Jin, NVIDIA Corporation (United States); Mohamed-Chaker Larabi, University of Poitiers This conference brings together engineers and scientists from industry and academia who (France); Göte Nyman, University of Helsinki strive to understand what makes a high-quality image, and how to specify the requirements (Finland); Jonathan Phillips, Google Inc. (United and assess the performance of modern imaging systems. It focuses on objective and sub- States); Sophie Triantaphillidou, University of jective methods for evaluating the perceptual quality of images, and includes applications Westminster (United Kingdom); and Clément throughout the imaging chain from image capture, through processing, to output, printed or Viard, DxOMark Image Labs (United States) displayed, video or still, 2D or 3D, virtual, mixed or augmented reality, LDR or HDR.

Awards Best Student Paper Conference Sponsor Best Paper IQSP

electronicimaging.org #EI2020 73

-

, 2 ession HVEI-068 HVEI-068 HVEI-069 HVEI-069 AVM-042 AVM-042 AVM-040 AVM-040 AVM-041 AVM-041 IQSP-066 IQSP-066 IQSP-067 IQSP-067 S IRYStec IRYStec , Robin 2 , Sana oint J 1,2

IFG (Brazil) 2 , Hyunjin Yoo 2

#EI2020 electronicimaging.org

, Alessandro Silva 1 , Tara Akhavan , Tara 1 McGill University (Canada), 1 University of Brasilia and ; 1 1 ; 1 University of Cambridge (United Kingdom) 3

12:30 – 2:00 pm Lunch 3:10 – 3:30 pm Coffee Break , and James Clark 3 5:00 – 6:00 pm All-Conference Welcome Reception 5:00 – 6:00 pm All-Conference Welcome , and Mylène Farias 1

For abstract and speaker biography, see page 7 For abstract and speaker biography, PLENARY: Frontiers in Computational Imaging Frontiers PLENARY: Intel Corporation (United States), and Session Chairs: Radka Tezaur, Jonathan Phillips, Google Inc. (United States) 2:00 – 3:10 pm Grand Peninsula Ballroom D Katie the First Picture of a Black Hole, Imaging the Unseen: Taking Computing and Mathematical Sciences Bouman, assistant professor, (United States) Department, California Institute of Technology Rafal Mantiuk 12:10 Perceptual Image Quality Session Chairs: Mohamed Chaker Larabi, Université de Poitiers (France), Session Chairs: Mohamed Chaker Larabi, Université and Jeffrey (United States) Mulligan, NASA Ames Research Center 3:30 – 4:50 pm Grand Peninsula A and Electronic Imaging This session is jointly sponsored by: Human Vision 2020, and Image Quality and System Performance XVII. 3:30 Perceptual quality assessment of enhanced images using a crowd- sourcing framework, Muhammad Irshad 11:30 of contrast detection probability Prediction and fast estimation (United States) Jenkin, NVIDIA Corporation an ideal observerObject detection using model, Paul Kane and Orit (United States) Skorka, ON Semiconductor index and contrast detection probability Comparison of detectability (United States) (JIST-first), Robin Jenkin, NVIDIA Corporation Perceptual image quality assessment for various viewing conditions and display systems, Andrei Chubarau Quality assessment protocols for omnidirectional video quality evalua Robitza, and Alexander Werner tion, Ashutosh Singla, Stephan Fremerey, Universität Ilmenau (Germany) Raake, Technische Improved temporal pooling for perceptual video quality assessment Sophia Batsi and Lisimachos Kondi, University of Ioannina using VMAF, (Greece) 4:10 Software Inc. (Canada), and 3:50 4:30 11:50 Alamgeer

ession ession AVM-019 AVM-019 AVM-038 AVM-038 IQSP-039 IQSP-039 IQSP-018 IQSP-018 ession S S S AVM-001 AVM-001 oint oint J J oint J Dietmar Wueller, Dietmar Wueller,

10:10 – 10:50 am Coffee Break

Biographies and/or abstracts for all keynotes are found on pages Biographies and/or abstracts for all keynotes 9–14 8:45 Conference Welcome 8:50 and updates LED flicker measurement: Challenges, considerations, from IEEE P2020 working group, Brian Deegan, senior expert, Valeo Vision Systems (Ireland) KEYNOTE: Automotive Camera Image Quality KEYNOTE: Automotive Amazon (United States) Session Chair: Luke Cui, 8:45 – 9:30 am Regency B and by: Autonomous Vehicles This session is jointly sponsored Quality and System PerformanceMachines 2020, and Image XVII. 11:10 9:50 Predicting Camera Detection Performance Demonstration of a virtual reality driving simulation platform, Mingming (United States) and Susan Farnand, of Technology Rochester Institute Wang 74 10:50 Imatest, Describing and sampling the LED flicker signal, Robert Sumner, LLC (United States) Session Chair: Robin Jenkin, NVIDIA Corporation (United States) 10:50 am – 12:30 pm Regency B and Machines This session is jointly sponsored by: Autonomous Vehicles 2020, Human Vision and Electronic Imaging 2020, and Image Quality and System Performance XVII. Automotive image quality concepts for the next SAE levels: Color Automotive image quality concepts for the next Marc Geese, probability, separation probability and contrast detection Continental AG (Germany) Session Chair: Luke Cui, Amazon (United States) 9:30 – 10:10 am Regency B and Machines Vehicles This session is jointly sponsored by: Autonomous 2020, and Image Quality and System Performance XVII. 9:30 A new dimension in geometric camera calibration, Automotive Camera Image Quality Image Engineering GmbH & Co. KG (Germany) IMAGE QUALITY AND SYSTEM PERFORMANCE XVII PERFORMANCE AND SYSTEM QUALITY IMAGE January 27, 2020 Monday,

IQSP Scotland (UnitedKingdom) 9:30 Corriveau 11:10 of Commerce(UnitedStates), Research InstitutesofSwedenAB(Sweden), electronicimaging.org #EI2020 electronicimaging.org and 3Dgazetrackingquality, WilliamBlakey A comparativestudytodemonstratethegrowingdividebetween2D de Nantes(France) conditions, JesúsGutiérrez,Toinon Vigier, andPatrickLeCallet,Université lighting Quality evaluationof3Dobjectsinmixedrealityfordifferent 10:50 XVII. 2020, andImageQualitySystemPerformance This sessionisjointlysponsoredby:HumanVisionandElectronicImaging Grand PeninsulaA 10:50 am–12:30pm Session Chair:KjellBrunnström,RISEAcreoAB(Sweden) and LohicFotioTiotsop, PolitecnicodiTorino (Italy) EnricoMasala, ity measuresthroughtheITS4Sdataset,AntonioServetti, Investigating predictionaccuracyoffullreferenceobjectivevideoqual 3 reference imageandvideoqualitymetrics,MargaretPinson Open softwareframeworkforcollaborativedevelopmentofno Kasteren Quality ofexperienceassessment360-degreevideo, Kjell Brunnström Group-Currentactivitiesandresearch, The VideoQualityExperts 8:50 XVII. 2020, andImageQualitySystemPerformance This sessionisjointlysponsoredby:HumanVisionandElectronicImaging Grand PeninsulaA 8:50 –10:10am Mulligan, NASAAmesResearchCenter(UnitedStates) Session Chairs:KjellBrunnström,RISEAcreoAB(Sweden),andJef Tuesday, 28,2020 January Image QualityandSystemPerformanceXVII Eindhoven (theNetherlands),and 9:50 9:10 States) Administration, Institute for TelecommunicationsInformation Sciences (United Video Quality Experts GroupII Video QualityExperts Sweden University(Sweden),and Video Quality Experts GroupI Video QualityExperts Communications (UnitedStates) Naeem Ramzan AGH UniversityofScienceandTechnology (Poland),and

1,2 7:30 –8:45amWomen inElectronicImagingBreakfast; 2 , KjellBrunnström , MikolajLeszczuk 10:00 am – 7:30 pm Industry Exhibition-Tuesday10:00 am–7:30pmIndustry 1,2 2 ; andMargaretPinson 1

Lumen ResearchLimitedand 10:10 – 10:50 am Coffee Break 10:10 –10:50amCoffee pre-registration required

1,3 3 , JohnHedlund , andMichaelColligan 2 Intel Corporation(UnitedStates),

3 3 Mid SwedenUniversity(Sweden) National Telecommunications and 3 ; 1 RISE AcreoAB(Sweden), 1 2 , andChrisSnijders University ofTechnology 1,2 2 University oftheWest of , NavidHajimirza

4 ; 1 US Department US Department Anouk van 4 Spirent J J 1 oint oint , Philip 2 S S ; frey HVEI-092 HVEI-129 HVEI-128 HVEI-090 HVEI-093 HVEI-091 1 1 ession ession , and RISE

2 Mid - Dublin (Ireland) 3:50 Katsavounidis University Javeriana,Cali(Colombia)and Benitez Restrepo (Italy), Ling andperspectives,Suiyi video compression:Challenges,observations, Characterization ofusergeneratedcontentforperceptually-optimized 1 Zhi Li A simplemodelfortestsubjectbehaviorinsubjectiveexperiments, networks, LohicFotioTiotsop Predicting singleobserver’s votesfromobjectivemeasuresusingneural 3-D deepconvolutionalneuralnetwork,RogerNieto using No referencevideoqualityassessmentwithauthenticdistortions metrics onaudio-visualcontent,HelardBecerra objectivequality ofautoencoder-based Analyzing theperformance Chanas, andFrédéricGuichard,DXOMARK(France) DXOMARK objectivevideoqualitymeasurements,EmilieBaudin,Laurent 3:30 XVII. 2020, andImageQualitySystemPerformance This sessionisjointlysponsoredby:HumanVisionandElectronicImaging Grand PeninsulaA 3:30 –5:10pm 3 11:30 Session Chair:JonathanPhillips,GoogleInc.(UnitedStates) Andrew Hines (United States) Image QualityMetrics 4:10 (DIT) (Germany) 12:10 11:50 University ofScienceandTechnology (Poland) Pocta Facebook, Inc.(UnitedStates) Netflix, Inc.(UnitedStates), For abstractandspeakerbiography, seepage7 Jonathan Phillips,GoogleInc.(UnitedStates) Session Chairs:RadkaTezaur, IntelCorporation(UnitedStates),and PLENARY: AutomotiveImaging States) vice president,hardwaredevelopment,NVIDIACorporation(United Imaging intheAutonomousVehicle Hicok,senior Revolution,Gary Grand PeninsulaBallroomD 2:00 –3:10pm 1,2 1 2 , Yoann Baveye , IoannisKatsavounidis , MarcusBarkowsky 2 Zilina University(Slovakia),and 3

2 ; ;

1 1 1 CAPACITÉS (France), , RogerFigueroaQuintero University ofBrasilia(Brazil)and 1,2 3:10 – 3:30 pm Coffee Break 3:10 –3:30pmCoffee

, PatrickLeCallet 12:30 –2:00pmLunch 3 , andEnricoMasala 2 2 1 Facebook, Inc.(UnitedStates),and , ChristosBampis , Tomas Mizdos

2 Université deNantes(France),and 3 Deggendorf InstituteofTechnologyDeggendorf 2 , JimSkinner 2 The UniversityofTexas atAustin 1 , andAlanBovik 2

, MiroslavUhrina 1 , andLucjanJanowski 1 ; 1 , MylèneFarias 1 2 Politecnico diTorino University College

3 1 , andIoannis , Hernan Dario , Hernan J 2 oint ; 1 Pontificia 2 , Peter 1 3 S , and AGH IQSP-167 IQSP-166 IQSP-168 HVEI-130 HVEI-132 HVEI-131 ession 3 ; 75

IQSP Image Quality and System Performance XVII

4:30 IQSP-169 11:30 IQSP-241 Quality aware feature selection for video object tracking, Roger Nieto1, Camera system performance derived from natural scenes, Oliver van Carlos Quiroga2, Jose Ruiz-Munoz3, and Hernan Benitez-Restrepo1; Zwanenberg1, Sophie Triantaphillidou1, Robin Jenkin2, and Alexandra 1Pontificia University Javeriana, Cali (Colombia),2 Universidad del Valle Psarrou1; 1University of Westminster (United Kingdom) and 2NVIDIA (Colombia), and 3University of Florida (United States) Corporation (United States)

4:50 IQSP-170 11:50 IQSP-242 Studies on the effects of megapixel sensor resolution on displayed im- Correcting misleading image quality measurements, Norman Koren, age quality and relevant metrics, Sophie Triantaphillidou1, Jan Smejkal1, Imatest LLC (United States) Edward Fry1, and Chuang Hsin Hung2; 1University of Westminster (United Kingdom) and 2Huawei (China) 12:30 – 2:00 pm Lunch

5:30 – 7:30 pm Symposium Demonstration Session PLENARY: VR/AR Future Technology

Wednesday, January 29, 2020 Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) KEYNOTE: Image Capture 2:00 – 3:10 pm Grand Peninsula Ballroom D Session Chair: Nicolas Bonnier, Apple Inc. (United States) Quality Screen Time: Leveraging Computational Displays for Spatial 8:50 – 9:50 am Computing, Douglas Lanman, director, Display Systems Research, Harbour A/B Facebook Reality Labs (United States) IQSP-190 For abstract and speaker biography, see page 7 Camera vs smartphone: How electronic imaging changed the game, Frédéric Guichard, DXOMARK (France) Biographies and/or abstracts for all keynotes are found on pages 3:10 – 3:30 pm Coffee Break 9–14 Image Quality of Omnidirectional Environnement

Session Chair: Stuart Perry, University of Technology Sydney (Australia) Image Capture Performance I 3:30 – 5:10 pm Session Chair: Peter Burns, Burns Digital Imaging (United States) Harbour A/B 9:50 – 10:10 am 3:30 IQSP-284 Subjective and viewport-based objective quality assessment of Harbour A/B 360-degree videos, Roberto Azevedo1, Neil Birkbeck2, Ivan Janatra2, IQSP-214 Balu Adsumilli2, and Pascal Frossard1; 1Ecole Polytechnique Fédérale de Comparing common still image quality metrics in recent High Dynamic 2

IQSP Lausanne (Switzerland) and YouTube (United States) Range (HDR) and Wide Color Gamut (WCG) representations, Anustup Choudhury and Scott Daly, Dolby Laboratories (United States) 3:50 IQSP-285 Statistical characterization of tile decoding time of HEVC-encoded 360° 10:00 am – 3:30 pm Industry Exhibition - Wednesday video, Henrique Garcia1, Mylène Farias1, Ravi Prakash2, and Marcelo Carvalho1; 1University of Brasilia (Brazil) and 2The University of Texas at 10:10 – 10:50 am Coffee Break Dallas (United States)

Image Capture Performance II 4:10 IQSP-286 Complexity optimization for the upcoming versatile video coding stand- Session Chair: Sophie Triantaphillidou, University of Westminster (United ard, Mohamed Chaker Larabi, Université de Poitiers (France) Kingdom) 4:30 IQSP-287 10:50 am – 12:10 pm On the improvement of 2D quality metrics for the assessment of 360- Harbour A/B deg images, Mohamed Chaker Larabi, Université de Poitiers (France) 10:50 IQSP-239 4:50 IQSP-288 Validation of modulation transfer functions and noise power spectra The cone model: Recognizing gaze uncertainty in virtual environments, from natural scenes (JIST-first),Edward Fry1, Sophie Triantaphillidou1, Anjali Jogeshwar, Mingming Wang, Gabriel Diaz, Susan Farnand, and Robin Jenkin2, and Ralph Jacobson1; 1University of Westminster (United Jeff Pelz, Rochester Institute of Technology (United States) Kingdom) and 2NVIDIA Corporation (United States)

11:10 IQSP-240 Application of ISO standard methods to optical design for image capture, Peter Burns1, Don Williams2, Heidi Hall3, John Griffith3, and Scott Cahall3; 1Burns Digital Imaging, 2Image Science Associates, and 3Moondog Optics (United States)

76 #EI2020 electronicimaging.org Litao Hu Zhenhua Hu Cho

(United States),and

electronicimaging.org #EI2020 electronicimaging.org II, Litao Hu Relation betweenimagequalityandresolution-Part I,ZhenhuaHu Relation betweenimagequalityandresolution-Part 1 Zhang Region ofinterestextractionforimagequalityassessment, Kim, andJoonSeoYim, SamsungElectronics(RepublicofKorea) mobile imagesensor, SunghoCha,JaehyukHur, Sung-suKim,TaeHyung Quantification methodforvideomotioncorrectionper of itsparameter, OleksiiRubel of2DDCT Prediction ofperformance Administration (UnitedStates) Quanzeng Wang andWei-Chung Cheng,USFoodandDrug characteristicsofendoscopes, Evaluation ofopticalperformance TaeHyung Kim,andJoonSeoYim, SamsungElectronics(RepublicofKorea) Cheoljong Yang, JinhyunKim,JungminLee,Younghoon Kim,Sung-su ISPtuningframeworkbasedonuserpreferencefeedback, Effective (Republic ofKorea) Cheoljong Yang, TaeHyung Kim,andJoonSeoYim, SamsungElectronics age qualityoptimization,Younghoon Kim,JungminLee,Sung-su DNN-based ISPparameterinferencealgorithmforautomaticim- defects, RunzheZhang A comprehensivesystemforanalyzingthepresenceofprintquality Interactive PapersSession. The followingworkswillbepresentedattheEI2020Symposium Sequoia 5:30 –7:00pm XVIIInteractivePapersSession Image QualityandSystemPerformance Image QualityandSystemPerformanceXVII University and

and

Karen Egiazarian

Korea Co.Ltd.(RepublicofKorea) University (Finland) Purdue University(UnitedStates), 5:30 –7:00pmMeettheFuture:AShowcaseofStudentandYoung 3 2 , andJanAllebach 5:30 –7:00pmEI2020SymposiumInteractivePostersSession HP Inc.(UnitedStates) 1 , EricMaggard 1 , PeterBauer 1 , JanAllebach 2 HP Inc.(UnitedStates) 2 ;

1 3 National AerospaceUniversity(Ukraine)and HP PrintingKoreaCo.Ltd.(RepublicofKorea) 2 , Todd Harris 2 1 , Yousun Bang , Yi Yang 1 ; 1 Professionals Research Purdue University(UnitedStates), 1

, PeterBauer

1 , SergiyAbramov 1 , EricMaggard 2

HP Inc.(UnitedStates),and 2 , andJanAllebach

3 -based filterandadaptiveselection , MinkiCho 2 , andTodd Harris 2 1 , Yousun Bang , VladimirLukin 3 , andJanAllebach 1 ; formance in formance 1 Purdue University 2 ; 2 HP Inc. 1 Runzhe Purdue 3 HP Printing

3 1 , Minki , and 2 IQSP-320 IQSP-319 IQSP-321 IQSP-314 IQSP-322 IQSP-317 IQSP-323 IQSP-316 IQSP-315 Tampere 1

, 1 1 , ;

Utpal Sarkar and and FrédéricGuichard Corporation (UnitedStates) 11:30 States) and Jacobson 11:50 Ink quality ruler experiments and print uniformity predictor, experimentsandprintuniformity Ink qualityruler Yi Yang and DrugAdministration Edward Fry Scene-and-process-dependent spatialimagequalitymetrics(JIST Chung Cheng Evaluating whole-slideimagingviewersusedindigitalpathology Rubel Prediction ofLeefilterper Zalczer Depth mapqualityevaluationforphotographicapplications, 10:50 Harbour A/B 10:50 am–12:30pm 1 (United States) Tandon, Sarthak Alexander Schwartz, andJacksonKnappen,Imatest,LLC Aerospace University(Ukraine)and 9:50 Koren image, Norman capacitywithaSiemensstar Measuring cameraShannoninformation V systems, EdwardFry scene-dependencyinsimulatedimagecapture Noise powerspectrum 8:50 Harbour A/B 8:50 –10:10am Session Chair:MylèneFarias,UniversityofBrasilia(Brazil) III Image CapturePerformance Thursday, 30,2020 January 9:30 Session Chair:JukkaHäkkinen,UniversityofHelsinki(Finland) System Performance 11:10 (United States) 9:10 University ofWestminster (UnitedKingdom)and erification oflong-rangeMTFtestingthroughinter 2 1 NVIDIA Corporation(UnitedStates) , VladimirLukin 1 , François-XavierThomas 1 , andJohnJarvis 1 2 , SophieTriantaphillidou 2 HP Inc.(Spain) , IsabelBorrell

1 , SamuelLam 10:10 – 10:50 am Coffee Break 10:10 –10:50amCoffee 1 1 , SophieTriantaphillidou , AndriiRubel 1 1 andRobinJenkin ; 1 DXOMARK and 1 formance forSentinel-1SARimages, Oleksii formance ;

2 , andJanAllebach 2 1 , QiGong University ofWestminster (UnitedKingdom) 2 University of Maryland (UnitedStates) University ofMaryland 1 , LaurentChanas 1 2 , andKarenEgiazarian 1 Tampere University(Finland) , RobinJenkin 1 , andPaulLemaillet

2 ; 2 ENS Cachan(France) 1 Imatest LLCand 1 , RobinJenkin 1 ; 2 1 NVIDIA Corporation Purdue University(United 1 2 , GabrieleFacciolo , andJohnJarvis mediary optics, mediary 2 1 1,2 2 ; ; NVIDIA 1 , Ralph 1 National US Food Eloi

IQSP-372 IQSP-373 IQSP-370 IQSP-348 IQSP-345 IQSP-347 IQSP-371 IQSP-346 -first),

1 , Wei- ; 1 2 , , 77

IQSP Where Industry and Academia Meet to discuss Imaging Across Applications IMAWM

Conference Chairs: Jan P. Allebach, Purdue Imaging and Multimedia Analytics in a Web and University (United States); Zhigang Fan, Apple Inc. (United States); and Qian Lin, HP Inc. Mobile World 2020 (United States) Conference overview Program Committee: Vijayan Asari, University The recent progress in web, social networks, and mobile capture and presentation technolo- of Dayton (United States); Raja Bala, PARC gies has created a new wave of interest in imaging and multimedia topics, from multimedia (United States); Reiner Fageth, CEWE Stiftung analytics to content creation and repurposing, from engineering challenges to aesthetics & Co. KGaA (Germany); Michael Gormish, and legal issues, from content sharing on social networks to content access from Smart Ricoh Innovations, Inc. (United States); Yandong Phones with cloud-based content repositories and services. Compared to many subjects Guo, XMotors (United States); Ramakrishna in traditional imaging, these topics are more multi-disciplinary in nature. This conference Kakarala, Picartio Inc. (United States); Yang Lei, HP Labs (United States); Xiaofan Lin, A9.COM, provides a forum for researchers and engineers from various related areas, both academic Inc. (United States); Changsong Liu, Tsinghua and industrial to exchange ideas and share research results in this rapidly evolving field. University (China); Yucheng Liu, Facebook Inc. (United States); Binu Nair, United Technologies Award Research Center (United States); Mu Qiao, Best Paper Award Shutterfly, Inc. (United States); Alastair Reed, Digimarc Corporation (United States); Andreas Savakis, Rochester Institute of Technology (United States); Bin Shen, Google Inc. (United States); Wiley Wang June Life, Inc. (United States); Jane You, The Hong Kong Polytechnic University (Hong Kong); and Tianli Yu, Morpx Inc. (China)

Conference Sponsors IMAWM

78 #EI2020 electronicimaging.org Yu Geng 9:30 9:10 Missouri and electronicimaging.org #EI2020 electronicimaging.org World 2020. 2020, andImagingMultimedia AnalyticsinaWeb andMobile This sessionisjointlysponsoredby: FoodandAgriculturalImagingSystems Cypress B 10:30 –10:50am (Greece) Grigorios Tsagkatakis, FoundationforResearchandTechnology (FORTH) Session Chairs:VijayanAsari,UniversityofDayton(UnitedStates)and Asari, UniversityofDayton(UnitedStates) of-way automatedmonitoring,RuixuLiu,TheusAspiras,andVijayan Deep Ram:neuralnetworkarchitectureforoil/gaspipelineright- Institute ofTechnology (UnitedStates) Ayush Soni,AlexanderLoui,ScottBrown,andCarlSalvaggio,Rochester High-quality multispectralimagegenerationusingconditionalGANs, Yasmindeep learning, Kassim Small objectbirddetectionininfrareddronevideosusingmaskR-CNN A newtrainingmodelforobjectdetectioninaerialimages, Conference Welcome 8:45 World 2020. 2020, andImagingMultimediaAnalyticsinaWeb andMobile This sessionisjointlysponsoredby:FoodandAgriculturalImagingSystems Cypress B 8:45 –10:10am (FORTH) (Greece) States) andGrigoriosTsagkatakis, FoundationforResearch andTechnology Session Chairs:AndreasSavakis,RochesterInstituteofTechnology (United Tuesday, 28,2020 January MOBILE WORLD2020 IMAGING ANDMULTIMEDIA ANALYTICS INAWEBAND Technology Co.,Ltd.(China),and Information TechnologyInformation (China), (Hong Kong) Drone ImagingII Drone ImagingI 8:50 9:50 Mote 2 , JasonHardin 7:30 –8:45amWomen inElectronicImagingBreakfast; 2 , QinLi 10:00 am – 7:30 pm Industry Exhibition-Tuesday10:00 am–7:30pmIndustry

2 Texas ParksandWildlife(UnitedStates)

1

, JaneYou 2 10:10 – 10:30 am Coffee Break 10:10 –10:30amCoffee , andKannappanPalaniappan

pre-registration required 3 , andMingpengCai 1 , MichaelByrne 2 Shenzhen Shangda Xinzhi Information Shenzhen ShangdaXinzhiInformation 3 The HongKongPolytechnicUniversity

1 , CristyBurch 1 ; 1 1 Shenzhen Instituteof ;

1 University of J J 2 oint oint , Kevin Geng Yang IMAWM-086 IMAWM-085 IMAWM-084 IMAWM-087 S S ession ession 1 , Technology (UnitedStates) BlakesleeandAndreasSavakis, RochesterInstituteof detection, Bryan LambdaNet: Afullyconvolutionalarchitecturefordirectionalchange (United States) Carlson CenterforImagingScience,RochesterInstituteofTechnology UAS andsatelliteremotesensing,JanvanAardt,professor, ChesterF. Managing cropsacrossspatialandtemporalscales-Therolesof FAIS-127 Mobile World 2020. Systems 2020,andImagingMultimediaAnalyticsinaWeb and This sessionisjointlysponsoredby:FoodandAgriculturalImaging Cypress B 10:50 –11:40am States) Mohammed Yousefhussien, GeneralElectricGlobalResearch(United Session Chairs:VijayanAsari,UniversityofDayton(UnitedStates)and KEYNOTE: RemoteSensinginAgricultureI 9–14 Biographies and/orabstractsforallkeynotesarefoundonpages (United States) culture, KevinLang,generalmanagerofagriculture,PrecisionHawk Practical applicationsandtrendsforUAV remotesensinginagri- FAIS-151 Mobile World 2020. Systems 2020,andImagingMultimediaAnalyticsinaWeb and This sessionisjointlysponsoredby:FoodandAgriculturalImaging Cypress B 11:40 am–12:30pm States) Mohammed Yousefhussien, GeneralElectricGlobal Research(United Session Chairs:VijayanAsari,UniversityofDayton(UnitedStates)and KEYNOTE: RemoteSensinginAgricultureII 9–14 Biographies and/orabstractsforallkeynotesarefoundonpages

12:30 –2:00pmLunch

J J oint oint IMAWM-114 S S ession ession 79

IMAWM Imaging and Multimedia Analytics in a Web and Mobile World 2020

Wednesday, January 29, 2020 PLENARY: Automotive Imaging

Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) KEYNOTE: Personal Health Data and Surveillance 2:00 – 3:10 pm Session Chair: Jan Allebach, Purdue University (United States) Grand Peninsula Ballroom D 9:10 – 10:10 am Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior Cypress B vice president, hardware development, NVIDIA Corporation (United IMAWM-211 States) Health surveillance, Ramesh Jain, Bren Professor in Information & For abstract and speaker biography, see page 7 Computer Sciences, Donald Bren School of Information and Computer Sciences, University of California, Irvine (United States) 3:10 – 3:30 pm Coffee Break Biographies and/or abstracts for all keynotes are found on pages 9–14 Deep Learning: Multi-Media Analysis

Session Chair: Zhigang Fan, Apple Inc. (United States) 10:00 am – 3:30 pm Industry Exhibition - Wednesday

3:30 – 5:40 pm 10:10 – 10:30 am Coffee Break Cypress B 3:30 IMAWM-183 Actual usage of AI to generate more interesting printed products Augmented Reality in Built Environments Joint Session (Invited), Birte Stadtlander, Sven Wiegand, and Reiner Fageth, CEWE Session Chairs: Raja Bala, PARC (United States) and Matthew Shreve, Palo Stiftung & Co. KGAA (Germany) Alto Research Center (United States) 4:00 IMAWM-184 10:30 am – 12:40 pm Deep learning for printed mottle defect grading, Jianhang Chen1, Qian Cypress B Lin2, and Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United This session is jointly sponsored by: The Engineering Reality of Virtual States) Reality 2020, and Imaging and Multimedia Analytics in a Web and 4:20 IMAWM-185 Mobile World 2020. A local-global aggregate network for facial landmark localization, Ruiyi 10:30 IMAWM-220 Mao1,2, Qian Lin3, and Jan Allebach1; 1Purdue University, 2Apple Inc., and Augmented reality assistants for enterprise, Matthew Shreve and Shiwali 3HP Labs, HP Inc. (United States) Mohan, Palo Alto Research Center (United States)

4:40 IMAWM-186 11:00 IMAWM-221 The blessing and the curse of the noise behind facial landmark annota- Extra FAT: A photorealistic dataset for 6D object pose estimation, tions, Xiaoyu Xiang1, Yang Cheng1, Shaoyuan Xu1, Qian Lin2, and Jan Jianhang Chen1, Daniel Mas Montserrat1, Qian Lin2, Edward Delp1, and Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States) Jan Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States)

5:00 IMAWM-187 11:20 IMAWM-222 Gun source and muzzle head detection, Zhong Zhou, Isak Czeresnia Space and media: Augmented reality in urban environments, Luisa Etinger, Florian Metze, Alexander Hauptmann, and Alex Waibel, Carnegie Caldas, University of California, Berkeley (United States) Mellon University (United States) 12:00 ERVR-223 5:20 IMAWM-188 Active shooter response training environment for a building evacua- Semi-supervised multi-task network for image aesthetic assessment, tion in a collaborative virtual environment, Sharad Sharma and Sri Teja

IMAWM Xiaoyu Xiang1, Yang Cheng1, Jianhang Chen1, Qian Lin2, and Jan Bodempudi, Bowie State University (United States) Allebach1; 1Purdue University and 2HP Labs, HP Inc. (United States) 12:20 ERVR-224 5:30 – 7:30 pm Symposium Demonstration Session Identifying anomalous behavior in a building using HoloLens for emer- gency response, Sharad Sharma and Sri Teja Bodempudi, Bowie State University (United States)

12:40 – 2:00 pm Lunch

80 #EI2020 electronicimaging.org Qian Lin (United States) Technology, Peshawar(Pakistan), Telecommunications and electronicimaging.org #EI2020 electronicimaging.org (United States) J. Deering,LiaStanciu,GeorgeChiu,andJanAllebach,PurdueUniversity Image analyticsforfoodsafety, MinZhao,SusanaDiaz-Amaya,Amanda (United States) Heather Eicher-Miller, MarahAqeel,andLuotaoLin,PurdueUniversity data:Anutritionscienceperspective, Visual processingofdietary Purdue University(UnitedStates) Sri KalyanYarlagadda, RunyuMao,ZemanShao, andJiangpengHe, dietwithvisualdata(Invited),FengqingZhu, Shazam forfood:Learning 4:30 Cypress B 4:30 –5:30pm Session Chair:QianLin,HPLabs,Inc.(UnitedStates) Food andComputerVision of imageandWiFi,JichaoJiao A deepneuralnetwork-basedindoorpositioningalgorithmbycascade during floods,NainaSaid PSO andgeneticmodelingofdeepfeaturesforroadpassibilityanalysis Identification ofutilityimageswithamobiledevice, 3:30 Cypress B 3:30 –4:30pm Imaging andMultimedia Analytics inaWebandMobileorld2020 3:50 5:10 4:50 Session Chair:ReinerFageth,CEWEStiftung&Co.KGAA(Germany) Applications Deep Learning: and 4:10 Chen Ullah For abstractandspeakerbiography, seepage7 Facebook RealityLabs(UnitedStates) Computing, DouglasLanman,director, DisplaySystemsResearch, Quality ScreenTime:LeveragingComputationalDisplaysforSpatial Grand PeninsulaBallroomD 2:00 –3:10pm Jonathan Phillips,GoogleInc.(UnitedStates) Session Chairs:RadkaTezaur, IntelCorporation(UnitedStates),and PLENARY: VR/ARFutureTechnology 3 3 1 Norwegian UniversityofScienceandTechnologyNorwegian (Norway) , Touqir Gohar , MengGuan 2 , andJanAllebach

1 1 , andWei Cui , andAlaAl-fuqaha 3:10 – 3:30 pm Coffee Break 3:10 –3:30pmCoffee 2 XMotors (China) 1 , Aysha Nayab 1 ; 1 Purdue Universityand 1

, XingWang 2 Hamad BinKhallifaUniversity(Qatar), 1 ; 1 Beijing UniversityofPostsand 2 ;

1 1 University ofEngineeringand , KashifAhmad 2 , Yaxin Zhao 2 HP Labs,Inc. Karthick Shankar Karthick 1 , Xinping 2 , Mohib IMAWM-300 IMAWM-269 IMAWM-270 IMAWM-302 IMAWM-301 IMAWM-271

1 , State University(UnitedStates) Real-time whiteboardcodingonmobiledevices,XunyuPan,Frostburg Papers Session. The followingworkwillbepresentedattheEI2020SymposiumInteractive Sequoia 5:30 –7:00pm Interactive PapersSession Imaging andMultimediaAnalyticsinaWeb andMobileWorld 2020 5:30 –7:00pmMeettheFuture:AShowcaseofStudentandYoung 5:30 –7:00pmEI2020SymposiumInteractivePostersSession Professionals Research

IMAWM-309 81

IMAWM Where Industry and Academia Meet to discuss Imaging Across Applications

ISS Conference Chairs: Jon S. McElvain, Dolby Imaging Sensors and Systems 2020 Laboratories, Inc. (United States); Arnaud Peizerat, Commissariat à l’Énergie Atomique Conference overview (France); Nitin Sampat, Edmund Optics (United The Imaging Sensors and Systems Conference (ISS) begins with EI 2020, from the merger States); and Ralf Widenhorn, Portland State of the Image Sensors and Imaging Systems Conference and the Photography, Mobile, and University (United States) Immersive Imaging Conference. Through these conferences, ISS traces its roots to the earlier Program Committee: Nick Bulitka, Conference, which ran for thirteen years. Teledyne Lumenera (Canada); Peter Catrysse, Stanford University (United States); Calvin Chao, Taiwan ISS focuses on image sensing for consumer, industrial, medical, and scientific applications, Semiconductor Manufacturing Company as well as embedded image processing, and pipeline tuning for these camera systems. (TSMC) (Taiwan); Tobi Delbrück, Institute of This conference will serve to bring together researchers, scientists, and engineers working Neuroinformatics, University of Zurich and ETH in these fields, and provides the opportunity for quick publication of their work. Topics Zurich (Switzerland); Henry Dietz, University can include, but are not limited to, research and applications in image sensors and detec- of Kentucky (United States); Joyce E. Farrell, tors, camera/sensor characterization, ISP pipelines and tuning, image artifact correction Stanford University (United States); Boyd Fowler, and removal, image reconstruction, color calibration, image enhancement, HDR imaging, OminVision Technologies Inc. (United States); light-field imaging, multi-frame processing, computational photography, 3D imaging, 360/ Eiichi Funatsu, OmniVision Technologies Inc. (United States); Sergio Goma, Qualcomm cinematic VR cameras, camera image quality evaluation and metrics, novel imaging appli- Technologies Inc. (United States); Francisco cations, imaging system design, and deep learning applications in imaging. Imai, Apple Inc. (United States); Michael Kriss, MAK Consultants (United States); Rihito Kuroda, Award Tohoku University (Japan); Kevin Matherson, Arnaud Darmont Memorial Best Paper Award* Microsoft Corporation (United States); Jackson Roland, Apple Inc. (United States); Min-Woong *The Arnaud Darmont Memorial Best Paper Award is given in recognition of IMSE Conference Chair Seo, Samsung Electronics, Semiconductor Arnaud Darmont who passed away unexpectedly in September 2018. R&D Center (Republic of Korea); Gilles Sicard, Commissariat à l’Énergie Atomique (France); Radka Tezaur, Intel Corporation (United States); Arnaud dedicated his professional life to the computer vision industry. After completing his degree in Jean-Michel Tualle, Université Paris 13 (France); electronic engineering from the University of Liège in Belgium (2002) he launched his career in the and Dietmar Wueller, Image Engineering GmbH field of CMOS image sensors and high dynamic range imaging, founding APHESA in 2008. He was & Co. KG (Germany) fiercely dedicated to disseminating knowledge about sensors, computer vision, and custom electronics design of imaging devices as witnessed by his years of teaching courses at the Electronic Imaging Symposium and Photonics West Conference, as well as his authorship of several publications. At the time of his death, Arnaud was in the final stages of revising the second edition of “High Dynamic Range Imaging – Sensors and Architectures”, first published in 2013. An active member of the EMVA 1288 Conference Sponsors standardization group, he was also the standards manager for the organization where he oversaw the development of EMVA standards and fostered cooperation with other imaging associations worldwide on the development and the dissemination of vision standards. His dedication, knowledge, and bound- less energy will be missed by the IS&T and Electronic Imaging communities. ISS

82 #EI2020 electronicimaging.org

IMAGING SENSORS AND SYSTEMS 2020

Tuesday, January 28, 2020 Sensor Design Technology Session Chairs: Jon McElvain, Dolby Laboratories (United States) and 7:30 – 8:45 am Women in Electronic Imaging Breakfast; Arnaud Peizerat, CEA (France) pre-regispatration required 11:10 am – 12:10 pm Regency A Depth Sensing I 11:10 ISS-143 Session Chairs: Jon McElvain, Dolby Laboratories (United States) and An over 120dB dynamic range linear response single exposure CMOS Arnaud Peizerat, CEA (France) image sensor with two-stage lateral overflow integration trench capaci- tors, Yasuyuki Fujihara, Maasa Murata, Shota Nakayama, Rihito Kuroda, 9:05 – 10:10 am and Shigetoshi Sugawa, Tohoku University (Japan) Regency A 9:05 11:30 ISS-144 Conference Welcome Planar microlenses for near infrared CMOS image sensors, Lucie Dilhan1,2,3, Jérôme Vaillant1,2, Alain Ostrovsky3, Lilian Masarotto1,2, Céline 9:10 ISS-103 Pichard1,2, and Romain Paquet1,2; 1University Grenoble Alpes, 2CEA, and A 4-tap global shutter pixel with enhanced IR sensitivity for VGA time- 3STMicroelectronics (France) of-flight CMOS image sensors,Taesub Jung, Yonghun Kwon, Sungyoung Seo, Min-Sun Keel, Changkeun Lee, Sung-Ho Choi, Sae-Young Kim, 11:50 ISS-145 Sunghyuck Cho, Youngchan Kim, Young-Gu Jin, Moosup Lim, Hyunsurk Event threshold modulation in dynamic vision spiking imagers for Ryu, Yitae Kim, Joonseok Kim, and Chang-Rok Moon, Samsung Electronics data throughput reduction, Luis Cubero1,2, Arnaud Peizerat1, Dominique (Republic of Korea) Morche1, and Gilles Sicard1; 1CEA and 2University Grenoble Alpes (France) 9:30 ISS-104 Indirect time-of-flight CMOS image sensor using 4-tap charge-modula- 12:30 – 2:00 pm Lunch tion pixels and range-shifting multi-zone technique, Kamel Mars1,2, Keita Kondo1, Michihiro Inoue1, Shohei Daikoku1, Masashi Hakamata1, Keita Yasutomi1, Keiichiro Kagawa1, Sung-Wook Jun3, Yoshiyuki Mineyama3, PLENARY: Automotive Imaging Satoshi Aoyama3, and Shoji Kawahito1; 1Shizuoka University, 2Tokyo Session Chairs: Radka Tezaur, Intel Corporation (United States), and Institute of Technology, and 3Brookman Technology (Japan) Jonathan Phillips, Google Inc. (United States) 9:50 ISS-105 2:00 – 3:10 pm Improving the disparity for depth extraction by decreasing the pixel Grand Peninsula Ballroom D height in monochrome CMOS image sensor with offset pixel , Imaging in the Autonomous Vehicle Revolution, Jimin Lee1, Sang-Hwan Kim1, Hyeunwoo Kwen1, Seunghyuk Chang2, Gary Hicok, senior JongHo Park2, Sang-Jin Lee2, and Jang-Kyoo Shin1; 1Kyungpook National vice president, hardware development, NVIDIA Corporation (United University and 2Center for Integrated Smart Sensors, Korea Advanced States) Institute of Science and Technology (Republic of Korea) For abstract and speaker biography, see page 7

10:00 am – 7:30 pm Industry Exhibition - Tuesday 3:10 – 3:30 pm Coffee Break 10:10 – 10:30 am Coffee Break

KEYNOTE: Sensor Design Technology

Session Chairs: Jon McElvain, Dolby Laboratories (United States) and Arnaud Peizerat, CEA (France) 10:30 – 11:10 am Regency A ISS-115 ISS 3D-IC smart image sensors, Laurent Millet1 and Stephane Chevobbe2; 1CEA/LETI and 2CEA/LIST (France) Biographies and/or abstracts for all keynotes are found on pages 9–14

electronicimaging.org #EI2020 83 Imaging Sensors and Systems 2020

Wednesday, January 29, 2020 PANEL: Sensors Technologies for Autonomous Vehicles Joint Session Panel Moderator: David Cardinal, Cardinal Photo & Extremetech.com (United States) KEYNOTE: Imaging Systems and Processing Joint Session 3:30 – 5:30 pm Session Chairs: Kevin Matherson, Microsoft Corporation (United States) Regency A and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany) This session is jointly sponsored by: Autonomous Vehicles and 8:50 – 9:30 am Machines 2020, and Imaging Sensors and Systems 2020. Regency A Imaging sensors are at the heart of any self-driving car project. This session is jointly sponsored by: The Engineering Reality of Virtual However, selecting the right technologies isn’t simple. Competitive Reality 2020, Imaging Sensors and Systems 2020, and Stereoscopic products span a gamut of capabilities includi ng traditional visible- Displays and Applications XXXI. light cameras, thermal cameras, lidar, and radar. Our session includes ISS-189 experts in all of these areas, and in emerging technologies, who will Mixed reality guided neuronavigation for non-invasive brain stimu- help us understand the strengths, weaknesses, and future directions of lation treatment, Christoph Leuze, research scientist in the Incubator each. Presentations by the speakers listed below will be followed by a for Medical Mixed and Extended Reality, Stanford University (United panel discussion. States) Biographies and/or abstracts are found on pages 15–21 Biographies and/or abstracts for all keynotes are found on pages 9–14 Introduction, David Cardinal, consultant and technology journalist (United States)

LiDAR for Self-driving Cars, Nikhil Naikal, VP of Software Imaging Systems and Processing I Engineering, Velodyne Lidar (United States) Session Chairs: Kevin Matherson, Microsoft Corporation (United States) Challenges in Designing Cameras for Self-driving Cars, Nicolas and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany) Touchard, VP of Marketing, DXOMARK (France) 9:30 – 10:10 am Using Thermal Imaging to Help Cars See Better, Mike Walters, VP of Regency A product management for thermal cameras, FLIR Systems, Inc. (United 9:30 ISS-212 States) Soft-prototyping imaging systems for oral cancer screening, Joyce Farrell, Stanford University (United States) Radar’s Role, Greg Stanley, field applications engineer, NXP Semiconductors (the Netherlands) 9:50 ISS-213 Calibration empowered minimalistic multi-exposure image processing Tales from the Automotive Sensor Trenches, Sanjai Kohli, CEO, technique for camera linear dynamic range extension, Nabeel Riza and Visible Sensors, Inc. (United States) Nazim Ashraf, University College Cork (Ireland) Auto Sensors for the Future, Alberto Stochino, founder and CEO, Perceptive (United States) 10:00 am – 3:30 pm Industry Exhibition - Wednesday 10:10 – 10:30 am Coffee Break 5:30 – 7:30 pm Symposium Demonstration Session

Imaging Systems and Processing II

Session Chairs: Francisco Imai, Apple Inc. (United States) and Nitin Sampat, Edmund Optics, Inc (United States) 10:30 am – 12:50 pm Regency A 10:30 ISS-225 Anisotropic subsurface scattering acquisition through a light field based apparatus, Yurii Piadyk, Yitzchak Lockerman, and Claudio Silva, New York University (United States) ISS

10:50 ISS-226 CAOS smart camera-based robust low contrast image recovery over 90 dB scene linear dynamic range, Nabeel Riza and Mohsin Mazhar, University College Cork (Ireland)

84 #EI2020 electronicimaging.org Imaging Sensors and Systems 2020

11:10 ISS-227 4:10 ISS-274 TunnelCam - A HDR spherical camera array for structural integrity A high-linearity time-of-flight image sensor using a time-domain assessments of dam interiors, Dominique Meyer1, Eric Lo1, Jonathan feedback technique, Juyeong Kim, Keita Yasutomi, Keiichiro Kagawa, and Klingspon1, Anton Netchaev2, Charles Ellison2, and Falko Kuester1; Shoji Kawahito, Shizuoka University (Japan) 1University of California, San Diego and 2United States Army Corps of Engineers (United States) Imaging Sensors and Systems 2020 Interactive Papers Session 11:30 ISS-228 Characterization of camera shake, Henry Dietz, William Davis, and Paul 5:30 – 7:00 pm Eberhart, University of Kentucky (United States) Sequoia The following works will be presented at the EI 2020 Symposium 11:50 ISS-229 Interactive Papers Session. Expanding dynamic range in a single-shot image through a sparse grid ISS-327 of low exposure pixels, Leon Eisemann, Jan Fröhlich, Axel Hartz, and Camera support for use of unchipped manual lenses, Henry Dietz, Johannes Maucher, Stuttgart Media University (Germany) University of Kentucky (United States) 12:10 ISS-230 ISS-328 Deep image demosaicing for submicron image sensors (JIST-first),Irina CIS band noise prediction methodology using co-simulation of camera Kim, Seongwook Song, SoonKeun Chang, SukHwan Lim, and Kai Guo, module, Euncheol Lee, Hyunsu Jun, Wonho Choi, Kihyun Kwon, Jihyung Samsung Electronics (Republic of Korea) Lim, Seung-hak Lee, and JoonSeo Yim, Samsung Electronics (Republic of 12:30 ISS-231 Korea) Sun tracker sensor for attitude control of space navigation systems, ISS-329 Antonio De la Calle-Martos1, Rubén Gómez-Merchán2, Juan A. Leñero- From photons to digital values: A comprehensive simulator for image Bardallo2, and Angel Rodríguez-Vázquez1,2; 1Teledyne-Anafocus and sensor design, Alix de Gouvello, Laurent Soulier, and Antoine Dupret, CEA 2University of Seville (Spain) LIST (France)

12:50 – 2:00 pm Lunch ISS-330 Non-uniform integration of TDCI captures, Paul Eberhart, University of Kentucky (United States) PLENARY: VR/AR Future Technology 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) 5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young 2:00 – 3:10 pm Professionals Research Grand Peninsula Ballroom D Quality Screen Time: Leveraging Computational Displays for Spatial Computing, Douglas Lanman, director, Display Systems Research, Facebook Reality Labs (United States) For abstract and speaker biography, see page 7

3:10 – 3:30 pm Coffee Break

Depth Sensing II

Session Chairs: Sergio Goma, Qualcomm Technologies, Inc. (United States) and Radka Tezaur, Intel Corporation (United States) 3:30 – 4:30 pm Regency A 3:30 ISS-272 A short-pulse based time-of-flight image sensor using 4-tap charge- modulation pixels with accelerated carrier response, Michihiro Inoue, Shohei Daikoku, Keita Kondo, Akihito Komazawa, Keita Yasutomi,

Keiichiro Kagawa, and Shoji Kawahito, Shizuoka University (Japan) ISS

3:50 ISS-273 Single-shot multi-frequency pulse-TOF depth imaging with sub-clock shifting for multi-path interference separation, Tomoya Kokado1, Yu Feng1, Masaya Horio1, Keita Yasutomi1, Shoji Kawahito1, Takashi Komuro2, Hajime Ngahara3, and Keiichiro Kagawa1; 1Shizuoka University, 2Saitama University, and 3Institute for Datability Science, Osaka University (Japan)

electronicimaging.org #EI2020 85 Where Industry and Academia Meet to discuss Imaging Across Applications IRIACV IRIACV Conference Chairs: Henry Y.T. Ngan, ENPS Intelligent Robotics and Industrial Applications Hong Kong (China); Kurt Niel, Upper Austria University of Applied Sciences (Austria); and using Computer Vision 2020 Juha Röning, University of Oulu (Finland) Conference overview Program Committee: Philip Bingham, Oak This conference brings together real-world practitioners and researchers in intelligent robots Ridge National Laboratory (United States); and computer vision to share recent applications and developments. Topics of interest in- Ewald Fauster, Montan Universitat Leoben clude the integration of imaging sensors supporting hardware, computers, and algorithms for (Austria); Steven Floeder, 3M Company (United intelligent robots, manufacturing inspection, characterization, and/or control. States); David Fofi, University de Bourgogne (France); Shaun Gleason, Oak Ridge National Laboratory (United States); B. Keith Jenkins, The The decreased cost of computational power and vision sensors has motivated the rapid University of Southern California (United States); proliferation of machine vision technology in a variety of industries, including aluminum, Olivier Laligant, University de Bourgogne automotive, forest products, textiles, glass, steel, metal casting, aircraft, chemicals, food, (France); Edmund Lam, The University of Hong fishing, agriculture, archaeological products, medical products, artistic products, etc. Other Kong (Hong Kong); Dah-Jye Lee, Brigham industries, such as semiconductor and electronics manufacturing, have been employing Young University (United States); Junning machine vision technology for several decades. Machine vision supporting handling robots Li, Keck School of Medicine, University of is another main topic. With respect to intelligent robotics another approach is sensor fusion – Southern California (United States); Wei Liu, The combining multi-modal sensors in audio, location, image and video data for signal process- University of Sheffield (United Kingdom); Charles ing, machine learning and computer vision, and additionally other 3D capturing devices. McPherson, Draper Laboratory (United States); Fabrice Meriaudeau, University de Bourgogne (France); Lucas Paletta, JOANNEUM Research There is a need of accurate, fast, and robust detection of objects and their position in space. Forschungsgesellschaft mbH (Austria); Vincent Their surface, the background, and illumination is uncontrolled; in most cases the objects Paquit, Oak Ridge National Laboratory (United of interest are within a bulk of many others. For both new and existing industrial users of States); Daniel Raviv, Florida Atlantic University machine vision, there are numerous innovative methods to improve productivity, quality, (United States); Hamed Sari-Sarraf, Texas and compliance with product standards. There are several broad problem areas that have Tech University (United States); Ralph Seulin, received significant attention in recent years. For example, some industries are collecting University de Bourgogne (France); Christophe enormous amounts of image data from product monitoring systems. New and efficient Stolz, University de Bourgogne (France); Svorad methods are required to extract insight and to perform process diagnostics based on this Štolc, AIT Austrian Institute of Technology historical record. Regarding the physical scale of the measurements, microscopy techniques GmbH (Austria); Bernard Theisen, U.S. Army Tank Automotive Research, Development and are nearing resolution limits in fields such as semiconductors, biology, and other nano-scale Engineering Center (United States); Seung-Chul technologies. Techniques such as resolution enhancement, model-based methods, and sta- Yoon, United States Department of Agriculture tistical imaging may provide the means to extend these systems beyond current capabilities. Agricultural Research Service (United States); Furthermore, obtaining real-time and robust measurements in-line or at-line in harsh industrial Gerald Zauner, FH OÖ– Forschungs & environments is a challenge for machine vision researchers, especially when the manufac- Entwicklungs GmbH (Austria); and Dili Zhang, turer cannot make significant changes to their facility or process. Monotype Imaging (United States)

Award Best Student Paper

86 #EI2020 electronicimaging.org

INTELLIGENT ROBOTICS AND INDUSTRIAL APPLICATIONS

USING COMPUTER VISION 2020 IRIACV

Monday, January 27, 2020 11:50 IRIACV-051 Real-time small-object change detection from ground vehicles using a Siamese convolutional neural network (JIST-first), Sander Klomp, Dennis Robotics van de Wouw, and Peter de With, Eindhoven University of Technology (the Netherlands) Session Chair: Juha Röning, University of Oulu (Finland) 12:10 IRIACV-052 8:45 – 10:10 am Perceptual license plate super-resolution with CTC loss, Zuzana Bilkova1,2 Regency A and Michal Hradis3; 1Charles University, 2Institute of Information Theory and 8:45 Automation, and 3Brno University of Technology (Czechia) Conference Welcome 12:30 – 2:00 pm Lunch 8:50 IRIACV-013 Passive infrared markers for indoor robotic positioning and navigation, Jian Chen, AltVision, Inc. (United States) PLENARY: Frontiers in Computational Imaging

9:10 IRIACV-014 Session Chairs: Radka Tezaur, Intel Corporation (United States), and Improving multimodal localization through self-supervision, Robert Jonathan Phillips, Google Inc. (United States) Relyea, Darshan Ramesh Bhanushali, Karan Manghi, Abhishek Vashist, Clark Hochgraf, Amlan Ganguly, Andres Kwasinski, Michael Kuhl, and 2:00 – 3:10 pm Ray Ptucha, Rochester Institute of Technology (United States) Grand Peninsula Ballroom D Imaging the Unseen: Taking the First Picture of a Black Hole, Katie 9:30 IRIACV-015 Bouman, assistant professor, Computing and Mathematical Sciences Creation of a fusion image obtained in various electromagnetic ranges Department, California Institute of Technology (United States) used in industrial robotic systems, Evgeny Semenishchev1 and Viacheslav Voronin2; 1Moscow State Technical University (STANKIN) and 2Don State For abstract and speaker biography, see page 7 Technical University (Russian Federation)

9:50 IRIACV-016 3:10 – 3:30 pm Coffee Break Locating mechanical switches using RGB-D sensor mounted on a dis- aster response robot, Takuya Kanda1, Kazuya Miyakawa1, Jeonghwang Hayashi1, Jun Ohya1, and Hiroyuki Ogata2; 1Waseda University and Computer Vision & Inspection 2 Seikei University (Japan) Session Chair: Kurt Niel, University of Applied Sciences Upper Austria (Austria) 10:10 – 10:50 am Coffee Break 3:30 – 4:50 pm Regency A Machine Learning 3:30 IRIACV-070 Session Chairs: Kurt Niel, University of Applied Sciences Upper Austria Estimating vehicle fuel economy from overhead camera imagery (Austria) and Juha Röning, University of Oulu (Finland) and application for traffic control,Thomas Karnowski1, Ryan Tokola1, 1 2 3 3 1 10:50 am – 12:30 pm Sean Oesch , Matthew Eicholtz , Jeff Price , and Tim Gee ; Oak Ridge National Laboratory, 2Florida Southern College, and 3GRIDSMART (United Regency A States) 10:50 IRIACV-048 A review and quantitative evaluation of small face detectors in deep 3:50 IRIACV-071 learning, Weihua Xionog, EagleSens Inc. (United States) Tailored photometric stereo: Optimization of light source positions for different materials, Christian Kapeller1,2, Doris Antensteiner1, Thomas 11:10 IRIACV-049 Pinez1, Nicole Brosch1, and Svorad Štolc1; 1AIT Austrian Institute of Rare-class extraction using cascaded pretrained networks applied to Technology GmbH and 2Vienna University of Technology (Austria) crane classification,Sander Klomp1,2, Guido Brouwers2, Rob Wijnhoven2, and Peter de With1; 1Eindhoven University of Technology and 2ViNotion 4:10 IRIACV-072 (the Netherlands) Crowd congestion detection in videos, Sultan Daud Khan1, Habib Ullah1, Mohib Ullah2, and Faouzi Alaya Cheikh2; 1University of Ha’il (Saudi 11:30 IRIACV-050 Arabia) and 2Norwegian University of Science and Technology (Norway) Detection and characterization of rumble strips in roadway video logs, Deniz Aykac, Thomas Karnowski, Regina Ferrell, and James Goddard, Oak Ridge National Laboratory (United States)

electronicimaging.org #EI2020 87 Intelligent Robotics and Industrial Applications using Computer Vision 2020 IRIACV 4:30 IRIACV-074 Head-based tracking, Mohib Ullah1, Habib Ullah2, Kashif Ahmad3, Ali Shariq Imran1, and Faouzi Alaya Cheikh1; 1Norwegian University of Science and Technology (Norway), 2University of Ha’il (Saudi Arabia), and 3Hamad Bin Khalifa University (Qatar)

5:00 – 6:00 pm All-Conference Welcome Reception

Wednesday, January 29, 2020

10:00 am – 3:30 pm Industry Exhibition - Wednesday

10:10 – 11:00 am Coffee Break

12:30 – 2:00 pm Lunch

PLENARY: VR/AR Future Technology

Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) 2:00 – 3:10 pm Grand Peninsula Ballroom D Quality Screen Time: Leveraging Computational Displays for Spatial Computing, Douglas Lanman, director, Display Systems Research, Facebook Reality Labs (United States) For abstract and speaker biography, see page 7

3:10 – 3:30 pm Coffee Break

Intelligent Robotics and Industrial Applications using Computer Vision 2020 Interactive Papers Session 5:30 – 7:00 pm Sequoia The following works will be presented at the EI 2020 Symposium Interactive Papers Session. IRIACV-325 An evaluation of embedded GPU systems for visual SLam algorithms, Tao Peng, Dingnan Zhang, Don Nirmal, and John Loomis, University of Dayton (United States)

IRIACV-326 An evaluation of visual SLam methods on NVIDIA Jetson Systems, Dingnan Zhang, Tao Peng, Don Nirmal, and John Loomis, University of Dayton (United States)

5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Professionals Research

88 #EI2020 electronicimaging.org MAAP Where Industry and Academia Meet to discuss Imaging Across Applications

Conference Chairs: Mathieu Hebert, Université Material Appearance 2020 Jean Monnet de Saint Etienne (France); Lionel Simonot, Université de Poitiers (France); and Conference overview Ingeborg Tastl, HP Inc. (United States) The rapid and continuous development of rendering simulators and devices such as displays and printers offers interesting challenges related to how the appearance of materials is un- Program Committee: Simone Bianco, University derstood. Over the years, researchers from different disciplines, including metrology, optical of Milan (Italy); Marc Ellens, Artomatix (United Susan P. Farnand, modeling, and digital simulation, have studied the interaction of incident light with the texture States); Rochester Institute of Technology (United States); Roland Fleming, and surface geometry of a given object, as well as the optical properties of distinct materi- Justus-Liebig-Universität Giessen (Germany); Jon als. Thanks to those efforts, we have been able to propose methods for characterizing the Yngve Hardeberg, Norwegian University of optical and visual properties of many materials, propose affordable measurement methods, Science and Technology (Norway); Francisco MAAP predict optical properties or appearance attributes, and render 2.5D and 3D objects and H. Imai, Apple Inc. (United States); Susanne scenes with high accuracy. Klein, University of the West of England (United Kingdom); Gael Obein, Conservatoire National This conference offers the possibility to share research results and establish new collabora- des Arts et Metiers (France); Carinna Parraman, tions between academic and industrial researchers from these related fields. University of the West of England (United Kingdom); Holly Rushmeier, Yale University (United States); Takuroh Sone, Ricoh Japan Award (Japan); Shoji Tominaga, Chiba University Best Paper Award (Japan); and Philipp Urban, Fraunhofer Institute for Computer Graphics Research IGD (Germany)

Conference Sponsors

®

electronicimaging.org #EI2020 89 , 1 -

ning, MAAP-061 MAAP-061 MAAP-060 MAAP-060 , Sony George 1 #EI2020 electronicimaging.org Norwegian University of Science 1 ; 2 , Jon Yngve Hardeberg 1 University of Oxford (United Kingdom) 2 12:30 – 2:00 pm Lunch 3:10 – 3:30 pm Coffee Break , and Hannah Smithson 2 5:00 – 6:00 pm All-Conference Welcome Reception 5:00 – 6:00 pm All-Conference Welcome

PLENARY: Frontiers in Computational Imaging Frontiers in Computational PLENARY: and Intel Corporation (United States), Session Chairs: Radka Tezaur, Jonathan Phillips, Google Inc. (United States) 2:00 – 3:10 pm Grand Peninsula Ballroom D Katie the First Picture of a Black Hole, Imaging the Unseen: Taking Computing and Mathematical Sciences Bouman, assistant professor, (United States) Department, California Institute of Technology see page 7 For abstract and speaker biography, 3:50 Joshua Harvey Aging and Renewing Chiba University (Japan) Session Chair: Shoji Tominaga, 3:30 – 4:10 pm Regency C 3:30 wood caused by (ac Changes in the visual appearance of polychrome celerated) aging, Oleksii Sidorov DISCUSSION; Material Appearance MorningDISCUSSION; Material Q&A Hebert,Session Chairs: Mathieu Université Jean Monnet de Saint Etienne Inc. (United States) HP Labs, HP (France) and Ingeborg Tastl, 12:10 – 12:30 pm Regency C Image processing method for renewing old objects using deep lear Image processing method for renewing old objects DISCUSSION: Material Appearance Afternoon Q&A Session Chairs: Mathieu Hebert, Université Jean Monnet de Saint Etienne HP Labs, HP Inc. (United States) (France) and Ingeborg Tastl, 4:10 – 4:30 pm Regency C Runa Takahashi and Katsunori Okajima, Yokohama National University Katsunori Okajima, Yokohama and Runa Takahashi (Japan) and Technology (Norway)and and Technology ; 1

Naoki MAAP-033 MAAP-033 MAAP-032 MAAP-032 MAAP-030 MAAP-030 MAAP-031 MAAP-031 MAAP-020 , and Norihiro Tanaka 2

Norwegian University of Science and 2

, Giuseppe Guarnera 1,2

10:10 – 10:50 am Coffee Break Biographies and/or abstracts for all keynotes are found on pages Biographies and/or abstracts for all keynotes 9–14 KEYNOTE: 3D Digitization and Optical Material Interactions KEYNOTE: 3D Digitization HP Labs, HP Inc. (United States) Session Chair: Ingeborg Tastl, 9:30 – 10:10 am Regency C The physical Capturing and 3D rendering of optical behavior: approach to realism, Martin Ritz, deputy head, Competence Center for Computer Cultural Heritage Digitizatiion, Fraunhofer Institute Graphics Research (Germany) Nagano University (Japan) and 11:50 Technology (Norway) Technology 11:30 Sparkle, Gloss, Texture, and Translucency Sparkle, Gloss, Texture, Session Chair: Mathieu Hebert, de Saint Etienne Université Jean Monnet (France) 90 BTF image recovery based on U-Net and texture interpolation, Caustics and translucency perception, Davit Gigilashvili, Lucas Dubouchet, Jon Yngve Hardeberg, and Marius Pedersen, Norwegian (Norway) University of Science and Technology 10:50 evaluating the sparkle One-shot multi-angle measurement device for Ltd. (Japan) Ricoh Company, impression (JIST-first), Shuhei Watanabe, Appearance reproduction of material surface with strong specular reflection, Shoji Tominaga 1 Chiba University (Japan) and Keita Hirai, Tada 10:50 am – 12:10 pm Regency C Material Appearance 2020 Conference Introduction Material Appearance 2020 Hebert,Session Chair: Mathieu Université Jean Monnet de Saint Etienne (France) 9:20 – 9:30 am Regency C MATERIAL APPEARANCE 2020 APPEARANCE MATERIAL January 27, 2020 Monday,

11:10

MAAP Material Appearance 2020

Tuesday, January 28, 2020 10:40 MAAP-396 From color and spectral reproduction to appearance, BRDF, and beyond, Jon Yngve Hardeberg, Norwegian University of Science and 7:30 – 8:45 am Women in Electronic Imaging Breakfast; Technology (NTNU) (Norway) pre-registration required 11:10 MAAP-120 HP 3D color gamut – A reference system for HP’s Jet Fusion 580 color Skin and Deep Learning Joint Session 3D printers, Ingeborg Tastl1 and Alexandra Ju2; 1HP Labs, HP Inc. and 2HP Inc. (United States) Session Chairs: Alessandro Rizzi, Università degli Studi di Milano (Italy) and Ingeborg Tastl, HP Labs, HP Inc. (United States) 11:30 COLOR-121 Spectral reproduction: Drivers, use cases, and workflow, 8:45 – 9:30 am Tanzima Habib, Phil Green, and Peter Nussbaum, Norwegian University of Science Regency C MAAP and Technology (Norway) This session is jointly sponsored by: Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 11:50 COLOR-122 2020. Parameter estimation of PuRet algorithm for managing appearance of material objects on display devices (JIST-first),Midori Tanaka, Ryusuke 8:45 Arai, and Takahiko Horiuchi, Chiba University (Japan) Conference Welcome 12:10 COLOR-123 8:50 MAAP-082 Beyond color correction: Skin color estimation in the wild through deep Colorimetrical performance estimation of a reference hyperspectral microscope for color tissue slides assessment, learning, Robin Kips, Quoc Tran, Emmanuel Malherbe, and Matthieu Paul Lemaillet and Wei- Chung Cheng, US Food and Drug Administration (United States) Perrot, L’Oréal Research and Innovation (France)

9:10 COLOR-083 12:30 – 2:00 pm Lunch SpectraNet: A deep model for skin oxygenation measurement from multi-spectral data, Ahmed Mohammed, Mohib Ullah, and Jacob Bauer, Norwegian University of Science and Technology (Norway) PLENARY: Automotive Imaging

Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) Spectral Dataset Joint Session 2:00 – 3:10 pm Session Chair: Ingeborg Tastl, HP Labs, HP Inc. (United States) Grand Peninsula Ballroom D 9:30 – 10:10 am Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior Regency C vice president, hardware development, NVIDIA Corporation (United This session is jointly sponsored by: Color Imaging XXV: Displaying, States) Processing, Hardcopy, and Applications, and Material Appearance For abstract and speaker biography, see page 7 2020. 9:30 MAAP-106 3:10 – 3:30 pm Coffee Break Visible to near infrared reflectance hyperspectral images dataset for image sensors design, Axel Clouet1, Jérôme Vaillant1, and Célia Viola2; 5:30 – 7:30 pm Symposium Demonstration Session 1CEA-LETI and 2CEA-LITEN (France)

9:50 MAAP-107 A multispectral dataset of oil and watercolor paints, Vahid Babaei1, Azadeh Asadi Shahmirzadi2, and Hans-Peter Seidel1; 1Max-Planck-Institut für Informatik and 2Consultant (Germany)

10:00 am – 7:30 pm Industry Exhibition - Tuesday

10:10 – 10:40 am Coffee Break

Color and Appearance Reproduction Joint Session Session Chair: Mathieu Hebert, Université Jean Monnet de Saint Etienne (France) 10:40 am – 12:30 pm Regency C This session is jointly sponsored by: Color Imaging XXV: Displaying, Processing, Hardcopy, and Applications, and Material Appearance 2020.

electronicimaging.org #EI2020 91 Where Industry and Academia Meet to discuss Imaging Across Applications MWSF

Conference Chairs: Adnan M. Alattar, Media Watermarking, Security, Digimarc Corporation (United States), Nasir D. Memon, Tandon School of and Forensics 2020 Engineering, New York University (United States), and Gaurav Sharma, University of Conference overview Rochester (United States) The ease of capturing, manipulating, distributing, and consuming digital media (e.g., imag- es, audio, video, graphics, and text) has enabled new applications and brought a number Program Committee: Mauro Barni, University of important security challenges to the forefront. These challenges have prompted significant degli Studi di Siena (Italy); Sebastiano Battiato, research and development in the areas of digital watermarking, steganography, data hid- University degli Studi di Catania (Italy); Marc ing, forensics, media identification, biometrics, and encryption to protect owners’ rights, Chaumont, Laboratory d’lnformatique de establish provenance and veracity of content, and to preserve privacy. Research results in Robotique et de Microelectronique de Montpellier (France); Scott A. Craver, Binghamton University these areas has been translated into new paradigms and applications for monetizing media (United States); Edward J. Delp, Purdue University while maintaining ownership rights, and new biometric and forensic identification techniques (United States); Jana Dittmann, Otto-von-Guericke- for novel methods for ensuring privacy. University Magdeburg (Germany); Gwenaël Doërr, ContentArmor SAS (France); Jean-luc The Media Watermarking, Security, and Forensics Conference is a premier destination for Dugelay, EURECOM (France); Touradj Ebrahimi,

MWSF disseminating high-quality, cutting-edge research in these areas. The conference provides École Polytechnique Fédérale de Lausanne an excellent venue for researchers and practitioners to present their innovative work as well (EPFL) (Switzerland); Maha El Choubassi, Intel as to keep abreast of the latest developments in watermarking, security, and forensics. Early Corporation (United States); Jessica Fridrich, results and fresh ideas are particularly encouraged and supported by the conference review Binghamton University (United States); Anthony T. S. Ho, University of Surrey (United Kingdom); Jiwu format: only a structured abstract describing the work in progress and preliminary results is Huang, Sun Yat-Sen University (China); Andrew initially required and the full paper is requested just before the conference. A strong focus D. Ker, University of Oxford (United Kingdom); on how research results are applied by industry, in practice, also gives the conference its Matthias Kirchner, Binghamton University (United unique flavor. States); Alex C. Kot, Nanyang Technological University (Singapore); Chang-Tsun Li, The University of Warwick (United Kingdom); Jennifer Newman, Iowa State University (United States); William Puech, Laboratory d’Informatique de Robotique et de Microelectronique de Montpellier (France); Husrev Taha Sencar, TOBB University of Economics and Technology (Turkey); Yun-Qing Shi, New Jersey Institute of Technology (United States); Ashwin Swaminathan, Magic Leap, Inc. (United States); Robert Ulichney, HP Inc. (United States); Claus Vielhauer, Fachhochschule Brandenburg (Germany); Svyatoslav V. Voloshynovskiy, University de Genève (Switzerland); and Chang Dong Yoo, Korea Advanced Institute of Science and Technology (Republic of Korea)

Conference Sponsor

92 #EI2020 electronicimaging.org

MEDIA WATERMARKING, SECURITY, AND FORENSICS 2020

Monday, January 27, 2020 PLENARY: Frontiers in Computational Imaging

Session Chairs: Radka Tezaur, Intel Corporation (United States), and KEYNOTE: Watermarking and Recycling Jonathan Phillips, Google Inc. (United States) Session Chair: Adnan Alattar, Digimarc Corporation (United States) 2:00 – 3:10 pm 8:55 – 10:00 am Grand Peninsula Ballroom D Cypress A Imaging the Unseen: Taking the First Picture of a Black Hole, Katie Bouman, assistant professor, Computing and Mathematical Sciences 8:55 Department, California Institute of Technology (United States) Conference Welcome For abstract and speaker biography, see page 7 9:00 MWSF-017 Watermarking to turn plastic packaging from waste to asset through 3:10 – 3:30 p m Coffee Break improved optical tagging,, Larry Logan, chief evangelist, Digimarc Corporation (United States) Deep Learning Steganalysis Biographies and/or abstracts for all keynotes are found on pages MWSF 9–14 Session Chair: Adnan Alattar, Digimarc Corporation (United States) 3:30 – 5:10 pm 10:10 – 10:30 am Coffee Break Cypress A 3:30 MWSF-075 Watermark JPEG steganalysis detectors scalable with respect to compression quality, Yassine Yousfi and Jessica Fridrich, Binghamton University (United Session Chair: Robert Ulichney, HP Labs, HP Inc. (United States) States)

10:30 am – 12:10 pm 3:55 MWSF-076 Cypress A Detection of malicious spatial-domain steganography over noisy chan- 10:30 MWSF-021 nels using convolutional neural networks, Swaroop Shankar Prasad1, Reducing invertible embedding distortion using graph matching model, Ofer Hadar2, and Ilia Polian1; 1University of Stuttgart (Germany) and 2Ben- Hanzhou Wu and Xinpeng Zhang, Shanghai University (China) Gurion University of the Negev (Israel)

10:55 MWSF-022 4:20 MWSF-077 Watermarking in deep neural networks via error back-propagation, Semi-blind image resampling factor estimation for PRNU computation, Jiangfeng Wang, Hanzhou Wu, Xinpeng Zhang, and Yuwei Yao, Miroslav Goljan and Morteza Darvish Morshedi Hosseini, Binghamton Shanghai University (China) University (United States)

11:20 MWSF-023 4:45 MWSF-078 Signal rich art: Improvements and extensions, Ajith Kamath, Digimarc A CNN-based correlation predictor for PRNU-based image manipula- Corporation (United States) tion localization, Sujoy Chakraborty; Binghamton University and Stockton University (United States) 11:45 MWSF-024 Estimating watermark synchronization signal using partial pixel least 5:00 – 6:00 pm All-Conference Welcome Reception squares, Robert Lyons and Brett Bradley, Digimarc Corporation (United States)

12:30 – 2:00 pm Lunch

electronicimaging.org #EI2020 93 Media Watermarking, Security, and Forensics 2020

Tuesday, January 28, 2020 PLENARY: Automotive Imaging 7:30 – 8:45 am Women in Electronic Imaging Breakfast; Session Chairs: Radka Tezaur, Intel Corporation (United States), and pre-registration required Jonathan Phillips, Google Inc. (United States) 2:00 – 3:10 pm KEYNOTE: Technology in Context Grand Peninsula Ballroom D Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior Session Chair: Adnan Alattar, Digimarc Corporation (United States) vice president, hardware development, NVIDIA Corporation (United 9:00 – 10:00 am States) Cypress A For abstract and speaker biography, see page 7 MWSF-102 Technology in context: Solutions to foreign propaganda and disin- 3:10 – 3:30 pm Coffee Break formation, Samaruddin Stewart, technology and media expert, Global Engagement Center, US State Department, and Justin Maddox, adjunct professor, Department of Information Sciences and Technology, George Identification Mason University (United States) Session Chair: Adnan Alattar, Digimarc Corporation (United States) MWSF Biographies and/or abstracts for all keynotes are found on pages 3:30 – 5:10 pm 9–14 Cypress A 3:30 MWSF-215 10:00 am – 7:30 pm Industry Exhibition - Tuesday Score-based likelihood ratios in camera device identification,Stephanie 10:10 – 10:30 am Coffee Break Reinders, Li Lin, Wenhao Chen, Yong Guan, and Jennifer Newman, Iowa State University (United States)

DeepFakes 3:55 MWSF-216 Camera unavoidable scene watermarks: A method for forcibly convey- Session Chair: Gaurav Sharma, University of Rochester (United States) ing information onto photographs, Clark Demaree and Henry Dietz, 10:30 am – 12:10 pm University of Kentucky (United States) Cypress A 4:20 MWSF-217 10:30 MWSF-116 A deep learning approach to MRI scanner manufacturer and model Detecting “deepfakes” in H.264 video data using compression ghost identification,Shengbang Fang1, Ronnie Sebro2, and Matthew Stamm1; artifacts, Raphael Frick, Sascha Zmudzinski, and Martin Steinebach, 1Drexel University and 2Hospital of the University of Pennsylvania (United Fraunhofer SIT (Germany) States)

10:55 MWSF-117 4:45 MWSF-218 A system for mitigating the problem of deepfake news videos using Motion vector based robust video hash, Huajian Liu, Sebastian Fach, and watermarking, Adnan Alattar, Ravi Sharma, and John Scriven, Digimarc Martin Steinebach, Fraunhofer SIT (Germany) Corporation (United States) 5:30 – 7:30 pm Symposium Demonstration Session 11:20 MWSF-118 Checking the integrity of images with signed thumbnail images, Martin Steinebach, Huajian Liu, Sebastian Jörg, and Waldemar Berchtold, Fraunhofer SIT (Germany)

11:45 MWSF-119 The effect of class definitions on the transferability of adversarial at- tacks against forensic CNNs, Xinwei Zhao and Matthew Stamm, Drexel University (United States)

12:30 – 2:00 pm Lunch

94 #EI2020 electronicimaging.org Media Watermarking, Security, and Forensics 2020

Wednesday, January 29, 2020 PLENARY: VR/AR Future Technology

Session Chairs: Radka Tezaur, Intel Corporation (United States), and KEYNOTE: Digital vs Physical Document Security Jonathan Phillips, Google Inc. (United States) Session Chair: Gaurav Sharma, University of Rochester (United States) 2:00 – 3:10 pm 9:00 – 10:00 am Grand Peninsula Ballroom D Cypress A Quality Screen Time: Leveraging Computational Displays for Spatial Computing, Douglas Lanman, director, Display Systems Research, MWSF-204 Facebook Reality Labs (United States) Digital vs physical: A watershed in document security, Ian Lancaster, holography and authentication specialist, Lancaster Consulting (United For abstract and speaker biography, see page 7 Kingdom) Biographies and/or abstracts for all keynotes are found on pages 3:10 – 3:30 pm Coffee Break 9–14 Steganography

10:00 am – 3:30 pm Industry Exhibition - Wednesday Session Chair: Jessica Fridrich, Binghamton University (United States) 10:10 – 10:30 am Coffee Break 3:30 – 5:10 pm Cypress A Physical Object Security 3:30 MWSF-289 MWSF Session Chair: Gaurav Sharma, University of Rochester (United States) Minimum perturbation cost modulation for side-informed steganogra- phy, Jan Butora and Jessica Fridrich, Binghamton University (United States) 10:30 am – 12:10 pm Cypress A 3:55 MWSF-290 Synchronizing embedding changes in side-informed steganography, 10:30 MWSF-398 Mehdi Boroumand and Jessica Fridrich, Binghamton University (United Smartphone systems for secure documents, Alan Hodgson, Alan States) Hodgson Consulting Ltd. (United Kingdom) 4:20 MWSF-291 10:55 MWSF-397 Generative text steganography based on adaptive arithmetic coding Embedding data in the blue channel*, Robert Ulichney, HP Labs, HP Inc. and LSTM network, Huixian Kang, Hanzhou Wu, and Xinpeng Zhang, (United States) Shanghai University (China) *Proceedings Note: A proceedings paper related to the Robert Ulichney talk will be found in the proceedings issue for the Color Imaging XXV: Displaying, 4:45 MWSF-292 Processing, Hardcopy, and Applications Conference. Analyzing the decoding rate of circular coding in a noisy transmission channel, Yufang Sun and Jan Allebach, Purdue University (United States) 11:20 MWSF-399 Physical object security (TBA), Gaurav Sharma, University of Rochester (United States) DISCUSSION: Concluding Remarks Session Chairs: Adnan Alattar, Digimarc Corporation (United States) and 11:45 MWSF-219 High-entropy optically variable device characterization – Facilitating Gaurav Sharma, University of Rochester (United States) multimodal authentication and capture of deep learning data, Mikael 5:10 – 5:20 pm Lindstrand, gonioLabs AB (Sweden) Cypress A

12:30 – 2:00 pm Lunch 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Professionals Research

electronicimaging.org #EI2020 95 MOBMU

Where Industry and AcademiaWhere Meet Industry and Academia Meet to discuss Imaging Across Applications to discuss Imaging Across Applications

Conference Chairs: David Akopian, The Mobile Devices and Multimedia: Enabling University of Texas at San Antonio (United States); Reiner Creutzburg, Technische Technologies, Algorithms, and Applications Hochschule Brandenburg (Germany) 2020 Program Committee: John Adcock, FX Palo Conference overview Alto Labororatory Inc. (United States); Sos Agaian, CSI City University of New York and The goal of this conference is to provide an international forum for presenting recent research The Graduate Center (CUNY) (United States); results on multimedia for mobile devices, and to bring together experts from both academia Faouzi Alaya Cheikh, Norwegian University of and industry for a fruitful exchange of ideas and discussion on future challenges. The authors Science and Technology (Norway); Noboru are encouraged to submit work-in-progress papers as well as updates on previously re- Babaguchi, Osaka University (Japan); Nina ported systems. Outstanding papers may be recommended for the publication in the Journal Bhatti, Kokko Inc. (United States); C.L. Philip Electronic Imaging or Journal of Imaging Science and Technology. Chen, University of Macau (Macao); Chang Wen Chen, The State University of New York at Buffalo (United States); Matthew Cooper, FX Awards Palo Alto Laboratory (United States); Kenneth Best Paper Crisler, Motorola, Inc. (United States); Francesco Best Student Paper De Natale, University degli Studi di Trento (Italy); Alberto Del Bimbo, University degli Studi di Firenze (Italy); Stefan Edlich, Technische Fachhochschule Berlin (Germany); Atanas Gotchev, Tampere University of Technology (Finland); Alan Hanjalic, Technische University Delft (the Netherlands); Alexander Hauptmann, Carnegie Mellon University (United States); Winston Hsu, National Taiwan University (Taiwan); Gang Hua, Stevens Institute of MOBMU Technology (United States); Catalin Lacatus, Qualcomm Technologies, Inc. (United States); Xin Li, West Virginia University (United States); Qian Lin, HP Inc. (United States); Gabriel Marcu, Apple Inc. (United States); Vasileios Mezaris, Informatics and Telematics Institute (Greece); Chong-Wah Ngo, City University of Hong Kong (China); Sethuraman Panchanathan, Arizona State University (United States); Kari Pulli, Meta Company (United States); Yong Rui, Microsoft Corporation (China); Olli Silvén, University of Oulu (Finland); John Smith, IBM Thomas J. Watson Research Center (United States); Hari Sundaram, Arizona State University (United States); Jarmo Takala, Tampere University of Technology (Finland); Marius Tico, Apple Inc. (United States); Meng Wang, National University of Singapore (Singapore); Rong Yan, Facebook Inc. (United States); Jun Yang, Facebook Inc. (United States)

96 #EI2020 electronicimaging.org

MOBILE DEVICES AND MULTIMEDIA: ENABLING TECHNOLOGIES, ALGORITHMS, AND APPLICATIONS 2020

Wednesday, January 29, 2020 11:50 MOBMU-254 Towards sector-specific security operation,Michael Pilgermann1, Sören Werth2, and Reiner Creutzburg1; 1 Technische Hochschule Brandenburg Imaging Technologies and 2Technische Hochschule Lübeck (Germany)

Session Chair: Reiner Creutzburg, Technische Hochschule Brandenburg 12:10 MOBMU-402 (Germany) Situational Strategic Awareness Monitoring Surveillance System 9:05 – 10:10 am (SSAMSS) - Microcomputer and microcomputer clustering used for intelligent, economical, scalable, and deployable approach for materi- Sandpebble A/B als (JIST-first), Kenly Maldonado and Steven Simske, Colorado State 9:05 University (United States) Conference Welcome

9:10 MOBMU-205 12:30 – 2:00 pm Lunch Strategies of using ACES look modification transforms (LMTs) in a VFX environment, Eberhard Hasche, Oliver Karaschewski, and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany) PLENARY: VR/AR Future Technology

9:30 MOBMU-206 Session Chairs: Radka Tezaur, Intel Corporation (United States), and Creating high resolution 360° 15:1-content for a conference room us- Jonathan Phillips, Google Inc. (United States) ing film compositing technologies,Eberhard Hasche, Dominik Benning, 2:00 – 3:10 pm Oliver Karaschewski, and Reiner Creutzburg, Technische Hochschule Grand Peninsula Ballroom D Brandenburg (Germany) Quality Screen Time: Leveraging Computational Displays for Spatial 9:50 MOBMU-207 Computing, Douglas Lanman, director, Display Systems Research, JAB code - A versatile polychrome 2D barcode, Waldemar Berchtold Facebook Reality Labs (United States) and Huajian Liu, Fraunhofer SIT (Germany) For abstract and speaker biography, see page 7 MOBMU

10:00 am – 3:30 pm Industry Exhibition - Wednesday 3:10 – 3:30 pm Coffee Break 10:10 – 10:50 am Coffee Break

IoT, Security Emerging Technologies Session Chair: Eberhard Hasche, TH Brandenburg (Germany) Session Chair: Eberhard Hasche, TH Brandenburg (Germany) 3:30 – 4:50 pm 10:50 – 11:10 am Sandpebble A/B Sandpebble A/B 3:30 MOBMU-275 MOBMU-232 Security and privacy investigation of Wi-Fi connected and app- The human factor and social engineering - Personality traits and controlled IoT-based consumer market smart light bulbs, Franziska personality types as a basis for security awareness, Nicole Malletzky, Schwarz1, Klaus Schwarz1, Reiner Creutzburg1, and David Akopian2; Michael Pilgermann, and Reiner Creutzburg, Technische Hochschule 1Technische Hochschule Brandenburg (Germany) and 2The University of Brandenburg (Germany) Texas at San Antonio (United States)

3:50 MOBMU-276 Infrastructure Security New methodology and checklist of Wi-Fi connected and app-controlled IoT-based consumer market smart home devices, 1 Session Chair: Eberhard Hasche, TH Brandenburg (Germany) Franziska Schwarz , Klaus Schwarz1, Reiner Creutzburg1, and David Akopian2; 1Technische 11:10 am – 12:10 pm Hochschule Brandenburg (Germany) and 2The University of Texas at San Sandpebble A/B Antonio (United States) 11:10 MOBMU-252 4:10 MOBMU-277 Measuring IT security, compliance, and digital sovereignty within small Conception and implementation of a course for professional training and medium-sized IT enterprises, Andreas Johannsen, Daniel Kant, and and education in the field of IoT and smart home security, Michael Reiner Creutzburg, Technische Hochschule Brandenburg (Germany) Pilgermann, Thomas Bocklisch, and Reiner Creutzburg, Technische 11:30 MOBMU-253 Hochschule Brandenburg (Germany) Investigation of risks for critical infrastructures due to the exposure of SCADA systems and industrial controls on the Internet based on the search engine Shodan, Daniel Kant, Reiner Creutzburg, and Andreas Johannsen, Technische Hochschule Brandenburg (Germany) electronicimaging.org #EI2020 97 Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2020

4:30 MOBMU-278 Conception and implementation of professional laboratory exercises in the field of open source intelligence (OSINT),Klaus Schwarz, Franziska Schwarz, and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)

Healthcare Technologies & Security

Session Chair: Eberhard Hasche, TH Brandenburg (Germany) 4:50 – 5:30 pm Sandpebble A/B 4:50 MOBMU-303 Mobile head tracking for eCommerce and beyond, Muratcan Cicek1, Jinrong Xie2, Qiaosong Wang2, and Robinson Piramuthu2; 1University of California, Santa Cruz and 2eBay Inc. (United States)

5:10 MOBMU-304 International biobanking interface service - Health sciences in the digital age, Christian Linke1, Rene Mantke1, and Reiner Creutzburg2; 1Brandenburg Medical School Theodor Fontane and 2 Technische Hochschule Brandenburg (Germany)

Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2020 Interactive Papers Session 5:30 – 7:00 pm

MOBMU Sequoia The following works will be presented at the EI 2020 Symposium Interactive Papers Session. MOBMU-331 AI-based anomaly detection for cyberattacks on Windows systems - Creation of a prototype for automated monitoring of the process environment, Benjamin Yüksel, Klaus Schwarz, and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)

MOBMU-332 An implementation of drone-projector: Stabilization of projected imag- es, Eunbin Choi, Younghyeon Park, and Byeungwoo Jeon, Sungkyunkwan University (Republic of Korea)

MOBMU-333 Cybersecurity and forensic challenges - A bibliographic review 2020, Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)

MOBMU-334 MessageSpace. Messaging systems for health research, Thinh Vo1, Sahak Kaghyan1, Rodrigo Escobar1, David Akopian1, Deborah Parra- Medina2, and Laura Esparza2; 1The University of Texas at San Antonio and 2University of Texas Health Science Center at San Antonio (United States)

MOBMU-335 Performance analysis of mobile cloud architectures for mHealth app, Devasena Inupakutika and David Akopian, The University of Texas at San Antonio (United States)

MOBMU-336 Secure remote service box, Klaus Schwarz and Reiner Creutzburg, Technische Hochschule Brandenburg (Germany)

5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session

5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young Professionals Research

98 #EI2020 electronicimaging.org SD&A Where Industry and AcademiaWhere Meet Industry and Academia Meet to discuss Imaging Across Applications to discuss Imaging Across Applications

Conference Chairs: Gregg E. Favalora, Draper Stereoscopic Displays and Applications XXXI (United States); Nicolas S. Holliman, Newcastle University (United Kingdom); Takashi Kawai, Conference overview Waseda University (Japan]; and Andrew J. The World’s Premier Conference for 3D Innovation Woods, Curtin University (Australia)) The Stereoscopic Displays and Applications Conference (SD&A) focuses on developments covering the entire stereoscopic 3D imaging pipeline from capture, processing, and display Program Committee: Neil A. Dodgson, Victoria Davide to perception. The conference brings together practitioners and researchers from industry University of Wellington (New Zealand); Gadia, University degli Studi di Milano (Italy); and academia to facilitate an exchange of current information on stereoscopic imaging Hideki Kakeya, University of Tsukuba (Japan); topics. The highly popular conference demonstration session provides authors with a perfect Stephan R. Keith, SRK Graphics Research additional opportunity to showcase their work. Large-screen stereoscopic projection is avail- (United States); Björn Sommer, Royal College able, and presenters are encouraged to make full use of these facilities during their presenta- of Art, London (United Kingdom); John D. Stern, tions. Publishing your work at SD&A offers excellent exposure—across all publication outlets, Intuitive Surgical, Inc. (Retired) (United States); SD&A has the highest proportion of papers in the top 100 cited papers in the stereoscopic and Chris Ward, Lightspeed Design, Inc. (United imaging field (Google Scholar, May 2013). States)

Awards Founding Chair: John O. Merritt, The Merritt Group (United States) Best use of in a presentation Best film (animation) Best film (live action) Conference Sponsors: Event 3D Theatre Session PROJECTION SYSTEM

3D THEATRE PARTNERS SD&A

electronicimaging.org #EI2020 99

STEREOSCOPIC DISPLAYS AND APPLICATIONS XXXI

Monday, January 27, 2020 12:10 SD&A-403 Monolithic surface-emitting electroholographic optical modulator, Gregg Favalora, Michael Moebius, Joy Perkinson, Elizabeth Brundage, William Human Factors in Stereoscopic Displays Joint Session Teynor, Steven Byrnes, James Hsiao, William Sawyer, Dennis Callahan, Ian Frank, and John LeBlanc, The Charles Stark Draper Laboratory, Inc. Session Chairs: Nicolas Holliman, University of Newcastle (United (United States) Kingdom) and Jeffrey Mulligan, NASA Ames Research Center (United States) 12:30 – 2:00 pm Lunch 8:4 5 – 10:10 am Grand Peninsula D This session is jointly sponsored by: Human Vision and Electronic Imaging PLENARY: Frontiers in Computational Imaging 2020, and Stereoscopic Displays and Applications XXXI. Session Chairs: Radka Tezaur, Intel Corporation (United States), and 8:45 Jonathan Phillips, Google Inc. (United States) Conference Welcome 2:00 – 3:10 pm 8:50 HVEI-009 Grand Peninsula Ballroom D Stereoscopic 3D optic flow distortions caused by mismatches between Imaging the Unseen: Taking the First Picture of a Black Hole, Katie image acquisition and display parameters (JIST-first), Alex Hwang and Bouman, assistant professor, Computing and Mathematical Sciences Eli Peli, Harvard Medical School (United States) Department, California Institute of Technology (United States) 9:10 HVEI-010 For abstract and speaker biography, see page 7 The impact of radial distortions in VR headsets on perceived surface slant (JIST-first),Jonathan Tong, Robert Allison, and Laurie Wilcox, York University (Canada) 3:10 – 3:30 pm Coffee Break

9:30 SD&A-011 Visual fatigue assessment based on multitask learning (JIST-first),Danli KEYNOTE: Immersive 3D Display Systems Wang, Chinese Academy of Sciences (China) Session Chair: Takashi Kawai, Waseda University (Japan) 9:50 SD&A-012 Depth sensitivity investigation on multi-view glasses-free 3D display, Di 3:30 – 4:30 pm Zhang1, Xinzhu Sang2, and Peng Wang2; 1Communication University of Grand Peninsula D China and 2Beijing University of Posts and Telecommunications (China) SD&A-065 High frame rate 3D-challenges, issues, and techniques for suc- 10:10 – 10:50 am Coffee Break cess, Larry Paul, executive director, Technology and Custom Solutions Enterprise and Entertainment, Christie Digital Systems (United States) SD&A 2020: Welcome & Introduction Biographies and/or abstracts for all keynotes are found on pages 9–14 Session Chair: Andrew Woods, Curtin University (Australia)

10:50 – 11:10 am 5:00 – 6:00 pm All-Conference Welcome Reception Grand Peninsula D SD&A Conference 3D Theatre SD&A Autostereoscopy I Producers: Dan Lawrence, Lightspeed Design Group (United States); John Session Chair: Bjorn Sommer, Royal College of Art (United Kingdom) Stern, retired (United States); Chris Ward, Lightspeed Design, Inc. (United 11:10 am – 12:30 pm States); and Andrew Woods, Curtin University (Australia) Grand Peninsula D 6:00 – 7:30 pm 11:10 SD&A-053 Grand Peninsula D Morpholo: A hologram generator algorithm, Enrique Canessa, ICTP This ever-popular session of each year’s Stereoscopic Displays and (Italy) Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened 11:30 SD&A-054 HoloExtension - AI-based 2D backwards compatible super-multiview in the 3D Theatre Session is shown in high-quality polarized 3D on a large display technology, Rolf-Dieter Naske, psHolix AG (Germany) screen. The final program will be announced at the conference and 3D glasses will be provided. 11:50 SD&A-055 Application of a high resolution autostereoscopic display for medi- cal purposes, Kokoro Higuchi, Ayuki Hayashishita, and Hideki Kakeya, University of Tsukuba (Japan)

100 #EI2020 electronicimaging.org Stereoscopic Displays and Applications XXXI

11:50 SD&A-141 SD&A Annual Conference Dinner Processing legacy underwater stereophotography for new applications, 7:50 – 10:00 pm Patrick Baker1, Trevor Winton2, Daniel Adams3, and Andrew Woods3; Offsite Restaurant 1Western Australian Museum, 2Flinders University of South Australia, and The annual informal dinner for SD&A attendees. An opportunity to meet 3Curtin University (Australia) with colleagues and discuss the latest advances. There is no host for the 12:10 SD&A-142 dinner. Information on venue and cost will be provided during the first day Multifunctional stereoscopic machine vision system with multiple 3D of the conference. outputs, Vasily Ezhov, Natalia Vasilieva, Peter Ivashkin, and Alexander Galstian, GPI RAS (Russian Federation) Tuesday, January 28, 2020 12:30 – 2:00 pm Lunch 7:30 – 8:45 am Women in Electronic Imaging Breakfast; pre-registration required PLENARY: Automotive Imaging Session Chairs: Radka Tezaur, Intel Corporation (United States), and Autostereoscopy II Jonathan Phillips, Google Inc. (United States) Session Chair: Gregg Favalora, Draper Laboratory (United States) 2:00 – 3:10 pm 8:50 – 10:10 am Grand Peninsula Ballroom D Grand Peninsula D Imaging in the Autonomous Vehicle Revolution, Gary Hicok, senior vice president, hardware development, NVIDIA Corporation (United 8:50 SD&A-098 States) Dynamic zero-parallax-setting techniques for multi-view autostereo- scopic display, Yuzhong Jiao, Mark Mok, Kayton Cheung, Man Chi For abstract and speaker biography, see page 7 Chan, and Tak Wai Shen, United Microelectronics Centre (Hong Kong) 3:10 – 3:30 pm Coffee Break 9:10 SD&A-099 Projection type 3D display using spinning screen, Hiroki Hayakawa and Tomohiro Yendo, Nagaoka University of Technology (Japan) 3D Developments

9:30 SD&A-100 Session Chair: John Stern, retired (United States) Full-parallax 3D display using time-multiplexing projection technology, 3:30 – 4:10 pm Takuya Omura, Hayato Watanabe, Naoto Okaichi, Hisayuki Sasaki, and Grand Peninsula D Masahiro Kawakita, NHK (Japan Broadcasting Corporation) (Japan) 3:30 SD&A-154 9:50 SD&A-101 CubicSpace: A reliable model for proportional, comfortable and Light field display using wavelength division multiplexing,Masaki universal capture and display of stereoscopic content, Nicholas Routhier, Yamauchi and Tomohiro Yendo, Nagaoka University of Technology (Japan) Mindtrick Innovations Inc. (Canada)

10:00 am – 7:30 pm Industry Exhibition - Tuesday 3:50 SD&A-155 A camera array system based on DSLR cameras for autostereoscopic 10:10 – 10:50 am Coffee Break prints, Tzung-Han Lin, Yu-Lun Liu, Chi-Cheng Lee, and Hsuan-Kai Huang, National Taiwan University of Science and Technology (Taiwan) Stereoscopic Image Processing

Session Chair: Nicolas Holliman, University of Newcastle (United KEYNOTE: Multiple Viewer Stereoscopic Displays Kingdom) Session Chair: Gregg Favalora, The Charles Stark Draper Laboratory, 10:50 am – 12:30 pm Inc. (United States) Grand Peninsula D 10:50 SD&A-138 4:10 – 5:10 pm

Objective and subjective evaluation of a multi-stereo 3D reconstruction Grand Peninsula D SD&A system, Christian Kapeller1,2, Braulio Sespede2, Matej Nezveda3, Matthias SD&A-400 Labschütz4, Simon Flöry4, Florian Seitner3, and Margrit Gelautz2; 1Austrian Challenges and solutions for multiple viewer stereoscopic displays, Institute of Technology, 2Vienna University of Technology, 3emotion3D Kurt Hoffmeister, Mechdyne Corp. (United States) GmbH, and 4Rechenraum e.U. (Austria) Biographies and/or abstracts for all keynotes are found on pages 11:10 SD&A-139 9–14 Flow map guided correction to stereoscopic panorama, Haoyu Wang, Daniel Sandin, and Dan Schonfeld, University of Illinois at Chicago (United 5:30 – 7:30 pm Symposium Demonstration Session States)

11:30 SD&A-140 Spatial distance-based interpolation algorithm for computer generated 2D+Z images, Yuzhong Jiao, Kayton Cheung, and Mark Mok, United Microelectronics Centre (Hong Kong)

electronicimaging.org #EI2020 101 Stereoscopic Displays and Applications XXXI

Wednesday, January 29, 2020 PLENARY: VR/AR Future Technology

Session Chairs: Radka Tezaur, Intel Corporation (United States), and KEYNOTE: Imaging Systems and Processing Joint Session Jonathan Phillips, Google Inc. (United States) Session Chairs: Kevin Matherson, Microsoft Corporation (United States) 2:00 – 3:10 pm and Dietmar Wueller, Image Engineering GmbH & Co. KG (Germany) Grand Peninsula Ballroom D 8:50 – 9:30 am Quality Screen Time: Leveraging Computational Displays for Spatial Regency A Computing, Douglas Lanman, director, Display Systems Research, Facebook Reality Labs (United States) This session is jointly sponsored by: The Engineering Reality of Virtual Reality 2020, Imaging Sensors and Systems 2020, and Stereoscopic For abstract and speaker biography, see page 7 ISS-189 Mixed reality guided neuronavigation for non-invasive brain stimu- 3:10 – 3:30 pm Coffee Break lation treatment, Christoph Leuze, research scientist in the Incubator for Medical Mixed and Extended Reality, Stanford University (United States) Visualization Facilities Joint Session Biographies and/or abstracts for all keynotes are found on pages Session Chairs: Margaret Dolinsky, Indiana University (United States) and 9–14 Andrew Woods, Curtin University (Australia) 3:30 – 4:10 pm Grand Peninsula D SD&A 3D Theatre – Spotlight This session is jointly sponsored by: The Engineering Reality of Virtual Session Chair: John Stern, retired (United States) Reality 2020, and Stereoscopic Displays and Applications XXXI. 9:40 – 10:10 am 3:30 SD&A-265 Immersive design engineering, Bjorn Sommer, Chang Lee, and Savina Grand Peninsula D Toirrisi, Royal College of Art (United Kingdom) This session is an opportunity to take an extended look at highlights from the Monday evening 3D Theatre session. 3:50 SD&A-266 Using a random dot stereogram as a test image for 3D demonstrations, 10:00 am – 3:30 pm Industry Exhibition - Wednesday Andrew Woods, Wesley Lamont, and Joshua Hollick, Curtin University (Australia) 10:10 – 10:50 am Coffee Break

Stereoscopic Perception and VR KEYNOTE: Visualization Facilities Joint Session Session Chair: Takashi Kawai, Waseda University (Japan) Session Chairs: Margaret Dolinsky, Indiana University (United States) 10:50 am – 12:10 pm and Andrew Woods, Curtin University (Australia) Grand Peninsula D 4:10 – 5:10 pm 10:50 SD&A-243 Grand Peninsula D Evaluating the stereoscopic display of visual entropy glyphs in com- This session is jointly sponsored by: The Engineering Reality of Virtual plex environments, Nicolas Holliman, University of Newcastle (United Reality 2020, and Stereoscopic Displays and Applications XXXI. Kingdom) ERVR-295 11:10 SD&A-244 Social holographics: Addressing the forgotten human factor, Derek Evaluating user experience of 180 and 360 degree images, Yoshihiro Van Tonder, business development manager, and Andy McCutcheon, SD&A Banchi, Keisuke Yoshikawa, and Takashi Kawai, Waseda University global sales manager for Aerospace & Defence, Euclideon (Japan) Holographics (Australia) Biographies and/or abstracts for all keynotes are found on pages 11:30 SD&A-245 9–14 Visual quality in VR head mounted device: Lessons learned making professional headsets, Bernard Mendiburu, Varjo (Finland)

11:50 SD&A-246 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session The single image stereoscopic auto-pseudogram – Classification and 5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young theory, Ilicia Benoit, National 3-D Day (United States) Professionals Research 12:30 – 2:00 pm Lunch

102 #EI2020 electronicimaging.org Where Industry and Academia Meet VDA to discuss Imaging Across Applications

Conference Chairs: Thomas Wischgoll, Wright Visualization and Data Analysis 2020 State University (United States); David Kao, NASA Ames Research Center (United States); VDA Conference overview and Yi-Jen Chiang, New York University (United The Conference on Visualization and Data Analysis (VDA) 2020 covers all research and States) development and application aspects of data visualization and visual analytics. Since the first VDA conference was held in 1994, the annual event has served as a major venue for Program Committee: Madjid Allili, Bishop’s Wes Bethel, visualization researchers and practitioners from around the world to present their work and University (Canada); Lawrence Berkeley National Laboratory (United States); share their experiences. Aashish Chaudhary, Kitware, Inc. (United States); Guoning Chen, University of Houston Award (United States); Joseph Cottam, Pacific Kostas Pantazos Memorial Award for Outstanding Paper Northwest National Laboratory (United States); Sussan Einakian, California Polytechnic State University (United States); Ulrich Engelke, CSIRO (Australia); Christina Gillman, University of Leipzig (Germany); Matti Gröhn, Finnish Institute of Occupational Health (Finland); Hanqi Guo, Argonne National Laboratory (United States); Ming Hao, conDati (United States); Christopher G. Healey, North Carolina State University (United States); Halldór Janetzko, University of Konstanz (Germany); Ming Jiang, Lawrence Livermore National Laboratory (United States); Andreas Kerren, Linnaeus University (Sweden); Robert Lewis, Washington State University (United States); Peter Lindstrom, Lawrence Livermore National Laboratory (United States); Zhanping Liu, Kentucky State University (United States); Aidong Lu, The University of North Carolina at Charlotte (United States); G. Elisabeta Marai, University of Illinois at Chicago (United States); Alex Pang, University of California, Santa Cruz (United States); Kristi Potter, National Renewable Energy Laboratory (United States); Theresa-Marie Rhyne, Computer Graphics and E-Learning (United States); René Rosenbaum, meeCoda (Germany); Jibonananda Sanyal, Oak Ridge National Laboratory (United States); Pinaki Sarder, University of Buffalo (United States); Graig Sauer, Towson University (United States); Jürgen Schulze, University of California, San Diego (United States); Kalpathi Subramanian, The University of North Carolina at Charlotte (United States); Chaoli Wang, University of Notre Dame (United States); Jie Yan, Bowie State University (United States); Leishi Zhang, Middlesex University London (United Kingdom); Song Zhang, Mississippi State University (United States); and Wenjin Zhou, Oakland University (United States)

electronicimaging.org #EI2020 103

VDA VISUALIZATION AND DATA ANALYSIS 2020

Wednesday, January 29, 2020 11:30 VDA-375 A visualization system for performance analysis of image classification models, Chanhee Park1, Hyojin Kim2, and Kyungwon Lee1; 1Ajou University 10:00 am – 3:30 pm Industry Exhibition - Wednesday (Republic of Korea) and 2Lawrence Livermore National Laboratory (United 10:10 – 11:00 am Coffee Break States)

12:30 – 2:00 pm Lunch 11:50 VDA-376 HashFight: A platform-portable hash table for multi-core and many- core architectures, Brenton Lessley1, Shaomeng Li2, and Hank Childs1; PLENARY: VR/AR Future Technology 1University of Oregon and 2National Center for Atmospheric Research (United States) Session Chairs: Radka Tezaur, Intel Corporation (United States), and Jonathan Phillips, Google Inc. (United States) 12:30 – 2:00 pm Lunch 2:00 – 3:10 pm Grand Peninsula Ballroom D KEYNOTE: Visualization and Cognition Quality Screen Time: Leveraging Computational Displays for Spatial Computing, Douglas Lanman, director, Display Systems Research, Session Chair: Thomas Wischgoll, Wright State University (United Facebook Reality Labs (United States) States) For abstract and speaker biography, see page 7 2:00 – 3:00 pm Regency C 3:10 – 3:30 pm Coffee Break VDA-386 Augmenting cognition through data visualization, Alark Joshi, data Visualization and Data Analysis 2020 Interactive Papers Session visualization researcher and associate professor, University of San Francisco (United States) 5:30 – 7:00 pm Biographies and/or abstracts for all keynotes are found on pages Sequoia 9–14 The Visualization and Data Analysis 2020 Conference work to be pre- sented at the EI 2020 Symposium Interactive Posters Session is listed in the Visualization and Data Analysis 2020 “Information Visualization” session 3:00 – 3:30 pm Coffee Break on Thursday afternoon. Information Visualization 5:30 – 7:00 pm EI 2020 Symposium Interactive Posters Session Session Chair: David Kao, NASA Ames Research Center (United States) 5:30 – 7:00 pm Meet the Future: A Showcase of Student and Young 3:30 – 4:20 pm Professionals Research Regency C 3:30 VDA-387 Thursday, January 30, 2020 A visualization tool for analyzing the suitability of software libraries via their code repositories, Casey Haber1 and Robert Gove2; 1Chartio and 10:10 – 11:00 am Coffee Break 2Two Six Labs (United States)

3:50 VDA-388 Scientific Visualization Visualization of search results of large document sets, James Anderson and Thomas Wischgoll, Wright State University (United States) Session Chair: Yi-Jen Chiang, New York University (United States) 11:05 am – 12:10 pm The title below is presented as an overview of the interactive presenta- Regency C tion at the EI 2020 Interactive Poster session on Wednesday evening. 11:05 Conference Welcome 4:10 VDA-389 Human-computer interface based on tongue and lips movements and its 11:10 VDA-374 application for speech therapy system, Barbara Zitova, Zuzana Bilkova, A gaze-contingent system for foveated multiresolution visualization Adam Novozamsky, and Michal Bartos, Institute of Information Theory and 1 2 of vector and volumetric data, Thanawut Ananpiriyakul , Josh Anghel , Automation, Czech Academy of Sciences (Czechia) Kristi Potter3, and Alark Joshi1; 1University of San Francisco, 2Boise State University, and 3National Renewal Energy Laboratory (United States)

104 #EI2020 electronicimaging.org electronicimaging.org #EI2020 electronicimaging.org ence &telerobotics,scientificvisualization,andmedicalimaging. and human-factorsevaluationofstereoscopicvideodisplaysfortelepres - Williamsburg, MA,withmorethan25yearsofexperienceinthedesign John O.MerrittisadisplaysystemsconsultantatTheGroup, Innovative EngineersbyAustralia. scopic imaging.In2017,hewasrecognizedasoneofAustralia’s Most bachelors, mastersandPhDdegreesinelectronicengineeringstereo - tion withapplicationsinoilandgasmaritimearchaeology. Hehas Technology University. atCurtin inimagingandvisualiza- Hehasexpertise and aseniorresearchfellowwiththeCentreforMarineScience Dr. AndrewWoods HIVEvisualizationfacility ismanageroftheCurtin mented reality. &aug- animation andcomputergraphics,datavisualization,virtual endoscopic surgery, simulation&trainingsystems,teleoperation video displaysystemsforapplicationssuchas:medicalimagingand Engineers, scientists,andprojectmanagersinvolvedwithimaging Intended Audience storage; stereoscopicdisplaysystemtechnologies;andhumanfactors. stereoscopic imageandvideotransmission,compression,processing, include: stereoscopicimagecaptureandcontentgeneration; menting, using,andoptimizingstereoscopic3Ddisplays.Topics covered course providesanunderstandingofthefundamentalscorrectlyimple- imaging, teleoperation,molecularmodeling,and3Dvisualization.This provide significantbenefitsinmanyapplicationareas,includingmedical When correctlyimplemented,stereoscopic3Dimagingsystemscan be includedinacost/benefitanalysisforproposed3Dapplications. List theoften-overlookedside-benefitsofstereoscopicdisplaysthatshould technologies andconsidersuitabilityforyourproposedapplications. Evaluate theoperatingprinciplesofcurrentlyavailablestereoscopicdisplay • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 Group AndrewWoods,Instructors: University, Curtin andJohnMerritt,TheMerritt Course Level:Introductory Course Length:4hours •8:00am–12:15pm Sunday 26January SC01: StereoscopicImagingFundamentals Sunday, 26,2020 January EI 2020SHORT COURSESANDDESCRIPTIONS comfort limitsfor on-screen parallaxvalues. comfort , focus/fixationmismatch, Understand conceptsoforthostereoscopy Understand thehumanfactorsofusingstereoscopicdisplays. stereoscopicimagegeometry.size, andviewingdistanceaffect Understand howcamerafocallength,lensandeyeseparation,display Understand howthehumanvisualsysteminterpretsdepth. to discussImagingAcrossApplications Where Industry andAcademiaMeet Where Industry past 32years. He hasbeenanactiveeducatorinthedigitalimagingcommunityfor the 2015recipientofElectronicImagingScientistYear Award. SPIE (1993),aKodakDistinguishedInventorwith44issuedpatentsand Engineering EmmyAwards. HeisaFellowofIEEE(1997), of theKodakC.E.K.MeesResearchAward andtheco-recipientoftwo ment ofRochesterInstituteTechnology (RIT).Heisatwiceco-recipient of KodakFellow. Currently, heisavisitingprofessorattheEEdepart- 33-year careeratKodakResearchlabs,heretiredin2016withtherank Majid Rabbanihas37yearsofexperienceindigitalimaging.Aftera forunderstandingthedeblurringconcepts. decomposition isnecessary course. Someknowledgeofdigitalfiltering(convolution)andfrequency professional andconsumerimaging,forensicwillbenefitfromthis various productsinadiversesetofapplications,suchasmedicalimaging, and/or applythetechniquesemployedindigitalimageprocessing Scientists, engineers,andtechnicalmanagerswhoneedtounderstand Intended Audience examples complementthetechnicaldescriptions. andastronomicalimaging.Image imaging, forensicsurveillance, video. Applicationsincludeconsumerandprofessionalimaging,medical hancement, noisereduction,sharpening,anddeblurringofstillimages This coursediscussessomeoftheadvancedalgorithmsusedincontrasten- • • • • • This coursewillenabletheattendeeto: Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 MajidRabbani,RochesterInstituteofTechnologyInstructor: Course Level:Advanced Course Length:4hours •8:00am–12:15pm Sunday 26January SC02: AdvancedImageEnhancementandDeblurring (restoration). Understand Wienerfilteringanditsvariationsforimagedeblurring removal. as sharpening,noiseremoval,andartifact ofvariousenhancement techniquessuch es toimprovetheperformance canbeutilizedinimagesequenc- Understand howmotioninformation bilateral filtering. Understand advancedtechniquesusedinimagenoiseremovalsuchas variations ofnonlinearunsharpmasking. Understand advancedtechniquesusedinimagesharpeningsuchas Compression (DRC). as CLAHE,PhotoshopShadows/Highlights,andDynamicRange Understand advancedalgorithmsusedforcontrastenhancementsuch 105

Short Courses ticipated #EI2020 electronicimaging.org . The point cloud often contains noise and holes . The point cloud often contains noise and holes ficiency purposes the point cloud may be downsampled. ficient way Describe fundamental concepts for point cloud processing. Describe fundamental concepts for point cloud Develop algorithms for point cloud processing. Incorporate point cloud processing in your applications. Understand the limitations of point cloud processing. Use industry standard tools for developing point cloud processing applications. in the design and development of compact cameras at HP and has more cameras at HP and development of compact in the design for consumer miniature cameras of experience developing than 15 years primaryproducts. His on sensor characterization, research interests focus analysis, and the optimization of camera image optical system design and optical sciences from the Matherson holds a Masters and PhD in quality. University of Arizona. SC04: 3D Point Cloud Processing Sunday 26 January 12:15 pm • 8:00 am – Course Length: 4 hours Course Level: Introductory Instructor: of Technology Gady Agam, Illinois Institute Non-member: $380 / Student: $120 Fee: Member: $355 / Learning Outcomes This course enables the attendee to: • • • • • Point clouds are an increasingly important modality for imaging with ap- plications ranging from user interfaces to street modeling for GIS. Range Azure Kinect sensors such as the Intel RealSense camera or Microsoft effectivecamera are becoming increasingly small and cost thus opening course is to review a wide range of applications. The purpose of this the necessary steps in point cloud processing and introduce fundamental algorithms in this area. processing in some Point cloud processing is similar to traditional image sense yet different nature of the data. due to the 3D and unstructured produces a 2D array In contrast to a traditional camera sensor which produces 3D point of samples representing an image, a range sensor samples representing a 3D surface. generally unorganized The points are and so are termed there is a need “cloud”. Once the points are acquired neighbors of a given to store them in a data structure that facilitates finding point in an ef hole filling algorithms. For which can be treated using noise filtering and computational ef In an attempt to further organize the points and obtain a higher level representation of the points, planar or quadratic surface patches can be ex- tracted and segmentation can be performed. For higher level analysis key points can be extracted and features can be computed at their locations. These can then be used to facilitate registration and recognition algorithms. for visualization and analysis purposes the point cloud may be Finally, triangulated. The course discusses and explains the steps described above and introduces the increasingly popular PCL (Point Cloud Library) and other open source frameworks for processing point clouds. Intended Audience Engineers, researchers, and software developers, who develop imag- ing applications and/or use camera sensors for inspection, control, and analysis. Gady Agam is an associate professor of computer science at the Illinois of the Visual Computing Lab at IIT He is the director Institute of Technology. which focuses on imaging, geometric modeling, and graphics applica- tions. He received his PhD from Ben-Gurion University (1999). machine vision, and consumer products. Prior to Microsoft, he par Prior to Microsoft, and consumer products. machine vision, Describe illumination, photons, sensor, and camera radiometry. sensor, Describe illumination, photons, for a given application. Select optics and sensor compact camera modules based on Understand the optics of application. of sensor and volume of Understand the difficulties in minimizing size camera modules. in compact camera Assess the need for per unit camera calibrations modules. Learn are tion, flare, and relative illumination calibrations how distor performed. are required. Review autofocus actuators and why per unit calibrations structured light, etc.). Review 3D imaging systems (stereo, time of flight, imaging systems. Understand the calibrations associated with 3D Understand how to perform the various calibrations typically done in distortion,compact camera modules (relative illumination, gain, actua- etc.). tor variability, Understand equipment required for performing calibrations. Compare hardware tradeoffs variation, its impact such as temperature quality. on calibration, and overall influence on final SC03: Optics and Hardware Calibration of Compact Camera Modules of Compact Camera and Hardware Calibration SC03: Optics Vision Applications Automotive, and Machine for AR/VR, JanuarySunday 26 8:00 am – 12:15 pm • Course Length: 4 hours Course Level: Intermediate Instructors: Uwe Artmann, GmbH & Co. KG, and Image Engineering Corporation Kevin J. Matherson, Microsoft Non-member: $380 / Student: $120 Fee: Member: $355 / Learning Outcomes attendee to: This course enables the • • • • • • • • • • • • calibration with minimal The emphasis of this course is camera hardware system Electronic camera and content on camera calibration of color. performance are determined by a combination of sensor characteristics, Smaller pixels, lens characteristics, and image processing algorithms. result in more part-to-partsmaller optics, smaller modules, and lower cost good image quality. variation driving the need for calibration to maintain This short course provides an overview compact of issues associated with and machine vision imaging modules used in mobile, AR/VR, automotive, mitigating those issues. applications as well as providing techniques for modules, and the The course covers optics, sensors, actuators, camera camera calibrations typically performed to mitigate issues associated with production variation of lenses, sensors, and autofocus actuators. For those interested in more depth on color camera calibration, see SC12: Color and Calibration in Compact Camera Modules for AR, Machine Vision, Automotive, and Mobile Applications. Intended Audience People involved in the design and image quality of digital cameras, mobile staff of manufacturers, managers of cameras, and scanners. Technical digital imaging projects, as well as journalists studying image and students technology. Uwe Artmann studied photo technology at the University of Applied Sciences in Cologne following an apprenticeship as a and finished with the German ‘Diploma Engineer’. He is now the CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF meas- urement in general. Kevin Matherson is a director of optical engineering at Microsoft Corporation working on advanced optical technologies for AR/VR, 106 EI 2020 SHORT COURSES AND DESCRIPTIONS AND DESCRIPTIONS COURSES SHORT EI 2020

Short Courses electronicimaging.org #EI2020 electronicimaging.org present. own anddonotexpresstheviewsoropinionsofhisemployer, pastand Disclaimer: Opinionsandcontentcoveredbythecoursearesolelyhis Amazon PrimeAir, workingonsensingsystemsforautonomousdrones. phases ofcameratuning.Currently, heisaprincipalimagingscientistat covering fromsystemspecification,ISPevaluation,selection,andall productsonthemarketatMicrosoft, led cameratuningofmostSurface International, workingfromlensdesigntofinalimagepipelinetuning;and speed documentscanneropticalimagingsystemdevelopmentatLexmark spectrophotometry, ledhigh lighting, colorformulation, andcolorimetry; and patentapplications.HehasworkedforMacbethCo.onstandard to psychophysicsandhumanvision.Hehasmorethansixtypatents color science,imagingprocessing,imagequalityevaluationsystems, digital imagingsystems,workingfromphotons,lenses,sensors,cameras, ofnumerousmarketproven vision. Hehasbeeninvolvedwiththedelivery yearswitha BSinoptics,MScolorscienceandPhDhuman thirty Luke Cuihasbeenhands-onworkingonimagingsystemsformorethan managers. Camera andimagingengineers,scientists,students,program Intended Audience cameras. techniques totheevaluationofcurrentcompetitivesmartphone fromfundamentalscienceand tive professionalsforthatprocessstarting and pronetoengineeringblunders.Thiscourseintendsprepareperspec- also atreacherousengineeringprocessthatisfullofpseudosciencepitfalls process. Itisaninterdisciplinaryfieldthatasmuchartscience. Digital cameraimagequalitytuningiscriticalinthedevelopment • • • • • • • • • • • • • • • • • • • This courseenablestheattendeetounderstand: Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 LukeCui,Amazon Instructor: Course Level:Intermediate Course Length:4hours •8:00am–12:15pm Sunday 26January SC05: DigitalCameraImageQualityTuning EI 2020SHORT COURSESANDDESCRIPTIONS Tracking cameras. smartphone ofbestperformance theperformance Application ofdeepneuralnetworkstotuning. Tuning forautonomousrobots. Subjective imagequalityandcompetitivebenchmarkingfortuning. Tone andcolorpreferencetuning. Denoising, noisemaskingandsubstitution. Tone curves,LUTSandtheirsignificance. andcorrection. De-mosaicking, colortransformation corrections. Camera geometricandradiometricdefectdistortion Scene classification,analysis,andmeasurement. 3A tuningprocessesandsteps. calibration. Camera permodulefactory Deep diveintothetuningprocess. Tunings toolsandtheirdevelopment. oftheimagequalitytuning process. Overview Objective imagequalitymetricsandstandardsapplicabletotuning. Capabilities oftheimageprocessingpipelines. Hardware capabilitiesbasedonspecification. The goalsofimagequalitytuning. techniques toproblemsinthehistor computergraphics,andar er vision,machinelearning, This coursepresentstheapplicationofrigorousimageprocessing,comput- • • • • • • amenable tocomputermethods Identifying problemsinthehistoryandinterpretationoffineartthatare This coursewillenabletheattendeeto: Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 DavidG.Stork Instructor: Course Length:Intermediate Course Length:4hours •8:00am–12:15pm Sunday 26January SC06: ComputerVisionandImageAnalysisofArt • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 Hemami, DraperLab University, ThrasyvoulosN.Pappas,Northwestern Instructors: andSheila Course Level:Intermediate Course Length:4hours •1:30-5:45pm Sunday 26January SC07: PerceptualMetricsforImageandVideoQuality noisseurship” forWileyPublishers. completing “Pixelsandpaintings:Foundationsofcomputer-assistedcon- morethan75,000citations.Heis including eightbooks,havegarnered symposium.His56patents,morethan200scholarlyworks, analysis ofart founding co-chairof“ElectronicImaging”Computervisionandimage societies,and and StanfordUniversities.Heisafellowofsixinternational variouslyatWellesleyhistory Colleges,andClark,Boston andSwarthmore computer science,statistics,neuroscience,psychology, andart andart has heldfacultypositionsinphysics,mathematics,electricalengineering, inphysics,and is agraduatefromMITandtheUniversityofMaryland rigorous computervisionandimageanalysistothestudyoffinear David G.Storkiswidelyconsideredapioneerintheapplicationof history.and art Students andscholarsinimaging,imagescience,computervision,ar Intended Audience tion, butinhighlynon-realistic,stylizedartworks. The courserevisitsclassicproblems,suchasimage-basedobjectrecogni- style), thetracingofar working methods,compositionalprinciples,stylometr such astheanalysisofbrushstrokesandmarks,medium,inferringar physics-constrained imagesinphotographs,videos,andmedicalimages, those addressedwidelyelsewhereincomputerimageanalysisappliedto Thecoursefocuses ontheaspectsoftheseproblemsthatareunlike art. drawings, murals,andothertwo-dimensionalworks,includingabstract Basics of art authentication Basics ofart Stylometr Lighting analysis Color analysis Brush strokeanalysis Perspective analysis restoration, retrieval,etc.)attemptto exploittheseproperties. system andhowcurrentapplications (imageandvideocompression, ofthehumanvisual Gain abasicunderstandingofthe properties y (quantificationofar tistic influence,andar tistic style)andar y andinterpretationoffinear t attributionandauthentication. tistic influence y (quantificationof tificial intelligence t paintings, t. He tists’ 107 t

Short Courses #EI2020 electronicimaging.org Fundamentals bioinspired image processing concepts and applications. The behavior of the human visual system (HVS). A new HSV based image arithmetic. Become familiar with: –  –  –  Provide some examples of the utilization of models of vision in the context of digital image quality assessment, enhancement, and representation. Understand the capabilities and limitations of full-reference, reduced reference, and no-reference metrics for image quality assessment, and why each might be used in a particular application. Gain hands-on experience in building developed biological mecha- nisms to optimize image processing algorithms. Discuss various research solutions for improving current image process- ing algorithms. as co-chair of the 2005 SPIE/IS&T Electronic Imaging Symposium and Electronic Imaging Symposium the 2005 SPIE/IS&T as co-chair of Electronic on Human Vision and SPIE/IS&T Conference co-chair of the IS&T Journal co-editor-in-chief of the He is currently Imaging (1997-2018). and IS&T. a Fellow of IEEE, SPIE, Imaging. Pappas is of Perceptual a BSEE from the University of Michigan Sheila S. Hemami received PhD from Stanford University (1992 and (1990), and a MSEE and nia Laboratories in Palo Alto, Califor 1994). She was with Hewlett-Packard the School of Electrical Engineering at Cornell in 1994 and was with From 2013 to 2016 she was professor University from 1995-2013. and chair of the department of electrical and computer engineering at Northeastern She is currently Director of Strategic University in Boston, MA. broadly research interests Opportunities at Draper Lab. Hemami’s Technical concern communication of visual information from the perspectives of both She was elected a Fellow of the signal processing and psychophysics. contributions to robust and perceptual image and IEEE in 2009 for her for Hemami has held various visiting positions, most video communications. at Ecole Polytechnique recently at the University of Nantes, France and numerous university Federale de Lausanne, Switzerland. She has received C. Holmes Nu’s and national teaching awards, including Eta Kappa She servedMacDonald Award. Publications Products as Vice-President and Services, Lecturer for the IEEE (2015-2016). She was a Distinguished editor-in-chief for the IEEE IEEE Signal Processing Society in 2010-11, was from 2008-10. She has held various technical on Multimedia Transactions leadership positions in the IEEE. SC08: Fundamentals BioInspired Image Processing Sunday 26 January • 1:30 - 5:45 pm Course Length: 4 hours Course Level: Intermediate Instructor: University of Sos Agaian, The Graduate Center and CSI, City (CUNY) New York and Prerequisites: Basic understanding of image processing algorithms statistics. Student: $120 Fee: Member: $355 / Non-member: $380 / Learning Outcomes most fundamental knowl- The main purpose of this course is to provide the what the human visual edge to the students so that they can understand applications. The course system is and how to use it in image processing enables the attendee to: • • • • • The rapid proliferation of hand-held mobile computing devices, coupled and visual data with the acceleration of the ‘Internet-of-Things’ connectivity, producing systems (embedded sensors, mobile phones, and surveillance cameras) have certainly contributed to these advances. In our modern we are producing, storing, and using digital information connected society, ever-increasing volumes of a digital image and video content. How can Gain an operational understanding of existing perceptually-based and of existing perceptually-based understanding Gain an operational of images/artifacts metrics, the types structural similarity on which they failure modes. work, and their current distortionUnderstand different models for applications, and modify or develop new metrics for specific how they can be used to contexts. Understand the differences between sub-threshold and supra-threshold artifacts, these two paradigms, and the differ- the HVS responses to response. ences in measuring that to select and interpret a particularUnderstand criteria by which metric for a particular application. and limitations of full-reference, limited- Understand the capabilities metrics, and why each might be used in a reference, and no-reference particular application. wish to gain an under- Image and video compression specialists who standing of how performance can be quantified. Engineers and scientists who wish to learn about objective image and video quality evaluation. Managers who wish to gain a solid overview and video of image quality evaluation. image processing. Students who wish to pursue a career in digital Intellectual property and patent attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging. 108 • • • • • quality that objective criteria for the evaluation of image examine We visual perception. Our primaryare based on models of emphasis is on or reference image fidelity, i.e., how close an image is to a given original to include structural image, but we broaden the scope of image fidelity no-reference and limited-reference metrics. also discuss equivalence. We examine a variety of applications with special emphasis on image We examine near-threshold perceptual metrics, and video compression. We (HVS) sensitivity to noise which explicitly account for human visual system by estimating thresholds above which the distortion is just-noticeable, visible distortions and supra-threshold metrics, which attempt to quantify when there are losses due encountered in high compression applications or also consider metrics for structural equivalence, to channel conditions. We whereby the original and the distorted image have visible differences but also take a We quality. both look natural and are of equally high visual close look at procedures for evaluating the performance quality metrics, of realistic distortions for including database design, models for generating for metric development various applications, and subjective procedures the state of the art and and testing. Throughout the course we discuss both directions for future research. Intended Audience • • • • • • Thrasyvoulos N. Pappas received a SB, SM, and PhD in electrical engineering and computer science from MIT (1979, 1982, and 1987, respectively). From 1987 until 1999, he was a member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the department of electrical and computer engineering at Northwestern which he joined in 1999. His research interests are in image University, and video quality and compression, image and video analysis, content- based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Pappas has served as Vice-President Publications, IEEE Signal Processing Society (2015-1017), on Image Processing (2010-12), editor-in-chief of the IEEE Transactions elected member of the Board of Governors Processing of the Signal Society of IEEE (2004-06), chair of the IEEE Image and Multidimensional Committee (2002-03), technical Signal Processing (now IVMSP) Technical program co-chair of ICIP-01 and ICIP-09, and co-chair of the 2011 IEEE on Perception and Visual Analysis. He has also served IVMSP Workshop EI 2020 SHORT COURSES AND DESCRIPTIONS AND DESCRIPTIONS COURSES SHORT EI 2020

Short Courses electronicimaging.org #EI2020 electronicimaging.org • • • • • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$245/Non-member:$270Student:$95 Kevin J.Matherson,MicrosoftCorporation ImageEngineeringGmbH&Co.KG,and UweArtmann, Instructors: Course Level:Intermediate Course Length:2hours •1:30-3:30 pm Sunday 26January SC09: ResolutioninMobileImagingDevices:Concepts&Measurement ences degreefromtheInstituteofControlSystems,RAS. Russian AcademyofSciences(RAS);andhisdoctorengineeringsci- PhD inmathematicsandphysicsfromtheSteklovInstituteofMathematics, chanics (summacumlaude)fromtheYerevan StateUniversity, his Armenia; Publishing Corporation).AgaianreceivedhisMSinmathematicsandme- ofElectricalandComputerEngineering (Hindawi ; andJournal Cybernetics Transaction onImageProcessing;IEEETransaction onSystems,Man,and ofElectronicImaging(SPIE,IS&T); IEEE includingtheJournal nine journals, RecognitionandImageAnalysis , andheisanassociateeditorfor Pattern and AAASFellow. of AgaianisanEditorialBoardMemberfortheJournal Research. OtherhonorsincludeIS&TFellow, IEEEFellow, SPIEFellow, Assurance DHS NationalCenterofAcademicExcellenceinInformation NSF CenterforSimulationVisualization&Real-Time Predictionandthe Award. Moreover, Agaianestablished two universityresearchcenters:the the SanAntonioBusinessJournal’s “TheTech FlashTitans-Top Researcher” of numerousawardsincludingUTSA’s InnovatoroftheYear Award; and, University inMedford,MA.Currently, atCUNY, Agaianistherecipient Texas, SanAntonio(UTSA).HehasbeenavisitingfacultymemberatTufts Flawn ProfessorofelectricalandcomputerengineeringattheUniversity City UniversityofNewYork (CUNY).Priortothis,AgaianwasthePeterT. Sos Agaianiscurrentlyadistinguishedprofessorofcomputerscienceatthe probability.and elementary rithms, proofsinpropositionalandfirst-orderlogic,discretemathematics, Students areexpectedtohaveasolidbackgroundintheanalysisofalgo- Prerequisites of imageprocessingishelpful. broad understandingofimageprocessing.Priorfamiliaritywiththebasics Engineers, scientists,students,andmanagersinterestedinacquiringa Intended Audience well astheassociatedcommercialimpactandopportunities. image processing,anddiscussthecurrenttrendsinthesetechnologiesas also presentasynopsisoftheexistingstate-of-the-artresultsinfield processing andtoprovidenovelinsightsintobio-inspiredintelligence.We the fundamentalsandlatestadvancesinthisresearchareaforimage of imageprocessingprocedures.Theobjectivethiscourseistohighlight vision havealwaysbeenanexcessivesourceofinspirationforthedesign we possiblymakesenseofallthisvisual-centricdata?Studiesinbiological EI 2020SHORT COURSESANDDESCRIPTIONS age spaceresolution. betweenanduses ofobjectspaceandim- Understand thedifference limits. Understand humanvisualsystemresolution andperceptualresolution contrasttransferfunction. Learn SiemensStarSFR. Learn slantededgespatialfrequencyresponse(SFR). Learn Understand pointspreadfunctionandmodulationtransferfunction. ing devicesandtheirproscons. Describe thebasicmethodsofmeasuringresolutioninelectronicimag- ing devices. usedtodescriberesolutionofelectronicimag- Understand terminology University ofArizona. quality. MathersonholdsaMastersandPhDinopticalsciencesfromthe optical systemdesignandanalysis,theoptimizationofcameraimage researchinterestsfocusonsensorcharacterization, products. Hisprimary than 15yearsofexperiencedevelopingminiaturecamerasforconsumer in thedesignanddevelopmentofcompactcamerasatHPhasmore machine vision,andconsumerproducts.PriortoMicrosoft,heparticipated Corporation workingonadvancedopticaltechnologiesforAR/VR, Kevin MathersonisadirectorofopticalengineeringatMicrosoft urement ingeneral. interest istheinfluenceofnoisereductiononimagequalityandMTFmeas- manufacturer ofallkindstestequipmentforthesedevices.Hisspecial at ImageEngineering,anindependenttestlabforimagingdevicesand and finishedwiththeGerman‘DiplomaEngineer’.HeisnowCTO Sciences inColognefollowinganapprenticeshipasaphotographer studied phototechnologyattheUniversityofApplied Uwe Artmann image technology. andstudentsstudying of digitalimagingprojects,aswelljournalists video cameras,andscanners.Technical ofmanufacturers, managers staff ation ofimagequalityelectroniccameras(regardlessapplication), Managers, engineers,andtechniciansinvolvedinthedesignevalu- Intended Audience choice, lighting,sensorresolution,andpropermeasurementtechniques. pects ofmeasuringresolutioninelectronicimagingdevicessuchastarget standards usedfortheevaluationofspatialresolution,andpracticalas- ers thebasicsofresolutionandimpactsimageprocessing,international impact ofimageprocessingonthefinalsystemresolution.Thecoursecov- resolution methodsusedtoevaluateelectronicimagingdevicesandthe ofspatial Thiscourseisanoverview camera, itcanalsointroduceartifacts. processing can,insomecases,improvetheresolutionofanelectronic in devicessuchasdigitalandmobilephonecameras.Whileimage image processingimpacttheoverallresolutionandqualityachieved systems. Componentsofanimagingsystemsuchaslenses,sensors,and Resolution isoftenusedtodescribeimagequalityofelectronicimaging consumer, andmobileapplications. wanting tocharacterizecamerasinAR/VR,automotive,machinevision, the near-infrared/infraredspectralregions.Thecourseisofinteresttothose resolution inwideangleslensesaswellmeasurementof This classisanupdateofour2019courseandaddsmeasurement • • • • • • • • • • • • • Understand through-focusMTF. Learn aboutimpactoffieldcurvature. aboutSFRmeasurementofwideanglelenses. Learn star SFR. onslantededgeandSiemens Understand theimpactoflargedistortion resolution. Specifying centerversuscorner Understand thepracticalconsiderationswhenmeasuringreallenses. Understand measurementofcameraresolution. ofresolutionmeasurements. cascadeproperties Learn Appreciate RAW vs.processedimageresolutionmeasurements. measurementoflensresolutionandsensorresolution. Learn near-infrared resolutionmeasurements. Understand targets,lighting,andmeasurementsetupforvisible Understand practicalissuesassociatedwithresolutionmeasurements. Describe theimpactofimageprocessingfunctionsonspatialresolution. 109

Short Courses #EI2020 electronicimaging.org Identify the critical parameters and their impact on display color Identify the critical parameters quality for smartphones, notebooks, desktops, LCD TVs, and tablets, projectors. Compare color performance for various LCD modes like and limitations FFS. IPS, MVA, for HDR displays and wide gamut Understand the critical factors displays. modulation and the Understand the advantages of the LED backlight for QLED technology. principles of quantum dot gamut enhancement highlight its depend- Select the optimal color model for a display and ency on display technology. ICC profile and Understand the use of the color model for the display the implication for the color management. an LCD screen and Follow a live calibration and characterization of projector used in the class, using tools varying calibrator to from visual instrument based ones. problems of color Apply the knowledge from the course to practical optimization for displays. SC11: Color Optimization for Displays SC11: Color JanuarySunday 26 1:30 - 3:30pm • 2 hours Course Length: Course Level: Intermediate Instructor: Inc Gabriel Marcu, Apple, Non-member: $270 / Student: $95 Fee: Member: $245 / Benefits attendee to: This course enables the • • • • • • • • for various display This course introduces color optimization techniques LCD, LcoS), DLP, types (LCDs, plasma, OLED, QLED, and projection: TV screens. Factors such and ranging from mobile devices to large LCD luminance level (including HDR), dynamic/static contrast as technology, correction, gray ratio (including local dimming), linearization and point, response time, tracking, color gamut (including wide gamut), white calibration, and characterization color model, viewing angle, uniformity, displays are presented. are discussed and color optimization methods for Learning Outcomes problems of color optimi- Apply the knowledge from the course to practical zation for displays. Intended Audience Engineers, scientists, managers, pre-press professionals, and those confront- ing display related color issues. Gabriel Marcu is senior scientist at Apple, Inc. His achievements are in color reproduction on displays and desktop printing (characterization/ calibration, halftoning, gamut mapping, ICC profiling, HDR imaging, He holds more than 80 issued patents in these color conversion). RAW areas. Marcu is responsible for color calibration and characterization of Apple desktop display products. He has taught seminars and courses on Europe. SPIE, and SID conferences and IMI color topics at various IS&T, He was co-chair of the 2006 SPIE/IS&T Electronic Imaging Symposium Color and CIC11; he is co-chair of the Electronic Imaging Symposium’s Processing, and Applications conference. Imaging: Displaying, Hardcopy, Marcu is an IS&T and SPIE Fellow. Understand current methods for objective image quality evaluation. Understand current methods Explain the difference imaging performance between and image quality. performanceDescribe why standard might differ methods with markets. approaches for evaluating wide Field-of-View Identify challenges, and (FOV) cameras. e.g., in multi-cam- Quantify and mitigate sources of system variability, era systems. 110 SC10: Image Quality: IndustrySC10: Image and for Mobile, Automotive, Standards Applications Machine Vision JanuarySunday 26 1:30 - 5:45 pm • Course Length: 4 hours Course Level: Introductory/Intermediate Instructors: Associates, and Peter Burns, Don Williams, Image Science Burns Digital Imaging Non-member: $380 / Student: $120 Fee: Member: $355 / Learning Outcomes attendee to: This course enables the • • • • • start quality methods, as developed by discussing objective image We have been adapted for image capture systems. Several of these methods e.g., automotive (ADAS) and machine-vision in emerging standards for, how and why imaging performance describe methods applications. We are being adopted. Most efforts rely on several ISO-defined methods, e.g., for color-encoding, image resolution, distortion, and noise. While several the image quality needs are different. measurement protocols are similar, 12288 standard for machine vision emphasizes For example, the EMVA and IEEE the CPIQ detector signal and noise characteristics. However, due to optical and P2020 automotive imaging initiatives include attributes video performance (e.g., distortion and motion artifacts). Intended Audience digital camera Image scientists, quality engineers, and others evaluating and scanner performance. The previous introduction to methods for imag- will be etc.) MTF, ing performance testing (optical distortion, color-error, useful. was with Kodak Don Williams, founder of Image Science Associates, signal and noise Research Laboratories. His work focuses on quantitative performance metrics for digital capture imaging devices and imaging fidelity issues. He co-leads the TC 42 standardization efforts on digital print and film scanner resolution (ISO 16067-1, ISO 16067-2), scanner dynamic range (ISO 21550), and is the editor for the second edition to digital camera resolution (ISO 12233). Peter Burns is a consultant working in imaging system evaluation, mod- eling, and image processing. Previously he worked for Carestream Health, Xerox, and Eastman Kodak. A frequent instructor and speaker at technical conferences, he has contributed to several imaging standards. He has taught imaging courses for clients and universities for many years. EI 2020 SHORT COURSES AND DESCRIPTIONS AND DESCRIPTIONS COURSES SHORT EI 2020

Short Courses electronicimaging.org #EI2020 electronicimaging.org studying imagetechnology. andstudents managers ofdigitalimagingprojects, aswelljournalists, (regardless ofapplication)andscanners.Technical ofmanufacturers, staff People involvedinthedesignandimagequalityofelectroniccameras Intended Audience application. inimprovingoverallimagequalityforaparticular to understandtradeoffs in understandingtheimagecaptureandcalibrationprocesscanbeused influences colortransformsandwhitebalance.Theknowledgeacquired areshownhowrawimage data aging pipeline.Inthecourse,participants capture andhowcalibrationcanimproveimagesusingatypicalcolor im- gettoexaminethebasiccolorimage and consumercameras.Participants camera imageforapplicationsinAR/VR,machinevision,automotive, basic methodsdescribingcapture,calibration,andprocessingofacolor that canbeusedtocalibratesuchvariations.Thiscourseprovidesthe to understandhowcolorcamerahardwarevariesaswellthemethods variation.Inorder todesignandtunecameras,itisimportant part-to-part quality. Therearemanyimplementationchallenges,thelargestbeing of thesestepshasasignificantinfluenceonthecolorandoverallimage balancing, tonalandcolorcorrection,sharpening,compression.Each flare compensation,shading,colordemosaicing,white within thecameraandcoversvariousstepslikedarkcurrentsubtraction, consumer, mobile,andautomotiveapplications,imageprocessingisdone Following capture,theimageneedstoberendered.FormostAR/VR, is theBayerfilter, adesignationtothearrangement ofthecolorfilters. often onered,green,andblue.Themostcommonconfiguration imagesinasingleacquisition, Color camerasproduceseveraldifferent • • • • • • • • • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$245/Non-member:$270Student:$95 Kevin J.Matherson,MicrosoftCorporation ImageEngineeringGmbH&Co.KG,and UweArtmann, Instructors: Course Level:Introductory Course Length:2hours •3:45-5:45pm Sunday 26January Machine Vision,Automotive,andMobileApplications SC12: ColorandCalibrationinCompactCameraModulesforAR, EI 2020SHORT COURSESANDDESCRIPTIONS Learn howflarecompensationisdoneinelectroniccameras. colors,etc.). tone, memory abouttheimpactoftonereproduction onperceivedcolor(skin Learn controlling itforcolor. of abouttheimpactsofpixelsaturation andtheimportance Learn remove itsunwantedeffects. Understand howchromaticaberrationsimpactcolorandto calibrations. associatedwithcolorshadingandincorrect Understand artifacts and theimageprocessingused. Describe requiredcalibrationmethodsbasedonthehardwarechosen Understand howthecalibrationlightsourceimpactscalibration. Understand howspectralsensitivitiesandcolormatricesarecalculated. module calibration. betweenclasscalibrationandindividual Describe thedifferences Understand calibrationmethodsusedforcameramodules. application. Describe basicimageprocessingstepsforcolorcamerasbasedon impact overallimagequality. andhowsuchchoicescan tions andthetypeofcalibrationsperformed Understand how hardware choices in compact cameras impact calibra- ers or normal trichromats of different ages. trichromatsofdifferent ers ornormal observ- correct andimprovethevisualdiscrimination ofimagesbyaffected colour vision.Thoseinterestedinunderstanding theprinciplesofhowto withdefective trichromatsaswellobservers colour visionofnormal Colour engineers,scientistsanddesigners.Thosewhowishtounderstand Intended Audience perceiving colouredimages. in observers andaffected betweennormal used tosimulatethedifferences This isfollowedbythepresentationofsomeimageprocessingtechniques lifespan. concentratesonsimulatinghowvisionchangesduringthe The secondpart as welltheirlightanddarkadaptationprocesses. ofspatial,temporal,andcolourresolution isconsideredinterms observers trichromatsandaffected deficiencies. Thedifferencesbetweennormal byinheritedoracquiredcolour affected trichromatsandobservers normal betweencolourvisionin This isfollowedbyananalysisofthedifferences consequence oftheearlydevelopmentoragingvisualsystem. In particular, weexaminethechangesincolourvisionthattakeplaceasa intheperceptionofcolouracrosslifespan. ing individualdifferences derstanding howcolourvisionoperatesandfocusesonthecausesunderly- The firstpartofthecourseprovidesphysiologicalfundamentalstoun- ordefectivecolourvisionacrossthelifespan. withnormal observers and toolsthatcanbeusedtosimulatehowimagesmayperceivedby tive colourvisionandtodescribetheprinciplesofsomeexistingsoftware anddefec- The courseaimstoprovideageneralintroductionnormal • • • • The courseenablestheattendeeto: Objectives Learning Fee: Member:$245/Non-member:$270Student:$95 CaterinaRipamonti,CambridgeResearchSystems Instructor: Course Level:Intermediate Course Length:2hours •3:45-5:45pm Sunday 26January andDefective ColourVisionacrosstheLifespan SC13: Normal University ofArizona. quality. MathersonholdsaMastersandPhDinopticalsciencesfromthe optical systemdesignandanalysis,theoptimizationofcameraimage researchinterestsfocusonsensorcharacterization, products. Hisprimary than 15yearsofexperiencedevelopingminiaturecamerasforconsumer in thedesignanddevelopmentofcompactcamerasatHPhasmore machine vision,andconsumerproducts.PriortoMicrosoft,heparticipated Corporation workingonadvancedopticaltechnologiesforAR/VR, Kevin MathersonisadirectorofopticalengineeringatMicrosoft urement ingeneral. interest istheinfluenceofnoisereductiononimagequalityandMTFmeas- manufacturer ofallkindstestequipmentforthesedevices.Hisspecial at ImageEngineering,anindependenttestlabforimagingdevicesand and finishedwiththeGerman‘DiplomaEngineer’.HeisnowCTO Sciences inColognefollowinganapprenticeshipasaphotographer studiedphototechnologyattheUniversityofApplied Uwe Artmann Simulate howvisionchangesduringthelifespan. colourvision. andaffected betweennormal Appreciate thedifference colour. inperceiving aboutthecausesunderlyingindividual differences Learn colourvisionoperates. Understand hownormal 111

Short Courses #EI2020 electronicimaging.org Describe fundamental concepts in 3D imaging. from 2D images. Develop algorithms for 3D model reconstruction Incorporate camera calibration into your reconstructions. Classify the limitations of reconstruction techniques. Use industry standard tools for developing 3D imaging applications. Alessandro Rizzi is full professor and head of MIPSLab, departmentRizzi is full professor and Alessandro of com- HDR, and related researches color, University of Milan. He puter science, Group, founders of the Italian Color He is one of the perceptual issues. Secretary vice president, topical 8, and IS&T Fellow and of CIE Division editor of the Journal America, associate editor of of the Optical Society of the Journal the Davies Medal of Electronic Imaging. In 2015 Rizzi received Society. from the Royal Photographic January 27, 2020 Monday, SC15: 3D Imaging Monday 27 January 12:45 pm • 8:30 am – Course Length: 4 hours Course Level: Introductory Instructor: Gady Agam, Illinois Institute of Technology Student: $120 Fee: Member: $355 / Non-member: $380 / Learning Outcomes This course enables the attendee to: • • • • • for 3D structure infer- The purpose of this course is to introduce algorithms 3D structure from ence from 2D images. In many applications, inferring 2D images can provide crucial sensing information. The course begins by reviewing geometric image formation and mathematical concepts algorithms for 3D model that are used to describe it, and then discusses reconstruction. problem in which we The problem of 3D model reconstruction is an inverse need to infer 3D informationobservations. based on incomplete (2D) We discuss reconstruction algorithms, which utilize information from multiple some intrinsic and extrinsic views. Reconstruction requires the knowledge of between camera parameters, and the establishment of correspondence discuss algorithms for determining camera parameters (camera views. We epipolar constraints calibration) and for obtaining correspondence using 3D imaging software between views. The course also introduces relevant components available through the industry standard OpenCV library. Intended Audience Engineers, researchers, and software developers, who develop imaging applications and/or use camera sensors for inspection, control, and analy- sis. The course assumes a basic working knowledge concerning matrices and vectors. Gady Agam is an associate professor of computer science at the Illinois of the Visual Computing Lab at IIT He is the director Institute of Technology. which focuses on imaging, geometric modeling, and graphics applica- tions. He received his PhD from Ben-Gurion University (1999). Measure the optical limits in acquisition and display; in particularMeasure the optical limits in acquisition and display; measure the scene dependent effects of optical glare. single and multiple- Compare the accuracy of scene capture using formats. exposures in normal and RAW that responds to the Engage in a discussion of human spatial vision retinal image altered by glare. and standards: tone- Engage in a discussion of current HDR TV systems rendering vs. spatial HDR methods. Explore the history of HDR imaging. 112 Caterina Ripamonti is a senior vision scientist at Cambridge Research scientist at Cambridge is a senior vision Caterina Ripamonti and an HonorarySystems Ltd. of Fellow at UCL Institute Senior Research author of nu- Hospital (UK). She is the and Moorfields Eye Ophthalmology properties of spatial and temporal on human colour vision, merous papers normal aspects of colour science related and defective vision, and applied also the co-author of the book Computational to human factors. She is Colour Science using MATLAB. (HDR) TheorySC14: High-Dynamic-Range and Technology Sunday 26 January pm • 3:45 - 5:45 Course Length: 2 hours Course Level: Intermediate Instructors: Imaging, and Alessandro Rizzi, John J. McCann, McCann University of Milano Non-member: $270 / Student: $95 Fee: Member: $245 / Learning Outcomes This course enables the attendee to: • • • • • Leonardo da Vinci made HDR paintings. Artists, , and image HDR TVs scenes. Today’s processors continue to capture/reproduce HDR QLED); and standards use developing technologies (LCD, LED, OLED, HybridLogGamma). The key to (HDR10, DolbyAtmos, TechnicolorHDR, Is it the display’s HDR is understanding the specific goal of the image. appearance? Since physical accuracy (radiances), or the reproduction’s accurate radiances. 1500, painters have reproduced HDR scenes ignoring (accurate reproduction) This course emphasizes measurements of physics shows limits caused by and psychophysics (visual appearance). Physics Psychophysics optical glare; HDR does not reproduce scene radiances. spatial-image-processing renders scene shows that human vision’s appearance. The course reviews successful HDR reproductions; limits of radiance repro- display technology and standards; appearance and duction; HDR TV’s luminance; and appearance models. HDR technology is a complex prob- lem controlled by optics, signal-processing, and visual limits. The solution depends on its goal: physical information appearance. or preferred Intended Audience Anyone interested in using HDR imaging: science, technology of displays, and applications. This includes students, color scientists, imaging research- ers, medical imagers, software and hardware engineers, photographers, cinematographers, and production specialists. Vision Research John McCann worked in, and managed, Polaroid’s color color constancy, Laboratory(1961-1996). He studied Retinex theory, from rod/cone interactions at low light levels, image reproduction, appear- ance with scattered light, cataracts, and HDR imaging. He is a Fellow of IS&T and the Optical Society of America (OSA); a past president of IS&T and the Artists Foundation, Boston; IS&T/OSA 2002 Edwin Land Medalist and IS&T 2005 Honorary Member. EI 2020 SHORT COURSES AND DESCRIPTIONS AND DESCRIPTIONS COURSES SHORT EI 2020

Short Courses electronicimaging.org #EI2020 electronicimaging.org • • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$245/Non-member:$270Student:$95 Kevin J.Matherson,MicrosoftCorporation ImageEngineeringGmbH&Co.KG,and UweArtmann, Instructors: Course Level:Intermediate Course Length:2hours •8:30–10:30 am Monday 27January Standards International SC17: CameraNoiseSourcesanditsCharacterizationusing the GlobalBigDataTechnologies researchcentre, bothatUTS. andApplicationsresearchcentre ber oftheInnovationinITServices seeking expertiseinthefieldofmachinelearning.Xuisalsoacoremem- coursesfororganisations DataLounge, whichprovidescustomisedshort analytics, andcomputervision.HeisthefounderdirectorofUTS data ing researcherinthefieldsofmachinelearning,deeplearning, andalead- Richard Xuisanassociateprofessorinmachinelearning or programmingwhoareinterestedinenteringthefieldofcomputervision. Professionals andacademicswithsomebackgroundincomputerscience Intended Audience approachtocomputervision. deeplearning modern-day to powerful to manuallydesignedfilters(forexample,scale-invariant-feature-transform) ments incomputervision:fromcameraanatomy(e.g.,calibration) treat- boththeclassicalandmodern course, ouraimistoteachparticipants visual infowithothermedia:e.g.,text-to-imageand/orimage-to-text.Inthis context,tocombine sequences ofimages(videos)andofteninthemodern ers to“see”and“understand”thecontentofstillimages(photos) techniquestohelpcomput- models. CVconcerns and MachineLearning applicationofArtificialIntelligence Computer Vision(CV)isanimportant • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 RichardXu,UniversityofTechnologyInstructor: Sydney Course Level:Intermediate Course Length:4hours •8:30am–12:45pm Monday 27January ComputerVision SC16: ClassicalandDeepLearning-based EI 2020SHORT COURSESANDDESCRIPTIONS sensors. Describe the3Dnoisemethodused formeasuringnear-IRandinfrared tional standards. howtomakedynamicrangemeasurementsbasedoninterna- Learn measurements. 1288, ISO14524,15739,IEEEP2020,andvisualnoise standards:EMVAMake noisemeasurementsbasedoninternational ing devices. howimageprocessingimpactsnoisesourcesinelectronicimag- Learn devices. Become familiarwithbasicnoisesourcesinelectronicimaging generating syntheticimages,andimage-to-text. including objectclassification,detection,facerecognition, Be abletoapplyOpenCVandTensorFlow 2.0tobuildCVmodels, playsincomputervision. Explain theroledeeplearning Explain whycomputervisionischallenging aspectsandtasksincomputervision. Understand severalimportant this visualinformation. understand howthehumansees,interprets,andmakesdecisionsbased on representations arecreatedforhumanconsumption,soitiscriticalus to industry. tasksinevery These manydifferent representations thatsupport Imaging isaverybroadfield.Weproducewiderangeofvisual • • • Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 Rogowitz,VisualPerspectives andColumbiaUniversity Bernice Instructor: Course Level:Introductory/Intermediate Course Length:4hours •8:30am –12:45pm Monday 27January SC18: PerceptionandCognitionforImaging University ofArizona. quality. MathersonholdsaMastersandPhDinopticalsciencesfromthe optical systemdesignandanalysis,theoptimizationofcameraimage research interestsfocusonsensorcharacterization, products. Hisprimary than 15yearsofexperiencedevelopingminiaturecamerasforconsumer in thedesignanddevelopmentofcompactcamerasatHPhasmore machine vision,andconsumerproducts.PriortoMicrosoft,heparticipated Corporation workingonadvancedopticaltechnologiesforAR/VR, Kevin MathersonisadirectorofopticalengineeringatMicrosoft urement ingeneral. interest istheinfluenceofnoisereductiononimagequalityandMTFmeas- manufacturer ofallkindstestequipmentforthesedevices.Hisspecial at ImageEngineering,anindependenttestlabforimagingdevicesand and finishedwiththeGerman‘DiplomaEngineer’.HeisnowCTO Sciences inColognefollowinganapprenticeshipasaphotographer studiedphototechnologyattheUniversityofApplied Uwe Artmann technology. andstudentsstudyingimage digital imagingprojects,aswelljournalists cameras, andscanners.Technical ofmanufacturers,managers staff People involvedinthedesignandimagequalityofdigitalcameras,mobile Intended Audience izing noise. for noisecharacterization,andsimplehardwaretestsetupscharacter- standards age processingonthesenoisesources,theuseofinternational discusses commonnoisesourcesinimagingdevices,theinfluenceofim- (ADAS), AR/VR,consumer, andmachinevisionapplications.Thecourse “light intobyteout”compactcameramodulesusedautomotive ofnoisesourcesassociatedwith courseprovidesanoverview This short • • • • cognition toreal-worldimagingandvisualanalyticsapplications. Develop skillsinapplyingknowledgeabouthumanperceptionand semantics. Explore basiccognitiveprocesses,includingvisualattentionand by thehumanvisualsystem. Understand basicprinciplesofspatial,temporal,andcolorprocessing transfer curve. usingthephoton howtocompareimagesensorperformance Learn standards. usinginternational Predict systemlevelcameraperformance standards. Describe simpletestsetupsformeasuringnoisebasedoninternational Make noisemeasurementsusingthe3Dmethod. 113

Short Courses #EI2020 electronicimaging.org Become familiar with deep learning concepts and applications. Understand how deep learning methods work, specifically convolu- tional neural networks and recurrent neural networks. Gain hands-on experience in building, testing, and improving the performance networks using popular open-source utilities. of deep Describe an image quality lab and measurement protocols. measurement protocols. image quality lab and Describe an quality of a set of cameras. how to compare the image Understand Tuesday, January 28, 2020 Tuesday, SC20: Fundamentals of Deep Learning 28 Januaryam – 12:45 pm • 8:30 Tuesday Length: 4 hours Level: Intermediate Instructor: Raymond Ptucha, Rochester Institute of Technology Fee: Member: $355 / Non-member: $380 / Student: $120 Learning Outcomes This course enables the attendee to: • • • Deep learning has been revolutionizing the machine learning community winning numerous competitions in computer vision and pattern recognition. Success in this space spans many domains including object detection, classification, speech recognition, natural language processing, action recognition, and scene understanding. In many cases, results surpass the abilities of humans. Activity in this space is pervasive, ranging from aca- demic institutions to small startups This short to large corporations. course • • of this shortThe purpose compare that it is possible to course is to show imaging systems in a perceptually relevant the image quality of consumer generating a concise and Because image quality is multi-faceted, manner. relevant evaluative summary systems can be challenging. of photographic image quality of still and video imaging systems Indeed, benchmarking the but understands not only the capture device itself, requires that the assessor ob- for the system. This course explains how also the imaging applications methodologies are used to benchmark image jective metrics and subjective still image and video capture devices. The course quality of photographic attributes and the flaws that may degrade those reviews key image quality on various subjective evaluation methodologies attributes. Content touches - methodologies relying on existing stand as well as objective measurement is ISO, IEEE/CPIQ, and ITU. The course focus ards from sources such as so the emphasis is on the value of using on consumer imaging systems, and how to generate objective metrics that are perceptually correlated and subjective metrics. benchmark data from the combination of objective Intended Audience to learnImage scientists, engineers, or managers who wish more about cameras for various image quality and how to evaluate still and video and how a camera works applications. A good understanding of imaging is assumed. at Eclipse Optics Henrik Eliasson is a camera systems specialist working quality assessment, in Sweden. He has extensive experience in image at Sony Ericsson/Sony previously working as a camera systems engineer He has previously Mobile Communications and Axis Communications. run by IEEE, and a been a key contributor in the CPIQ initiative, now on photography standards. Swedish delegate to the ISO TC42 committee related areas, from He has published work in a broad range of camera and image sensor optical simulations to camera color characterization of SPIE. crosstalk investigations. Eliasson is a senior member Identify defects that degrade image quality in natural images and what component of the camera may be improved for better image quality. Be aware of existing image quality standards and metrics. Understand how to judge the overall image quality of a camera. Evaluate the impact various output use cases can have on overall im- age quality. 114 The human observerpercep- representations using actively processes visual of years. have evolved over millions mechanisms that tual and cognitive processing an introduction to these this tutorial is to provide The goal of knowledge can guide engineering and to show how this mechanisms, This course will provide represent data visually. decisions about how to foundation for approaching importanta fundamental perceptual in topics analysis, and data visualiza- visual feature quality, imaging, such as image with understanding early vision mechanisms, tion. The course will begin perception, cover importantsuch as contrast and color topics in attention and provide insights into individual differences, aesthetics, and memory, and emotion. Intended Audience application developers. Domain experts are Imaging scientists, engineers, application plays a pivotal role in today’s also welcome, since imaging - medicine, science, environment, telecommunica areas, including finance, art and design, augmented and virtualtions, sensor integration, reality, imaging systems from the and others. Students interested in understanding to attend, as well as perspective of the human user are also encouraged by our eye-brain anyone interested in how the visual world is processed system. Bernice is the Chief Scientist at Visual Perspectives, a consulting Rogowitz and universities to im- and research practice that works with companies through a better understand- prove visual imaging and visualization systems the Data Visualization and ing of human vision and cognition. She created where she is an instructor in the Design curriculum at Columbia University, Editors-in-Chief (with Applied Analytics Program, and is one of the founding Thrasyvoulos Pappas) of the new IS&T Journal Imaging, which of Perceptual perception/cognition and publishes research at the intersection of human received her BS in experimental psychology from Rogowitz imaging. Dr. and a PhD in vision science from Columbia University, Brandeis University, was a post-doctoral Fellow in the Laboratory at Harvard for Psychophysics she was a scientist and research manager For many years, University. Her work includes fundamen- Research Center. Watson at the IBM T.J. tal research in human color and pattern perception, novel perceptual semantics, and human- approaches for visual data analysis and image in medical, financial, centric methods to enhance visual problem solving and scientific applications. She is the founder, and past chair of the IS&T which has been a Conference on Human Vision and Electronic Imaging, vital part of the imaging community for more than 30 years. Rogowitz is of the IEEE. In 2015, a Fellow of IS&T and SPIE, and a Senior Member and was cited as a “leader she was named the IS&T Honorary Member, in defining the research agenda for human-computer interaction in imag- ing, driving technology innovation through research in human perception, cognition, and aesthetics.” SC19: Camera Image Quality Benchmarking Monday 27 January • 10:45 am – 12:45 pm Course Length: 2 hours Course Level: Intermediate Instructor: Henrik Eliasson, Eclipse Optics Fee: Member: $245 / Non-member: $270 / Student: $95 Learning Outcomes This course will enable the attendee to: • • • • EI 2020 SHORT COURSES AND DESCRIPTIONS AND DESCRIPTIONS COURSES SHORT EI 2020

Short Courses electronicimaging.org #EI2020 electronicimaging.org • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 Engineering, GmbH&Co.KG EricWalowit,Instructors: consultant,andDietmarWueller, Image Course Level:Intermediate Course Length:4hours Tuesday •8:30am–12:45pm 28January SC21: ProductionLineCameraColorCalibration active memberofhislocalIEEEchapterandFIRSTroboticsorganizations. structor, ChairoftheRochesterareaIEEESignalProcessingSociety, andan ofSTEMeducation, anNVIDIAcer porter Award.the 2014BestRITDoctoralDissertation Ptuchaisapassionatesup- NSF GraduateResearchFellowship(2010)andhisPhDresearchear aPhDincomputerscience from RIT(2013).Raywasawardedan earned aMSinimagesciencefromRIT.in electricalengineering.Heearned He withaBSincomputerscienceand graduated fromSUNY/Buffalo computational imagingalgorithmsandwasawarded32U.S.patents.He research scientistwithEastmanKodakCompanywhereheworkedon Ptuchawasa and robotics,withaspecializationindeeplearning. Technology. computervision, Hisresearchincludesmachinelearning, atRochesterInstituteof director oftheMachineIntelligenceLaboratory Raymond Ptuchaisanassociateprofessorincomputerengineeringand andascriptinglanguageishelpful. machine learning Priorfamiliaritywiththebasicsof broad understandingofdeeplearning. Engineers, scientists,students,andmanagersinterestedinacquiringa Intended Audience network, andmodificationsforimprovedper perfor with tipsandtechniquesforcreatingtrainingdeepneuralnetworksto how tousethemwithframeworksarediscussed.Thesessionconcludes models.Anover state-of-the-art Institute, wedemonstrateusagewithpopularopen-sourceutilitiestobuild providedbyNVIDIA’scomplex systems,ahands-onportion DeepLearning frameworks. Afterunderstandinghowthesenetworksareabletolear ing, rangingfromwritingC++codetoeditingtextwiththeuseofpopular withdeeplearn- There isanabundanceofapproachestogettingstarted many applicationssuchasautonomousdriving. in ing, andasthistechnologyisdescribed,indicatingtheireffectiveness gies. Examplesaregivendemonstratingtheirwidespreadusageinimag- fromcompetingtechnolo- they evolvedovertheyears,andhowdiffer coursedescribeswhatdeepnetworksare,how recent events.Thisshort sequencesandforgetmore modules, isabletobothrememberlongterm Term RNNframework,LongShort networks. Thebestperforming Memory solved featuresandclassifiers.RNNsinjecttemporalfeedbackintoneural fashion, givingasubstantialadvantageovermethodsusingindependently low-level visualfeaturesandclassifierssimultaneouslyinasuper frameworks.CNNsareend-to-end,learning open-source deeplearning attendees hands-ontrainingonhowtobuildcustommodelsusingpopular networks (CNNs)andrecurrentneural(RNNs),thengives encompasses thetwohottestdeeplear EI 2020SHORT COURSESANDDESCRIPTIONS software technologyandproducts. Evaluate currentcolorimetriccamera characterizationhardwareand and nonlineartechniques. Solve forcolorimetriccameratransfor target-basedandspectral-basedcameracharacterization. Perform impact ofcolorcalibrationonimagequalityandmanufacturingyield. Understand theneedforcameracolorimetriccharacterizationand m classificationonimager view ofpopularnetworkconfigurationsand y, ofatrained assessingtheperformance ning fields:convolutionalneural ms andbuildprofilesusinglinear tified DeepLear formance. ning Institutein- vised n ned as required. concepts, areintroduced ing blockchains,includingbasiccryptographic backgroundinthetechnologies underly- of transactionledgers.Necessary a distributed,secure,andtamper-resistantframeworkforthemanagement application setting,thecourseillustratesconstructionofblockchains as range ofdiversebusinessprocessesandapplications.Usingaconcrete technologythathasthepotentialtodisrupta emerged asarevolutionary This courseintroducesattendeestoblockchains,whichhaverecently • • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$245/Non-member:$270Student:$95 UniversityofRochester GauravSharma, Instructor: Course Level:Introductory Course Length:2hours Tuesday •3:15-5:15pm 28January SC22: AnIntroductiontoBlockchain activities. inseveralotherstandardization ISO/TC42/WG18 andalsoparticipates member ofIS&T, representativefor DGPH,andECIheisthe German suppliers fortestequipmentdigitalimagecapturedevices.Wuellerisa Cologne. HeisthefounderofImageEngineering,oneleading Dietmar Wueller studiedphotographicsciencesfrom1987to1992in ICC, ISOTC42,IS&T, andCIEJTC10. program (1985),concentratingincolorscience.Walowit isa member of hardware andsoftwarecompany. HegraduatedfromRIT’s ImageScience He isthefounder(retired)ofColorSavvySystems,acolormanagement and imageprocessingpipelinesfordigitalphotographicapplications. Eric Walowit’s interestsareincolormanagement,appearanceestimation, line qualityassurance. cessing pipelinedevelopment,imagequalityengineering,andproduction- Engineers, projectleaders,andmanagersinvolvedin-cameraimagepro- Intended Audience software. the basisneededtoimplementadvancedcolorcorrectionincamerasand generation, andmatchingfromcapturetodisplay. Thiscourseprovides end-to-end processofspectralcameracharacterization,transform ofthe mensional colorlookuptables.Alivedemonstrationisperformed methodsincludingmultidi- colortransform of naturalobjects,andmodern multispectral LEDlightsources,insitumeasurementsofspectralradiances solutions using current technology are presented including monochromators, colorimetric transformsandprofilesaredetailedstep-by-step.State-of-the-art in traditionalapproachesarediscussed.Methodologyforbuildingcamera and theimpactongeneralimagequalityarefirstreviewed.Knownissues andpractice.Theneedfor camera characterizationandcalibration theory This coursecoverstheprocessofcolorimetriccameracharacterizationin • Cite exampleapplicationsofblockchains. applications. Explain theutilityandapplicabilityofblockchainsindiverse Describe theconceptsofproof-of-workandproof-of-stake. their prosandcons. Distinguish betweencentralizedanddistributedledgershighlight tampering. Explain howtheblockchainconstructionprovidesresistanceagainst generation,andmatchingfromcapturetodisplay.transform inhands-onspectralcameracharacterization, Participate 115

Short Courses tificial t search #EI2020 electronicimaging.org tificial Intelligence ning and describe the specifics of several tificial neural networks). tificial intelligence. It emphasizes fast and smar Become familiar with AI concepts and applications. Understand machine lear prominent machine-learning methods (e.g., SVMs, decision trees, Bayes nets, and ar Gain hands-on experience in building, and improving, the performance of AI methods tailoring robotics and vision application. Discuss various research solutions for improving current AI algorithms. tificial intelligence is a research field that studies how to realize intelligent heuristics, thoughtful ways to represent knowledge, and incisive techniques that support rational decision-making. Application areas will include image processing and robotics. SC24: Imaging Applications of Ar 29 January pm • 8:30 am – 12:45 Wednesday Course Length: 4 hours Course Level: Introductory/Intermediate Instructor: University of Sos Agaian, The Graduate Center and CSI, City (CUNY) New York Student: $120 Fee: Member: $355 / Non-member: $380 / Learning Outcomes most fundamental The main purpose of this course is to provide the what ar knowledge to the attendees so that they can understand applications. This intelligence (AI) is and how to use it in image processing course enables attendees to: • • • • Ar to make The fundamental goal of AI is human behaviors on a computer. This a computer that can learn, plan, and solve problems independently. course aims to give an overview and an of some basic AI algorithms understanding of the possibilities and limitations of AI. This is an introduc- toryon ar course Intended Audience engineers, statisticians, imaging scientists, mathematicians, Computer and students in those developers, and system and software program managers, multidisciplinarytance of using in exploring the impor fields interested con- and within cognitive science, a fundamental cepts, questions, and methods algorithms for computational necessary field to build novel mathematical systems. is a multilingual a polymath and visionary, Mónica López-González, multidisciplinary artist, public entrepreneur, cognitive scientist, educator, Using her original cross-disciplinary work in research and writer. speaker, imagination, learning, intelligence, and human the science of creativity, merges the cognitive, brain, behavioral, social, López-González uniquely business strategy and artisticand health sciences with acumen to integrate like cognition and behavioral insights, into artificial human-centered factors, and competitive business solutions. She’s the intelligence development chief executive & science-art Productions, a officer of La Petite Noiseuse unique consulting firm at the forefront of innovative science-art integration. López-González holds BAs She is also faculty at Johns Hopkins University. in cognitive science, all in psychology and French, and a MA and PhD and a Certificate of Art in photography from from Johns Hopkins University, Maryland Institute College of Art. She held a postdoctoral fellowship in the member of HVEI. Johns Hopkins School of Medicine. She is a committee January 29, 2020 Wednesday, Identify the major, yet pressing, failures of contemporary autonomous Identify the major, intelligent systems. and necessaryUnderstand the challenges of implementation of mindset needed for integrative, multidisciplinary research. behavioral sciences, par- Review the latest findings in the cognitive and ticularly learning, emotion attention, problem solving, decision-making, perception, and spontaneous creative artistic thinking. and behavioral sciences Explain how relevant findings in the cognitive apply to the advancement of efficient and autonomous intelligent systems. Discuss various research solutions for improving current computational frameworks. 116 Intended Audience interested in understanding students, and managers Engineers, scientists, in a variety how they can be useful are constructed and how blockchains an overview The course includes of business processes. of necessary back- ground information, such as cryptographic tools utilized in the blockchain; concepts is not required. prior familiarity with these Gaurav Sharma electrical and computer engineering is a professor of at the University of Rochester where his research and of computer science learning,span data analytics, machine computer vision, color imaging, and bioinformatics. experience in developing and ap- He has extensive in these areas. Prior to joining the University plying probabilistic models project leader at the Xerox he was a principal scientist and of Rochester, for several companies on he has consulted Innovation Group. Additionally, He vision and image processing algorithms. the development of computer and has authored more than 190 peer-reviewed holds 51 issued patents of the “Digital Color Imaging Handbook” publications. He is the editor published by CRC Press. He currently serves for the as the Editor-in-Chief Processing and previously served on Image as the Editor- IEEE Transactions in-Chief for the SPIE/IS&T Journal Imaging (2011-2015). of Electronic IEEE, and SPIE. Sharma fellow of IS&T, is a and the ArtsSC23: Using Cognitive and Behavioral Sciences in Artificial Intelligence Research and Design 28 January- 5:15 pm • 3:15 Tuesday Course Length: 2 hours Course Level: Introductory/Intermediate Instructor: Mónica López-González, La Petite Noiseuse Productions Student: $95 Fee: Member: $245 / Non-member: $270 / Learning Outcomes This course enables the attendee to: • • • • • An increasing demand of machine learning systems and autonomous research is to create human-like intelligent machines. Despite the current surge of sophisticated computational systems available, from natural lan- guage processors and pattern recognizers to surveillance drones and self- in regards driving cars, machines are not human-like, most fundamentally, to our capacity to integrate past with incoming multi-sensory information create an accu- and creatively adapt to the ever-changing environment. To rate human-like machine entails thoroughly understanding human perceptu- and The complexity of the mind/brain al-cognitive processes and behavior. its cognitive processes necessitates that multidisciplinary expertise and lines of research must be brought together and combined. This introductory to intermediate course presents a multidisciplinary perspective about method, data, and theory and the arts from the cognitive and behavioral sciences evermore imperative in artificial intelligence research and design. The goal of this course is to provide a theoretical framework from which to build highly efficient and integrated cognitive-behavioral-computational models to advance the field of artificial intelligence. EI 2020 SHORT COURSES AND DESCRIPTIONS AND DESCRIPTIONS COURSES SHORT EI 2020

Short Courses electronicimaging.org #EI2020 electronicimaging.org fromcamerasystemstobiometricauthentication. Theaim smartphone, identity. Theseareallenabledbythe machine visioncapabilitiesofthe applications, rangingfromproductinspectionthroughtofinanceand onsecure hasthepotentialtohaveaprofoundeffect The smartphone • • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$245/Non-member:$270Student:$95 AlanHodgson,HodgsonConsultingLtd. Instructor: Course Level:Introductory Course Length:2hours Wednesday •3:15-5:15pm 29January Imaging forSecureApplications SC25: Smartphone Control Systems,RAS. (RAS); andhisdoctorofengineeringsciencesdegreefromtheInstitute from theSteklovInstituteofMathematics,RussianAcademySciences the Yerevan StateUniversity, hisPhDinmathematicsandphysics Armenia; received hisMSinmathematicsandmechanics(summacumlaude)from and ComputerEngineering(HindawiPublishingCorporation).Agaian Transaction ofElectrical ; andJournal onSystems,Man,andCybernetics Electronic Imaging(SPIE,IS&T),IEEETransaction onImageProcessing;IEEE of includingtheJournal and heisanassociateeditorforninejournals, RecognitionandImageAnalysis , ofPattern Board MemberfortheJournal Fellow, IEEEFellow, SPIEFellow, andAAASFellow. AgaianisanEditorial AssuranceResearch.OtherhonorsincludeIS&T Excellence inInformation & Real-Time Prediction andtheDHSNationalCenterofAcademic two universityresearchcenters:theNSFCenterforSimulationVisualization Tech FlashTitans-Top Researcher”Award. Moreover, Agaianestablished Innovator oftheYear Award; and theSanAntonioBusinessJournal’s “The abilities. AgaianistherecipientofnumerousawardsincludingUTSA’s world asartificiallyintelligentdeviceswithhuman-likedataprocessing andunderstandthephysical enables computationaldevicestosee,learn, ing, behavioralscience,neuroscience,andstudiesinhumanperception, approach,combiningcomputerscience,electricalengineer- tidisciplinary Laboratory.Computational VisionandLearning Hisresearchgroup’s mul- Tufts UniversityinMedford,MA.Currently, atCUNY, Agaianleadsthe of Texas, SanAntonio(UTSA).Hehasbeenavisitingfaculty memberat T. FlawnProfessorofelectricalandcomputerengineering attheUniversity the CityUniversityofNewYork (CUNY).Priortothis,AgaianwasthePeter Sos Agaianiscurrentlyadistinguishedprofessorofcomputerscienceat probability.and elementary rithms, proofsinpropositionalandfirst-orderlogic,discretemathematics, Students areexpectedtohaveasolidbackgroundintheanalysisofalgo- basics ofimageprocessingishelpful. broad understandingofartificialintelligence.Priorfamiliaritywiththe Engineers, scientists,students,andmanagersinterestedinacquiringa Intended Audience EI 2020SHORT COURSESANDDESCRIPTIONS technology couldbringtoinspectionandauthentication. thatfuturedevelopmentsinsmartphone Understand theopportunities tions inthisspace. solu- Be abletoidentifythepowerandvulnerabilitiesofsmartphone mentations insecureapplications. Comprehend therisksandbenefitsassociatedwithsmartphoneimple- phones arenowmakingadifference. ofcurrentsecureprintvisiontoolsandwheresmart- Have anoverview in inspectionandauthentication. imaging Evaluate thecurrentandpotentialapplicationsofsmartphone the InstituteofPhysicsandTheRoyalPhotographicSociety. to thisindustry. HodgsonisapastpresidentofIS&Tandfellowboth technology years hehasbeeninvestigatingtheapplicabilityofsmartphone ant, teachingcoursesatsecurityandimagingconferences.Overthelast5 consult- andasanexternal for thepast15years,bothwithinindustry asanimagephysicist.Hehasbeeninvolvedinsecuritydocuments industry Alan Hodgsonhas35years’experienceacrosstheprintandimaging ranging fromstudentsandengineerstomarketinnovatorsacademics. anaudience tothisapplication.Itaimsinform plication ofsmartphones secure applicationsindustry. Allthatisrequiredaninterestintheap- level courseinthatitassumesnoknowledgeof This isanintroductory • • • following perspectives: phone technologyinthebroadareaofsecurityapplications,from The courseisprimarilyintendedforthoseinterestedintheuseofsmart- Intended Audience These conceptsareillustratedthroughanumberofcasestudies. giants” andtheauthenticationindustriessuchascurrencyidentity. fromtheperspective ofboththe“tech benefits ofthesmartphoneplatform and wearableelectronics.Thisleadstoanappraisaloftherisks ofThings and therelevanceofassociatedtechnologiessuchasInternet finance, andidentity.We discusstheimplicationsofsomemarketstudies scape thenmovesintotheearlyimplementationsinproductinspection, byreviewingthetechnical,social,economic,andpoliticalland- It starts inspection, e-commerce,andmobileidentity. inareassuchasdocument perspective ofthesecureapplicationsindustry technology,of thiscourseistotakeafreshlookatsmartphone fromthe broad baseofprobabilisticinference andestimation,thecoursedevelops fromthe usingprobabilisticmodels.Starting timation formachinelearning The courseaimsatprovidingattendeesafoundationininferenceandes - • • • • • This courseenablestheattendeeto: Outcomes Learning Fee: Member:$355/Non-member:$380Student:$120 UniversityofRochester GauravSharma, Instructor: Course Level:Introductory Course Length:4hours •8:30am –12:45pm Thursday 30January SC26: IntroductiontoProbabilisticModelsforMachineLearning Thursday, 30,2020 January Camera systemsformachinevisionindocumentinspection asacarrierfore-commerceandelectronicID Smartphones Biometric imagingapplicationsinidentityandcommerce models in computer vision, machine learning, andimageprocessing. models incomputervision,machinelearning, Cite andexplainapplicationexamplesinvolvingtheuseofprobabilistic and imageprocessingproblems. Develop simpleapplicationsofprobabilisticmodelsforcomputervision expectation maximizationandhiddenMarkovModels. Describe howlatentvariablesandsequentialdependenceunderlie Maximum Likelihood(ML)detectionandestimationrules. Explain thebasisofMaximumAposterioriProbability(MAP)and as independence,Bayes’rule,andstationarity. Describe andintuitivelyexplainfundamentalprobabilisticconceptssuch 117

Short Courses #EI2020 electronicimaging.org 118 the treatment of specific techniques that underlie many current day machine that underlie many current of specific techniques the treatment review of con- covered include a learning Topics and inference algorithms. processes, Markov and IID processes, stochastic and probability from cepts (MAP) Maximum Aposteriori Probability and estimation, basics of inference - (ML), expectation maximization for ML estima and Maximum Likelihood and Markov and conditional random fields. tion, hidden Markov models, is to illustrate the use of models via concrete The pedagogical approach is introduced via a detailed toy example and then examples: each model actual application examples. illustrated via one or two Intended Audience and managers interested in understanding Engineers, scientists, students, are used in inference and parameter estimation how probabilistic models machine learning vision applications and computer problems in today’s to their own problems. Prior familiarity with and in applying such models with matrix vector operations are necessary for a basics of probability and although attendees lacking this background can thorough understanding, still develop an intuitive high-level understanding. Gaurav Sharma engineering is a professor of electrical and computer where his research and of computer science at the University of Rochester span data analytics, machine learning, computer vision, color imaging, and bioinformatics. and ap- He has extensive experience in developing to joining the University plying probabilistic models in these areas. Prior Xerox he was a principal scientist and project leader at the of Rochester, on he has consulted for several companies Innovation Group. Additionally, processing algorithms. He the development of computer vision and image than 190 peer-reviewed holds 51 issued patents and has authored more Imaging Handbook” publications. He is the editor of the “Digital Color published by CRC Press. He currently serves for the as the Editor-in-Chief Processing and previously served on Image as the Editor- IEEE Transactions in-Chief for the SPIE/IS&T Journal Imaging (2011-2015). of Electronic IEEE, and SPIE. Sharma fellow of IS&T, is a EI 2020 SHORT COURSES AND DESCRIPTIONS AND DESCRIPTIONS COURSES SHORT EI 2020

Short Courses Where Industry and Academia Meet to discuss Imaging Across Applications

GENERAL INFORMATION About IS&T

Registration The Society for Imaging Science and Technology Symposium Registration (IS&T)—the organizer of the Electronic Imaging Symposium Registration Includes: Admission to all technical sessions; coffee breaks; the Symposium—is an international non-profit dedi- Symposium Reception, exhibition, poster and demonstration sessions; 3D theatre; and sup- cated to keeping members and other imaging port of free access to all the EI proceedings papers on the IS&T Digital Library. Separate professionals apprised of the latest develop- registration fees are required for short courses. ments in the field through conferences, educa- Short Course Registration tional programs, publications, and its website. Courses are priced separately. Course-only registration includes your selected course(s), IS&T encompasses all aspects of imaging, with course notes, coffee breaks, and admittance to the exhibition. Courses take place in various particular emphasis on digital printing, elec- meeting rooms at the Hyatt Regency San Francisco Airport. Room assignments are noted on tronic imaging, color science, sensors, virtual the course admission tickets and distributed with registration materials. reality, image preservation, and hybrid imaging systems. Author/Presenter Information Speaker AV Prep Room: Conference Office IS&T offers members: Open limited hours during Registration Desk Hours. Visit Registration Desk for more • Free, downloadable access to more than information. 7,000 papers from IS&T conference proceedings via

Each conference room has an LCD projector, screen, lapel microphone, and laser pointer. www.ingentaconnect.com/content/ist General Information All presenters are encouraged to visit the Speaker AV Prep Room to confirm that their pres- • A complimentary online subscription to the entation and personal laptop is compatible with the audiovisual equipment supplied in the Journal of Imaging Science & Technology conference rooms. Speakers who requested special equipment prior to the request deadline or the Journal of Electronic Imaging are asked to confirm that their requested equipment is available. • Reduced rates on products found in the IS&T bookstore, including technical books, No shared laptops are provided. conference proceedings, and journal sub- Policies scriptions • Reduced registration fees at all IS&T Granting Attendee Registration and Admission sponsored conferences—a value equal IS&T, or their officially designated event management, in their sole discretion, reserves the right to accept or decline an individual’s registration for an event. Further, IS&T, or event to the difference between member and management, reserves the right to prohibit entry or remove any individual whether registered nonmember rates alone—and short courses or not, be they attendees, exhibitors, representatives, or vendors, who in their sole opinion • Access to the IS&T member directory are not, or whose conduct is not, in keeping with the character and purpose of the event. • An honors and awards program Without limiting the foregoing, IS&T and event management reserve the right to remove • Networking opportunities through active or refuse entry to any attendee, exhibitor, representative, or vendor who has registered or participation in chapter activities and con- gained access under false pretenses, provided false information, or for any other reason ference, program, and other committees whatsoever that they deem is cause under the circumstances. Contact IS&T for more information on these IS&T Code of Conduct/Anti-Harassment Policy and other benefits. The Society for Imaging Science and Technology (IS&T; imaging.org) is dedicated to ensuring a harassment-free environment for everyone, regardless of gender, gender iden- IS&T tity/expression, race/ethnicity, sexual orientation, disability, physical appearance, age, 7003 Kilworth Lane language spoken, national origin, and/or religion. As an international, professional organi- Springfield, VA 22151 zation with community members from across the globe, IS&T is committed to providing a respectful environment where discussions take place and ideas are shared without threat of 703/642-9090; 703/642-9094 fax belittlement, condescension, or harassment in any form. This applies to all interactions with [email protected] / www.imaging.org the Society and its programs/events, whether in a formal conference session, in a social setting, or on-line.

Harassment includes offensive verbal comments related to gender, sexual orientation, etc., as well as deliberate intimidation; stalking; harassing photography, recording, or postings; sustained disruption of talks or other events; inappropriate physical contact; and unwelcome

electronicimaging.org #EI2020 119 #EI2020 electronicimaging.org - - - dential and only and dential fi Where Industry Academia Meet and to discuss Imaging Across Applications Imaging Across to discuss nished its product, publication or distribution. fi es your agreement to be photographed or videotaped by IS&T in the in IS&T videotaped by or photographedbe to agreement your es fi cation fi ing electronic/digital photographs and or create worksvideos), derivative of these images ing electronic/digital and in any recordings media IS&T now known or later developed, for any legitimate IS&T marketing or promotional purpose. By registering for an IS&T event, I waive waive also I any writtencopy. any of or right recordings or theimages of theuse to approve inspector images, the of use the to related or from arising compensation other or royalties to right any IS&T harmless hold and indemnify defend, release, I registering, By materials. or recordings, from and against any damages claims, or from liability arising or related to the use of the but orincluding not to materials, invasion ofrecordings limited claims images, defamation, blurring, distortion, misuse, any or infringement, copyright or publicity of rights or privacy, of taking, in produced be or occur may that form composite in use or illusion optical alteration, reductionprocessing, or production of the sonal laser pointer represents acceptance user’s of liability for use of a non- IS&T-supplied laser pointer. Laser pointers in Class II and IIIa (<5 mW) are eye safe if power output is To verify registered participants and provide a measure of security, IS&T will ask will attendees verify participants a IS&T and measure provide of To registered security, to present a government issued Photo ID not at to up allowed badgespick are Individuals for attendees other than Further, themselves. registration to collect registration materials. attendees may not have some other person participate in their place at any related conference- activity. Such other individuals will be required to register on their own behalf participate. to Image a Person’s of and Use Capture for By I an registering event, full IS&T grant permission and/ to use, to store, IS&T capture, (includ technique recording visual and/or audio any by likeness or image my reproduce or Payment Method Registrants for paid elements of the event, who will donot be able notto complete Individuals their with registration. incomplete will provide registrations a method of payment, not be able to attend the conference until payment has been made. IS&T accepts MasterCard, American VISA, Express, Club, checksDiner’s Discover, and wire transfers. Onsite pay also Cash. can with registrations Policy Recording Digital Video, Audio, For copyright reasons, recordings of any kind are in presented prohibited materials the use nor without attendees.Attendeescapture not mayfellow or presenter the consent of the the presenter. from permission obtaining without room meeting any signi registration Your materials marketing IS&T in used be may video and photos Such business. normal of course items. promotional or other IS&T Safety Information/Policy Pointer Laser IS&T supplies tested and safety-approved laser pointers for all conference meeting rooms. For safety requests IS&T reasons, that presenters use provided laser Use pointers. of a per 120 propriate, propriate, including within conference talks, online exchanges, or the awarding of prizes. expected behavior are comply to immediately. harassing any stop to asked Participants Those participating in IS&T activities who violate these or IS&T’s Publications Policy a at without therefund membership may be sanctioned or expelled and/or the from conference If ofyou discretion are IS&T. being harassed, notice that someone else is being harassed, or have any other concerns, please contact the IS&T Executive Director or e-mail incident. con kept reports Please notethatall are immediately. [email protected] an reporting anyone against form any in retaliation know”; to those who “need with shared not be tolerated. will independent of the outcome, of harassment, incident Identi sexual sexual attention. Please note that the use imageryof sexual is language never and/or ap

General Information Where Industry and Academia Meet to discuss Imaging Across Applications

correct, but output must be verified because manufacturer labeling may not match actual output. Misuse of any laser pointer can lead to eye damage.

Underage Persons on Exhibition Floor Policy For safety and insurance reasons, no one under the age of 16 will be allowed in the exhibi- tion area during move-in and move-out. During open exhibition hours, only children over the age of 12 accompanied by an adult will be allowed in the exhibition area.

Unauthorized Solicitation Policy Unauthorized solicitation in the Exhibition Hall is prohibited. Any non-exhibiting manufacturer or supplier observed to be distributing information or soliciting business in the aisles, or in another company’s booth, will be asked to leave immediately.

Unsecured Items Policy Personal belongings should not be left unattended in meeting rooms or public areas. Unattended items are subject to removal by security. IS&T is not responsible for items left unattended.

Wireless Internet Service Policy At IS&T events where wireless is included with your registration, IS&T provides wireless ac- cess for attendees during the conference and exhibition, but does not guarantee full cover- age in all locations, all of the time. Please be respectful of your time and usage so that all attendees are able to access the internet. General Information

Excessive usage (e.g., streaming video, gaming, multiple devices) reduces bandwidth and increases cost for all attendees. No routers may be attached to the network. Properly secure your computer before accessing the public wireless network. Failure to do so may allow unauthorized access to your laptop as well as potentially introduce viruses to your computer and/or presentation. IS&T is not responsible for computer viruses or other computer damage.

Mobile Phones and Related Devices Policy Mobile phones, tablets, laptops, pagers, and any similar electronic devices should be silenced during conference sessions. Please exit the conference room before answering or beginning a phone conversation.

Smoking Smoking is not permitted at any event space. Most facilities also prohibit smoking in all or specific areas. Attendees should obey any signs preventing or authorizing smoking in speci- fied locations.

Hold Harmless Attendee agrees to release and hold harmless IS&T from any and all claims, demands, and causes of action arising out of or relating to your participation in the event you are registering to participate in and use of any associated facilities or hotels.

electronicimaging.org #EI2020 121 WWW.IMAGING.ORG A WEEK OF ADVANCED IMAGING PRINTING FOR FABRICATION INTERNATIONAL CONFERENCE & ICAI ON ADVANCED IMAGING 2020

Twenty-eighth CIC28 COLOR AND IMAGING CONFERENCE

CHIBA, JAPAN NOVEMBER 2-6

Sponsored by IS&T and the Japanese Federation of Imaging Societies (FIS): Imaging Society of Japan (ISJ) • Society of Photography & Imaging of Japan (SPIJ) • Institute of Image Electronics Engineers of Japan (IIEEJ) • Japanese Society of Printing Science & Technology (JSPST) • along with the Asian Symposium on Printing Technology (ASPT), Vision Society of Japan (VSJ), and the International Technical Exhibition on Image Technology and Equipment (ITE) electronicimaging.org #EI2020 electronicimaging.org G.IQSP-284 Azevedo, Roberto Aykac, DenizIRIACV-050 Aspiras, TheusH.IMAWM-087 Asma, EvrenCOIMG-194 Aslan, SelinCOIMG-043 Ashraf, NazimISS-213 Asari, VijayanK.IMAWM-087 Asadi Shahmirzadi,Azadeh UweAVM-148Artmann, BrunoCOIMG-377 Artacho, COIMG-059, Arguello, Henry Arai, Ryusuke COLOR-122 Aqeel, MarahIMAWM-301 Aoyama, SatoshiISS-104 Antensteiner, DorisIRIACV-071 Anghel, JoshVDA-374 Anderson, JamesD.VDA-388 Ananthanarayana, Tejaswini Ananpiriyakul, Thanawut S.HVEI-010 Allison, Robert Allebach, JanP. 3DMP-035, Al-fuqaha, AlaIMAWM-270 Alattar, AdnanM.MWSF-117 Alamgeer, SanaIQSP-066 Akopian, DavidMOBMU-275, Akhavan, Tara IQSP-067 Ahmad, RizwanCOIMG-005 Ahmad, KashifIMAWM-270, Aguilar Herrera,CamiloG. Agudelo, William COIMG-307 Agam, GadyCOIMG-344 Agaian, SosS.IPAS-062, Adsumilli, BaluIQSP-284 Adams, DanielSD&A-141 Abramov, SergiyK.IQSP-319 Abdou, MohammedAVM-296 Abbas, HazemAVM-110 AUTHOR INDEX MAAP-107 COIMG-307 FAIS-172 VDA-374 IQSP-373, MWSF-292 IQSP-322, IQSP-323, IQSP-314, IQSP-321, IMAWM-269, IMAWM-302, IMAWM-188, IMAWM-221, IMAWM-185, IMAWM-186, COLOR-353, IMAWM-184, COLOR-351, COLOR-352, COLOR-349, COLOR-350, COLOR-196, COLOR-283, COIMG-341, COLOR-195, MOBMU-335 MOBMU-276, MOBMU-334, IRIACV-074 COIMG-250, COIMG-264 IPAS-177, IPAS-180 A Bourdon, Pascal3DMP-003 Bouman, KatherineL. Bouman, CharlesA.COIMG-113, Bostan, EemrahCOIMG-112 Borrell, IsabelIQSP-373 Boroumand, MehdiMWSF-290 Bondarev, EgorIPAS-095, Bonatto, DanieleERVR-382 Boehm, EmilyHVEI-383 Bodempudi, SriTeja ERVR-223, Bocklisch, ThomasMOBMU-277 Blume, HolgerHVEI-234 Blakey, William HVEI-129 M.IMAWM-114Blakeslee, Bryan Blackwell, William J.COIMG-046 Birkbeck, NeilIQSP-284 Bilkova, ZuzanaIRIACV-052, Bhanushali, JagdishAVM-109 Bhanushali, DarshanRamesh Beziaeva, LiudmilaIPAS-064 Berchtold, Waldemar Benz, StephenC.IPAS-064 Benoit, IliciaSD&A-246 Benning, DominikMOBMU-206 Benjamin, KelseyERVR-339 Dario Benitez Restrepo,Hernan Bell, Tyler 3DMP-034 Behmann, NicolaiHVEI-234 Beghdadi, AzeddineIPAS-027 Becerra, HelardIQSP-167 Baveye, Yoann HVEI-132 Bauer, PeterIQSP-322,IQSP-323 Bauer, JacobR.COLOR-083 Baudin, EmilieIQSP-166 Batsi, SophiaHVEI-068 Bat Mor, OrelCOLOR-195 MichalVDA-389 Bartos, Barkowsky, MarcusHVEI-130 Barenholtz, ElanHVEI-268 Barcelo, StevenCOIMG-341 Bang, Yousun IQSP-314, Banchi, Yoshihiro SD&A-244 Bampis, ChristosHVEI-131 Bala, RajaCOIMG-357, Bakr, EslamAVM-296 Baker, PatrickSD&A-141 Babaei, Vahid MAAP-107 Baah, KwameCOLOR-238 COIMG-047, PLENARY-056 COIMG-249 IPAS-313 ERVR-224 VDA-389 AVM-257, IRIACV-014 MOBMU-207, MWSF-118 IQSP-168, IQSP-169 IQSP-321 ERVR-381 to discussImagingAcrossApplications Where Industry andAcademiaMeet Where Industry B Chesnokov, ViacheslavIPAS-182 Cheng, Yang IMAWM-186, Cheng, Wei-Chung COLOR-123, Chen, XinpingIMAWM-271 Chen, Wenhao MWSF-215 Chen, MichaelCOIMG-112 Chen, JianhangIMAWM-184, Chen, JianIRIACV-013 Chen, HomerH.COIMG-390 Chen, Chin-NingCOLOR-353 Chelini, ColleenERVR-363 Cheikh, FaouziAlaya Chapnik, PhilipCOIMG-046 Chang, SoonKeunISS-230 Chang, SeunghyukISS-105 Chang, Pao-Chi3DMP-002 Chandler, Talon COIMG-160 Chanas, LaurentIQSP-166, Chan, ManChiSD&A-098 Chakraborty, SujoyMWSF-078 Chakraborty, BasabduttaHVEI-209 Cha, SunghoIQSP-320 Caucci, LucaIPAS-310 Castañón, DavidCOIMG-294 MarceloM.IQSP-285 Carvalho, Carpenter, KatherineM.FAIS-173 Canessa, EnriqueA.SD&A-053 Callahan, DennisSD&A-403 Caldas, LuisaIMAWM-222 Cai, Yang ERVR-338 Cai, MingpengIMAWM-084 Cahall, ScottIQSP-240 StevenSD&A-403 Byrnes, MichaelIMAWM-085Byrne, T.Buzzard, Gregery COIMG-249 Butora, JanMWSF-289 SamuelERVR-363Burns, PeterD.IQSP-240 Burns, Burch, CristyIMAWM-085 Brunnström, KjellHVEI-090, Brundage, ElizabethSD&A-403 Brown, ScottIMAWM-086 Brown, MadellynA.COIMG-263 Brouwers, GuidoM.IRIACV-049 Brosch, NicoleIRIACV-071 Brookshire, CharlesCOIMG-355 Braun, AlexanderAVM-149 Bradley, BrettMWSF-024 Bovik, AlanC.IQSP-168 IMAWM-188 IQSP-317, IQSP-372 IMAWM-188, IMAWM-221 IRIACV-072, IRIACV-074 IPAS-027, IPAS-312, IQSP-370 HVEI-091 C Czeresnia Etinger, Isak Cwierz, HalinaCOLOR-259, Cui, Wei IMAWM-271 Cubero, LuisA.ISS-145 Cuadra, JeffersonA.COIMG-146 Creutzburg, ReinerMOBMU-205, Coudron, Inge3DMP-036 Costa, LucasCOIMG-264 OliverCOIMG-126, Cossairt, Corriveau, PhilipJ.HVEI-092 Comer, Tomas R.COIMG-264 Comer, COIMG-250, Mary Colligan, MichaelHVEI-092 Clouet, AxelMAAP-106 Clark, JamesJ.IQSP-067 Cicek, MuratcanMOBMU-303 Ciccarelli, MarinaERVR-337 Chunko, LaraERVR-339 Chubarau, AndreiIQSP-067 Chowdhury, Shwetadwip Choudhury, AnustupCOLOR-162, Choi, Wonho ISS-328 Choi, Sung-HoISS-103 Choi, JuneHVEI-209 Choi, EunbinMOBMU-332 Choi, BaekduCOLOR-349, Cho, SunghyuckISS-103 Cho, MinkiIQSP-314,IQSP-321 Chiu, GeorgeCOLOR-349, Chiracharit, Werapon IPAS-177, Chipman, RussellA.COIMG-263 Childs, HankVDA-376 Chevobbe, StephaneISS-115 Cheung, KaytonW. SD&A-098, Dalca, AdrianD.COIMG-047 Daikoku, ShoheiISS-104,ISS-272 D’Alessandro, PaulERVR-363 IMAWM-187 COLOR-260 MOBMU-336 MOBMU-331, MOBMU-333 MOBMU-278, MOBMU-304 MOBMU-276, MOBMU-277 MOBMU-254, MOBMU-275 MOBMU-252, MOBMU-253 MOBMU-206, MOBMU-232 COIMG-147, COIMG-306 COIMG-264, COIMG-305 COIMG-112 IQSP-214 COLOR-350, COLOR-351 IMAWM-302 COLOR-352, COLOR-353, COLOR-350, COLOR-351, IPAS-180 SD&A-140 D 123

Author Index AUTHOR INDEX

Daly, Scott J. COLOR-162, Fachada, Sarah ERVR-382 Ghani, Muhammad Usman COLOR-350, COLOR-351, IQSP-214 Fageth, Reiner IMAWM-183 COIMG-008 COLOR-352 Darvish Morshedi Hosseini, Falkenstern, Kristyn COLOR-197 Gigilashvili, Davit MAAP-033 He, Jiangpeng IMAWM-300 Morteza MWSF-077 Fang, Shengbang MWSF-217 Giruzzi, Filippo AVM-109 Hedlund, John HVEI-091 Davis, Patricia COLOR-353 Farias, Mylène C. IQSP-066, Godaliyadda, Dilshan AVM-255 Helbert, David 3DMP-003 Davis, William ISS-228 IQSP-167, IQSP-285 Goddard, James IRIACV-050 Hernandez-Morales, Emmanuelle Deegan, Brian M. AVM-001 Farnand, Susan FAIS-173, Gohar, Touqir IMAWM-270 ERVR-339 Deering, Amanda J. IMAWM-302 IQSP-039, IQSP-288 Goljan, Miroslav MWSF-077 Hicok, Gary PLENARY-153 de Gouvello, Alix ISS-329 Farrell, Joyce E. ISS-212 Goma, Sergio R. COIMG-358 Higuchi, Kokoro SD&A-055 De la Calle-Martos, Antonio ISS-231 Favalora, Gregg SD&A-403 Gómez-Merchán, Rubén ISS-231 Hillard, Luke ERVR-360 Delp, Edward J. COIMG-341, Feller, Michael AVM-258 Goncalves Ribeiro, Laura IPAS-096 Hines, Andrew IQSP-167 IMAWM-221 Feng, Yu ISS-273 Gong, Qi IQSP-372 Hirai, Keita MAAP-032 Demaree, Clark MWSF-216 Fenneteau, Alexandre 3DMP-003 Gotchev, Atanas P. IPAS-096 Hirakawa, Keigo COIMG-355 Denes, Gyorgy HVEI-233 Ferguson, Katy COLOR-353 Gove, Robert VDA-387 Ho, Chi-Jui (Jerry) COIMG-390 de Ridder, Huib HVEI-209 Fernandes, Victor M. AVM-201 Green, Jessica HVEI-384 Hochgraf, Clark AVM-257, Derrick, Keith AVM-298 Ferrell, Regina IRIACV-050 Green, Phil COLOR-121, IRIACV-014 Devadithya, Sandamali COIMG-294 Finlayson, Graham D. COLOR-237, COLOR-280 Hodgson, Alan MWSF-398 de With, Peter H. IPAS-095, COLOR-163 Greiwe, Daniel A. COIMG-264 Hödlmoser, Michael AVM-203 IPAS-313, IRIACV-049, Finley, Matthew G. 3DMP-034 Griffith, John IQSP-240 Hoffmeister, Kurt SD&A-400 IRIACV-051 Fiske, Lionel COIMG-306 Grigorakis, Kriton FAIS-171 Hollick, Joshua SD&A-266 Diaz, Gabriel IQSP-288 Flöry, Simon SD&A-138 Groh, Florian AVM-200 Holliman, Nicolas S. SD&A-243 Diaz-Amaya, Susana Forien, Jean-Baptiste COIMG-146 Groot, Herman G. IPAS-095 Hong, Soonyoung AVM-081, IMAWM-302 Fotio Tiotsop, Lohic HVEI-093, Grzeszczuk, Radek COIMG-378 IPAS-135 Díaz-Barrancas, Francisco HVEI-130 Guan, Haike FAIS-174 Hoover, Melynda ERVR-339 COLOR-259, COLOR-260 Frank, Ian SD&A-403 Guan, Meng IMAWM-271 Horio, Masaya ISS-273 Dietz, Henry G. COIMG-392, Frank, Tal COLOR-195 Guan, Yong MWSF-215 Horiuchi, Takahiko COLOR-122, ISS-228, ISS-327, MWSF-216 Fremerey, Stephan HVEI-069 Guarnera, Giuseppe C. COLOR-164 Dilhan, Lucie ISS-144 Frick, Raphael MWSF-116 MAAP-031 Hradis, Michal IRIACV-052 Author Index Dornaika, Fadi IPAS-311 Fridrich, Jessica MWSF-075, Guichard, Frédéric IQSP-166, Hsiao, James SD&A-403 Drummy, Lawrence F. MWSF-289, MWSF-290 IQSP-190, IQSP-370 Hu, Han COIMG-341 COIMG-249 Fröhlich, Jan ISS-229 Guo, Kai ISS-230 Hu, Litao IQSP-322, IQSP-323 Dubouchet, Lucas MAAP-033 Frossard, Pascal IQSP-284 Guo, Min COIMG-160 Hu, Sige COLOR-349, Dupret, Antoine ISS-329 Fry, Edward W. COLOR-281, Gursoy, Doga COIMG-156 COLOR-350, COLOR-351, IQSP-170, IQSP-239, Gutiérrez, Jesús HVEI-128 COLOR-352 IQSP-345, IQSP-348 Hu, Zhenhua IQSP-322, IQSP-323 Fu, Lan COIMG-248 Huang, Fang COIMG-058 E Fujihara, Yasuyuki ISS-143 Huang, Hsuan-Kai SD&A-155 Fujisawa, Takanori IPAS-181 H Huang, Wan-Eih COLOR-353 Eberhart, Paul S. COIMG-392, Hung, Chuang Hsin IQSP-170 ISS-228, ISS-330 Habas, Christophe 3DMP-003 Hur, Jaehyuk IQSP-320 Eckert, Regina COIMG-112 Haber, Casey VDA-387 Hwang, Alex D. HVEI-009 Edirisinghe, Eran IPAS-182 G Habib, Tanzima COLOR-121 Egiazarian, Karen O. Hadar, Ofer MWSF-076 COIMG-059, IPAS-137, Gadgil, Neeraj J. IPAS-178 Hahn, William HVEI-268 IPAS-179, IQSP-319, IQSP-371 Galloway, Ryan COIMG-111 Hajimirza, Navid HVEI-129 I Eicher-Miller, Heather A. Galstian, Alexander SD&A-142 Hakamata, Masashi ISS-104 IMAWM-301 Galvis, Laura V. COIMG-059, Hall, Heidi IQSP-240 Iacomussi, Paola AVM-202 Eicholtz, Matthew IRIACV-070 COIMG-307 Hampton, Cheri M. COIMG-249 Ichihara, Yasuyo G. COLOR-235 Eisemann, Leon ISS-229 Ganguly, Amlan AVM-257, Han, Jaeduk IPAS-135 Ieremeiev, Oleg IPAS-137 Ellison, Charles ISS-227 IRIACV-014 Han, Rungang COIMG-045 Ikehara, Masaaki IPAS-026, Elsey, Matt COIMG-378 Ganguly, Ishani HVEI-367 Hardeberg, Jon Yngve MAAP- IPAS-181 El Shawarby, Marwa A. AVM-110 Gao, Zhao IPAS-182 033, MAAP-060, MAAP-396 Imbriaco, Raffaele IPAS-313 El Traboulsi, Youssof IPAS-311 Gapon, Nikolay IPAS-029 Hardin, Jason IMAWM-085 Imran, Ali Shariq IRIACV-074 Endo, Kazuki IPAS-028 Garcia, Henrique D. IQSP-285 Harris, Todd IQSP-322, IQSP-323 Inoue, Michihiro ISS-104, ISS-272 Escobar, Rodrigo MOBMU-334 Gat, Shani COLOR-195 Hartz, Axel ISS-229 Inupakutika, Devasena Esparza, Laura MOBMU-334 Gaubatz, Matthew COLOR-196, Harvey, Joshua MAAP-060 MOBMU-335 Ewaisha, Mahmoud AVM-110 COLOR-198 Hasche, Eberhard G. Irshad, Muhammad IQSP-066 Ezhov, Vasily SD&A-142 Ge, Tao COIMG-007, MOBMU-205, MOBMU-206 Ivashkin, Peter SD&A-142 COIMG-193 Hata, Junichi COIMG-192 Gee, Tim IRIACV-070 Hauptmann, Alexander G. Geese, Marc AVM-019 IMAWM-187 F Gelautz, Margrit AVM-200, Hayakawa, Hiroki SD&A-099 J AVM-203, SD&A-138 Hayashi, Jeonghwang IRIACV-016 Facciolo, Gabriele IQSP-370 Geng, Yu IMAWM-084 Hayashishita, Ayuki SD&A-055 Jaber, Mustafa I. FAIS-176, Fach, Sebastian MWSF-218 George, Sony MAAP-060 He, Davi COLOR-349, IPAS-064

124 #EI2020 electronicimaging.org AUTHOR INDEX

Jacobson, Ralph E. IQSP-239, Kawahito, Shoji ISS-104, ISS-272, L Lin, Tzung-Han SD&A-155 IQSP-345 ISS-273, ISS-274 Lin, Zillion COLOR-349, Jain, Ramesh C. IMAWM-211 Kawai, Takashi SD&A-244 COLOR-350, COLOR-351, Jakob Elle, Ole IPAS-027 Kawakita, Masahiro SD&A-100 Labschütz, Matthias SD&A-138 COLOR-352 ERVR-382 Janatra, Ivan IQSP-284 Keel, Min-Sun ISS-103 Lafruit, Gauthier Lindle, James R. COIMG-125 Janowski, Lucjan HVEI-131 Kellman, Michael COIMG-112 Lam, Samuel IQSP-372 Lindstrand, Mikael MWSF-219 Jarvis, John IQSP-345, IQSP-348 Kelly, Sean C. FAIS-172 Lamont, Wesley SD&A-266 Ling, Suiyi HVEI-132 Jenkin, Robin B. AVM-040, Kenzhebalin, Daulet COLOR-349, Lanaro, Matteo P. COLOR-164 Linke, Christian MOBMU-304 MWSF-204 AVM-042, IQSP-239, COLOR-350, COLOR-351, Lancaster, Ian Liu, Heather COIMG-191 FAIS-151 IQSP-241, IQSP-345, COLOR-352 Lang, Kevin Liu, Huajian MOBMU-207, PLENARY-261 IQSP-347, IQSP-348 Khan, Sultan Daud IPAS-312, Lanman, Douglas MWSF-118, MWSF-218 Jeon, Byeungwoo MOBMU-332 IRIACV-072 Larabi, Mohamed Chaker Liu, Jiayin COLOR-195 IQSP-286 IQSP-287 Jeong, Kyeonghoon IPAS-134 Kim, Hyojin VDA-375 , Liu, Ruirui COIMG-193 Jiang, Fang HVEI-368 Kim, Irina ISS-230 La Riviere, Patrick COIMG-160 Liu, Ruixu IMAWM-087 Jiao, Jichao IMAWM-271 Kim, Jeonghun COIMG-342 Lebow, Paul S. COIMG-125 Liu, Weichen ERVR-364 Jiao, Yuzhong SD&A-098, Kim, Jinhyun IQSP-316 Le Callet, Patrick HVEI-128, Liu, Xin IPAS-095 SD&A-140 Kim, Jisu 3DMP-004 HVEI-132 Liu, Yu-Lun SD&A-155 Jin, Young-Gu ISS-103 Kim, Jonghyun IPAS-134 LeBlanc, John SD&A-403 Lo, Eric ISS-227 COIMG-192 Jogeshwar, Anjali K. IQSP-288 Kim, Joonseok ISS-103 Lee, Brian Lockerman, Yitzchak D. ISS-225 Johannsen, Andreas Kim, Juyeong ISS-274 Lee, Chang H. SD&A-265 Logan, Larry MWSF-017 MOBMU-252, MOBMU-253 Kim, Minsub AVM-081 Lee, Changkeun ISS-103 Loomis, John S. IRIACV-325, Johnson, Janet ERVR-363 Kim, Munchurl COIMG-342 Lee, Chi-Cheng SD&A-155 IRIACV-326 Jörg, Sebastian MWSF-118 Kim, Sae-Young ISS-103 Lee, Chulhee IPAS-133 López-González, Mónica Joshi, Alark VDA-374, VDA-386 Kim, Sang-Hwan ISS-105 Lee, Deokwoo 3DMP-004 AVM-088 ISS-328 Ju, Alexandra L. MAAP-120 Kim, Sung-su IQSP-315, Lee, Euncheol Loui, Alexander IMAWM-086 Jumabayeva, Altyngul IQSP-316, IQSP-320 Lee, Jimi AVM-108 Lucknavalai, Karen ERVR-364 COLOR-195 Kim, TaeHyung IQSP-315, Lee, Jimin ISS-105 Lukin, Vladimir V. IPAS-137, Jun, Hyunsu ISS-328 IQSP-316, IQSP-320 Lee, Jungmin IQSP-315, IQSP-316 IQSP-319, IQSP-371 VDA-375 Jun, Sung-Wook ISS-104 Kim, Yitae ISS-103 Lee, Kyungwon Lyons, Robert MWSF-024 Jung, Taesub ISS-103 Kim, Youngchan ISS-103 Lee, Sang-Jin ISS-105 Kim, Younghoon IQSP-315, Lee, Seung-hak ISS-328 IQSP-316 Lehmann, Matthias AVM-149 Kips, Robin MAAP-082 Lei, Yang COIMG-341 M COLOR-123 K Kitanovski, Vlado COLOR-197 Lemaillet, Paul ,

Klingspon, Jonathan ISS-227 IQSP-372 MacNeilage, Paul R. HVEI-385 Author Index Kagawa, Keiichiro ISS-104, Klomp, Sander R. IRIACV-049, Leman, Steven COIMG-046 Madabhushi Balaji, Muralidhar ISS-272, ISS-273, ISS-274 IRIACV-051 Leñero-Bardallo, Juan A. ISS-231 COIMG-147 VDA-376 Kaghyan, Sahak I. MOBMU-334 Knappen, Jackson IQSP-346 Lessley, Brenton Maddox, Justin D. MWSF-102 Kak, Avinash COIMG-006, Ko, ByoungChul AVM-108, Lesthaeghe, Tyler COIMG-355 Maggard, Eric IQSP-314, COIMG-293 IPAS-094 Leszczuk, Mikolaj HVEI-092 IQSP-321 ISS-189 Kakeya, Hideki SD&A-055 Kocsis, Peter IPAS-179 Leuze, Christoph Mahoney, Jeannette R. HVEI-394 Kamath, Ajith MWSF-023 Kokado, Tomoya ISS-273 Levitan, Carmel A. HVEI-367 Maki, Takashi FAIS-174 COIMG-006 Kamilov, Ulugbek S. COIMG-007 Kokaram, Anil IPAS-136 Li, Fangda , Maldonado, Kenly MOBMU-402 Kanda, Takuya IRIACV-016 Koltun, Vladlen AVM-262 COIMG-293 Malherbe, Emmanuel MAAP-082 Kane, Paul AVM-041 Komazawa, Akihito ISS-272 Li, Fengqiang COIMG-147 Malletzky, Nicole MOBMU-232 COIMG-391 Kang, Huixian MWSF-291 Komuro, Takashi ISS-273 Li, Jieyu Maltz, Martin COIMG-357 COIMG-263 Kang, Moon Gi AVM-081, Kondi, Lisimachos P. HVEI-068, Li, Lisa W. Manerikar, Ankit COIMG-006, IPAS-134, IPAS-135 IPAS-063 Li, Qin IMAWM-084 COIMG-293 Kant, Daniel MOBMU-252, Kondo, Keita ISS-104, ISS-272 Li, Shaomeng VDA-376 Manghi, Karan AVM-257, MOBMU-253 Koren, Norman IQSP-242, Li, Tiantian COIMG-194 IRIACV-014 Kapantagakis, Argyris FAIS-171 IQSP-347 Li, Tianyu COIMG-305 Mantiuk, Rafal HVEI-233, HVEI-131 Kapeller, Christian IRIACV-071, Kramb, Victoria COIMG-355 Li, Zhi IQSP-067 SD&A-138 Kudo, Takahiro IPAS-181 Liang, Yining ERVR-363 Mantke, Rene MOBMU-304 Karaschewski, Oliver Kuester, Falko ISS-227 Liao, Rui COIMG-193 Mao, Ruiyi IMAWM-185 HVEI-366 MOBMU-205, MOBMU-206 Kuhl, Michael AVM-257, Likova, Lora T. Mao, Runyu IMAWM-300 Karl, W. Clem COIMG-008 IRIACV-014 Lim, Jihyung ISS-328 Mars, Kamel ISS-104 Karnowski, Thomas IRIACV-050, Kumar, Srichiran ERVR-381 Lim, Moosup ISS-103 Masala, Enrico HVEI-093, IRIACV-070 Kupinski, Meredith COIMG-263, Lim, SukHwan ISS-230 HVEI-130 Kassim, Yasmin IMAWM-085 IPAS-310 Lin, Li MWSF-215 Masarotto, Lilian ISS-144 Katkovnik, Vladimir IPAS-179 Kuroda, Rihito ISS-143 Lin, Luotao IMAWM-301 Mas Montserrat, Daniel Katsaggelos, Aggelos K. Kwasinski, Andres AVM-257, Lin, Meng K. COIMG-192 IMAWM-221 COIMG-306 IRIACV-014 Lin, Qian IMAWM-184, Matsui, Takuro IPAS-026 Katsavounidis, Ioannis HVEI-131, Kwen, Hyeunwoo ISS-105 IMAWM-185, IMAWM-186, Matts, Paul COIMG-357 HVEI-132 Kwon, Kihyun ISS-328 IMAWM-188, IMAWM-221, Maucher, Johannes ISS-229 Katselis, George FAIS-171 Kwon, Yonghun ISS-103 IMAWM-269 Mazhar, Mohsin A. ISS-226

electronicimaging.org #EI2020 125 AUTHOR INDEX

McCann, John J. COLOR-236, Novozamsky, Adam VDA-389 Pilgermann, Michael Rashed, Hazem A. AVM-079 COLOR-279 Nuanes, Tyler S. COIMG-378 MOBMU-232, MOBMU-254, Reed, Alastair M. COLOR-197 McCourt, Mark E. HVEI-383 Nussbaum, Peter COLOR-121, MOBMU-277 Rêgo, Thais G. AVM-201 McCutcheon, Andy ERVR-295 COLOR-237 Pinez, Thomas IRIACV-071 Reid, Emma COIMG-249 Medrano, Maria COIMG-193 Pinilla, Samuel COIMG-059 Reinders, Stephanie MWSF-215 Mehmood, Asif COIMG-249 Pinson, Margaret H. HVEI-092 Relyea, Robert AVM-257, Mendelowitz, Eitan ERVR-360 Pinson, Margaret HVEI-090 IRIACV-014 Mendiburu, Bernard SD&A-245 O Piramuthu, Robinson Ren, Yonghuan David Metze, Florian IMAWM-187 MOBMU-303 COIMG-112 Meyer, Dominique E. ISS-227 Oesch, Sean IRIACV-070 Pitié, François IPAS-136 Ringis, Daniel J. IPAS-136 Milanfar, Peyman COIMG-089 Ogata, Hiroyuki IRIACV-016 Plutino, Alice COLOR-165 Ritz, Martin MAAP-020 Miller, Eric COIMG-044 Ohya, Jun IRIACV-016 Pocta, Peter HVEI-130 Rivera, Susan HVEI-369 Miller, Jack ERVR-339 Okaichi, Naoto SD&A-100 Polania, Luisa COIMG-357 Riza, Nabeel A. ISS-213, ISS-226 Miller, Michael COIMG-192 Okajima, Katsunori MAAP-061 Polian, Ilia MWSF-076 Rizzi, Alessandro COLOR-164, Millet, Laurent ISS-115 Okutomi, Masatoshi IPAS-028 Politte, David G. COIMG-193 COLOR-165 Milstein, Adam B. COIMG-046 Oldenbourg, Rudolf COIMG-160 Pollard, Stephen COLOR-196 Robitza, Werner HVEI-069 Mineyama, Yoshiyuki ISS-104 Olshausen, Bruno HVEI-401 Ponomarenko, Mykola IPAS-137, Rodríguez-Vázquez, Angel Mitra, Partha P. COIMG-192 Omer, Khalid IPAS-310 IPAS-179 ISS-231 Miyakawa, Kazuya IRIACV-016 Omura, Takuya SD&A-100 Pont, Sylvia HVEI-267 Roh, JuneChul AVM-255 Mizdos, Tomas HVEI-130 Ophus, Colin COIMG-159 Poovey, Christopher ERVR-360 Romanczyk, Paul AVM-150 Moebius, Michael SD&A-403 Ostrovsky, Alain ISS-144 Porras-Chaverri, Mariela Rossi, Maurizio COLOR-165 Mohammadrezazadeh, Iman O’Sullivan, Joseph A. COIMG-193 Routhier, Nicholas SD&A-154 HVEI-369 COIMG-007, COIMG-193 Pothukuchi, Vijay AVM-255 Rubel, Andrii S. IQSP-371 Mohammed, Ahmed K. Potter, Kristi VDA-374 Rubel, Oleksii IQSP-319, COLOR-083 Prakash, Ravi IQSP-285 IQSP-371 Mohan, K. Aditya COIMG-146 P Prakash, Tanmay COIMG-006, Rudd, Michael E. HVEI-208 Mohan, Shiwali IMAWM-220 COIMG-293 Ruiz, Pablo COIMG-306 Mok, Mark P. SD&A-098, Preza, Chrysanthe COIMG-157 Ruiz-Munoz, Jose F. IQSP-169 Author Index Padmanabhan, Ganesh HVEI-383 SD&A-140 Price, Bob ERVR-381 Ryu, Hyunsurk ISS-103 Palaniappan, Kannappan Molchanov, Pavlo IPAS-025 Price, Jeff IRIACV-070 IMAWM-085 Moon, Chang-Rok ISS-103 Psarrou, Alexandra IQSP-241 Pan, Xunyu IMAWM-309 Morche, Dominique ISS-145 Ptucha, Ray AVM-257, FAIS-172, Pandey, Nilesh COIMG-377 Morris, Evan D. COIMG-191 IRIACV-014 S Pappas, Thrasyvoulos N. Morris, Susan ERVR-337 Purwar, Ankur COIMG-357 HVEI-209 Mote, Kevin IMAWM-085 Puttemans, Steven 3DMP-036 Said, Naina IMAWM-270 Paquet, Romain ISS-144 Motohashi, Naoki FAIS-174 Salvaggio, Carl IMAWM-086 Pardo, Pedro José COLOR-259, Mou, Ting-Chen 3DMP-002 Sandin, Daniel SD&A-139 COLOR-260 Mower, Andrea ERVR-363 Sang, Xinzhu SD&A-012 Park, Chanhee VDA-375 Müller, Patrick AVM-149 Q Sangid, Michael COIMG-305 Park, JongHo ISS-105 Mulligan, Jeffrey B. HVEI-210 Sarangi, Viswadeep HVEI-268 Park, Minji IPAS-094 Murata, Maasa ISS-143 Qi, Jinyi COIMG-194 Sargent, Sara AVM-057 Park, Samuel COIMG-125 Murray, Carolyn HVEI-365 Qi, Wenyuan COIMG-194 Sarkar, Subrata COIMG-005 Park, Younghyeon MOBMU-332 Quintero, Roger Figueroa Sarkar, Utpal IQSP-373 Parra-Medina, Deborah IQSP-168 Saron, Clifford HVEI-369 MOBMU-334 Quinto, Eric COIMG-044 Sasaki, Hisayuki SD&A-100 N Paul, Larry SD&A-065 Quiroga, Carlos IQSP-169 Savakis, Andreas COIMG-377, Pedersen, Marius COLOR-197, IMAWM-114 Nakayama, Shota ISS-143 MAAP-033 Sawyer, William SD&A-403 Nam, Jaeyeal AVM-108 Peizerat, Arnaud ISS-145 Schiffers, Florian COIMG-126, Naske, Rolf-Dieter SD&A-054 Pelah, Adar HVEI-268 R COIMG-306 Nayab, Aysha IMAWM-270 Peli, Eli HVEI-009 Schniter, Philip COIMG-005 Nelson, Les ERVR-381 Pellizzari, Casey COIMG-113 Raake, Alexander HVEI-069 Schonfeld, Dan SD&A-139 Netchaev, Anton ISS-227 Peltonen, Sari IPAS-096 Rabb, David COIMG-111 Schörkhuber, Dominik AVM-200 Newman, Jennifer MWSF-215 Pelz, Jeff IQSP-288 Rabizadeh, Shahrooz IPAS-064 Schulze, Jürgen P. ERVR-363, Nezveda, Matej SD&A-138 Peng, Tao IRIACV-325, Rachlin, Yaron COIMG-046 ERVR-364 Ngahara, Hajime ISS-273 IRIACV-326 Raksincharoensak, Pongsathorn Schwartz, Alexander I. IQSP-346 Nieto, Roger G. IQSP-168, Pérez, Ángel Luis COLOR-259, ERVR-380 Schwarz, Franziska MOBMU-275, IQSP-169 COLOR-260 Raman Kumar, Umamaheswaran MOBMU-276, MOBMU-278 Niezgoda, Stephen R. Pérez, Patrick AVM-109 3DMP-036 Schwarz, Klaus MOBMU-275, COIMG-247 Perkinson, Joy SD&A-403 Ramirez, Juan M. COIMG-307 MOBMU-276, MOBMU-278, Nikolidakis, Savas FAIS-171 Perrot, Matthieu MAAP-082 Ramzan, Naeem HVEI-129 MOBMU-331, MOBMU-336 Nikou, Christophoros IPAS-063 Perrotton, Xavier AVM-109 Ranga, Adithya Pravarun Reddy Scriven, John MWSF-117 Nirmal, Don L. IRIACV-325, Petra, Eleni FAIS-171 AVM-109 Sebro, Ronnie A. MWSF-217 IRIACV-326 Piadyk, Yurii ISS-225 Rangarajan, Prasanna Seidel, Hans-Peter MAAP-107 Niu, YunFang ERVR-362 Pichard, Céline ISS-144 COIMG-147

126 #EI2020 electronicimaging.org AUTHOR INDEX

Seitner, Florian AVM-203, Stamm, Matthew C. MWSF-119, U Wang, Peng SD&A-012 SD&A-138 MWSF-217 Wang, Qiaosong MOBMU-303 Semenishchev, Evgeny A. Stanciu, Lia IMAWM-302 Wang, Quanzeng IQSP-317 IPAS-029, IPAS-062, Steinebach, Martin MWSF-116, Uchic, Michael COIMG-355 Wang, Song COIMG-248 IRIACV-015 MWSF-118, MWSF-218 Uhrina, Miroslav HVEI-130 Wang, Xing IMAWM-271 Seo, Sungyoung ISS-103 Stevenson, Robert L. COIMG-379, Ulichney, Robert COLOR-195, Wang, Yin 3DMP-035, COLOR-198 Servetti, Antonio HVEI-093 COIMG-391 COLOR-196, , COLOR-195, COLOR-350 MWSF-397 Sespede, Braulio SD&A-138 Stewart, Samaruddin K. Wang, ZiWei ERVR-362 Shah, Megna COIMG-248 MWSF-102 Ullah, Habib IPAS-312, Watanabe, Hayato SD&A-100 Shams, Ladan HVEI-365 Stiles, Noelle R. HVEI-367 IRIACV-072, IRIACV-074 Watanabe, Shuhei MAAP-030 COLOR-083 Shankar, Karthick IMAWM-269 Štolc, Svorad IRIACV-071 Ullah, Mohib , Watnik, Abbie T. COIMG-125 IPAS-312 Shankar Prasad, Swaroop Su, Guan-Ming IPAS-178 IMAWM-270, , Watson, Kyle COIMG-111 IRIACV-072 IRIACV-074 MWSF-076 Suero, María Isabel COLOR-259, , Webb, Tyler COIMG-193 AVM-203 Shao, Zeman IMAWM-300 COLOR-260 Unger, Astrid Webber, James W. COIMG-044 Sharma, Avinash AVM-124 Sugawa, Shigetoshi ISS-143 Uricar, Michal AVM-079 Weibel, Nadir ERVR-363 Sharma, Gaurav MWSF-399 Sullenberger, Ryan COIMG-046 Werth, Sören MOBMU-254 Sharma, Ravi MWSF-117 Sumner, Robert AVM-038 West, Ruth ERVR-360 Sharma, Sharad ERVR-223, Sun, Fang ERVR-362 V Whiting, Bruce COIMG-193 ERVR-224 Sun, He COIMG-047 Wiebrands, Michael ERVR-337 Shaw, Mark COLOR-353 Sun, Jin ERVR-381 Vaillant, Jérôme ISS-144, Wiegand, Sven IMAWM-183 Shen, John COIMG-378 Sun, Shih-Wei 3DMP-002 MAAP-106 Wijnhoven, Rob G. IRIACV-049 Shen, Tak Wai SD&A-098 Sun, Yufang MWSF-292 van Aardt, Jan FAIS-127 Wijntjes, Maarten HVEI-267 Shen, Xun ERVR-380 Suominen, Olli IPAS-096 Vandewalle, Patrick 3DMP-036 Wilcox, Laurie M. HVEI-010 Shevkunov, Igor IPAS-179 Suto, Hiroto ERVR-380 van de Wouw, Dennis W. Willett, Rebecca COIMG-045 Shimojo, Shinsuke HVEI-354, Szeto, Christopher W. IPAS-064 IRIACV-051 Williams, Don IQSP-240 HVEI-367 van Egmond, Rene HVEI-209 Williams, Haney W. AVM-299 Shin, Jang-Kyoo ISS-105 van Kasteren, Anouk HVEI-091 Williams, Peter J. ERVR-361 Shkolnikov, Viktor COIMG-341 T Van Tonder, Derek ERVR-295 Williamson, Jeffrey F. COIMG-193 Shreve, Matthew ERVR-381, van Zuijlen, Mitchell HVEI-267 Willomitzer, Florian COIMG-126, IMAWM-220 COIMG-147 Tada, Naoki MAAP-032 van Zwanenberg, Oliver Shroff, Hari COIMG-160 IQSP-241 Winer, Eliot ERVR-339 Takahashi, Runa MAAP-061 COLOR-281, Sicard, Gilles ISS-145 Vargas, Edwin COIMG-307 Winton, Trevor SD&A-141 Siddiqui, Hasib A. COIMG-359 Takarae, Yukari HVEI-369 Wiranata, Anton COLOR-353 Tanaka, Masayuki IPAS-028 Vashist, Abhishek AVM-257, Sidorov, Oleksii MAAP-060 IRIACV-014 Wirbel, Emilie AVM-109 Silva, Alessandro IQSP-066 Tanaka, Midori COLOR-122, Wischgoll, Thomas VDA-388 Author Index COLOR-164 Vasilieva, Natalia SD&A-142 Silva, Claudio ISS-225 Velten, Andreas COIMG-126 Wong, Sala ERVR-361 Silva, Verônica M. AVM-201 Tanaka, Norihiro MAAP-031 Woods, Andrew J. ERVR-337, Tandon, Sarthak IQSP-346 Vigier, Toinon HVEI-128 Simmons, Jeffrey P. COIMG-248 Villa, Umberto COIMG-007 SD&A-141, SD&A-266 Simske, Steven AVM-299, Tanguay, Armand R. HVEI-367 Wright, Lucas ERVR-339 Tarlow, Sarah May HVEI-365 Villarreal, Ofelia COIMG-307 MOBMU-402 Viola, Célia MAAP-106 Wu, Hanzhou MWSF-021, Singer, Amit COIMG-158 Tastl, Ingeborg MAAP-120 MWSF-022, MWSF-291 Taylor, Doug AVM-124 Vlahos, Nikos FAIS-171 Singla, Ashutosh HVEI-069 MOBMU-334 Wueller, Dietmar COLOR-282, Temochi, Akane ERVR-340 Vo, Thinh H. Siniscalco, Andrea COLOR-165 IPAS-029 IQSP-018 Teynor, William SD&A-403 Voronin, Viacheslav , Sizyakin, Roman IPAS-029 IPAS-062 Wynn, Charles COIMG-046 Thomas, François-Xavier IQSP-370 , IRIACV-015 Skinner, Jim HVEI-132 COIMG-251 Thomas, Zach ERVR-360 Voyles, Paul M. Skorka, Orit AVM-041 IPAS-063 Toirrisi, Savina SD&A-265 Vrigkas, Michalis Smeaton, Corrie COIMG-046 Vu , Tuan-Hung AVM-109 Smejkal, Jan IQSP-170 Tokic, Dave AVM-124 X Smithson, Hannah MAAP-060 Tokola, Ryan IRIACV-070 Snijders, Chris HVEI-091 Tominaga, Shoji MAAP-031 Xiang, Xiaoyu IMAWM-186, Sobh, Ibrahim AVM-110 Tong, Jonathan HVEI-010 W IMAWM-188 Sommer, Bjorn SD&A-265 Tran, Quoc N. MAAP-082 Xie, Jinrong MOBMU-303 Son, Donghyun IPAS-094 Triantaphillidou, Sophie Wagner, Fabian COIMG-126 Xie, Wanze ERVR-363 Song, Bing IPAS-064 COLOR-281, IQSP-170, Waibel, Alex IMAWM-187 Xin, Daisy COIMG-341 Song, Qing IPAS-178 IQSP-239, IQSP-241, Waliman, Matthew COIMG-343 Xiong, Nian COLOR-161 Song, Seongwook ISS-230 IQSP-345, IQSP-348 Waller, Laura COIMG-112, Xionog, Weihua IRIACV-048 Soni, Ayush IMAWM-086 Trongtirakul, Thaweesak COIMG-152 Xu, Shaoyuan IMAWM-186 Soulier, Laurent ISS-329 IPAS-177, IPAS-180 Wang, Congcong IPAS-027 Xu, Yujian COLOR-195, Spehar, Branka HVEI-393 Tsagkatakis, Grigorios FAIS-171, Wang, Danli ERVR-362, COLOR-196 Spencer, Mark COIMG-113 FAIS-176 SD&A-011 Spivey, Brett COIMG-111 Tsakalides, Panagiotis FAIS-171 Wang, Haoyu SD&A-139 St. Arnaud, Karl AVM-124 Tsumura, Norimichi ERVR-380 Wang, Jiangfeng MWSF-022 Stadtlander, Birte IMAWM-183 Tyler, Christopher W. HVEI-395 Wang, Jing HVEI-209 Stafford, Jason W. COIMG-111 Wang, Mingming IQSP-039, IQSP-288

electronicimaging.org #EI2020 127 AUTHOR INDEX

Y Yim, JoonSeo IQSP-315, Z Zhang, Song AVM-258 IQSP-316, IQSP-320, ISS-328 Zhang, Xingguo ERVR-380 Yogamani, Senthil AVM-079, Zhang, Xinpeng MWSF-021, AVM-080 Yahiaoui, Lucie AVM-080 Zakhor, Avideh COIMG-343 MWSF-022, MWSF-291 IQSP-370 Yamaai, Toshifumi FAIS-174 Yoo, Hyunjin IQSP-067 Zalczer, Eloi Zhao, Min IMAWM-302 Yamaguchi, Takuro IPAS-026 Yoon, Jiho IPAS-133 Zelenskii, Alexander IPAS-029, Zhao, Tianyu COIMG-193 SD&A-101 Yamauchi, Masaki Yoon, Seung-Chul FAIS-175 IPAS-062 Zhao, Xinwei MWSF-119 Yang, Cheoljong IQSP-315, Yoshikawa, Keisuke SD&A-244 Zerva, Matina C. IPAS-063 Zhao, Yaxin IMAWM-271 COIMG-045 IQSP-316 Yotam, Ben-Shoshan COLOR-195 Zhang, Anru Zhao, Ziyi COLOR-196 Yang, Geng IMAWM-084 You, Jane IMAWM-084 Zhang, Chenyu COIMG-251 Zhdanova, Marina IPAS-062 IPAS-097 SD&A-012 Yang, Guoan Yousefhussien, Mohammed Zhang, Di Zhen, Ada COIMG-379 IQSP-373 Yang, Yi IQSP-314, FAIS-176 Zhang, Dingnan IRIACV-325, Zheng, Nan ERVR-362 IRIACV-326 Yao, Yuwei MWSF-022 Yousfi, Yassine MWSF-075 Zhou, Zhong IMAWM-187 Yarlagadda, Sri Kalyan Yu, Hee-Jin FAIS-308 Zhang, Menghe ERVR-364 Zhu, Fengqing IMAWM-300 IMAWM-300 Yu, Hongkai COIMG-248 Zhang, Mengxi COIMG-194 Zhu, Yuteng COLOR-163 IQSP-314 Yasutomi, Keita ISS-104, ISS-272, Yue, Kang ERVR-362 Zhang, Runzhe , Zitova, Barbara VDA-389 IQSP-321 ISS-273, ISS-274 Yüksel, Benjamin MOBMU-331 Zmudzinski, Sascha MWSF-116 COIMG-379 Yendo, Tomohiro ERVR-340, Zhang, Shuang Zujovic, Jana HVEI-209 SD&A-099, SD&A-101 Zhang, Shuanyue COIMG-193 Author Index

128 #EI2020 electronicimaging.org imaging across applications

IS&T DIGITAL LIBRARY

OPEN ACCESS PAPERS: EI Symposium Proceedings Journal of Perceptual Imaging (JPI) Select papers from other conferences

LIBRARY HOLDINGS

ARCHIVING: COLOR and IMAGING DIGITAL PRINTING and JOURNAL of Imaging Digitization, Preservation, FABRICATION Science and Technology and Access

Also proceedings from European Conference on Colour in Graphics, Imaging, and Vision (CGIV) and International Symposium on Technologies for Digital Photo Fulfillment (TDPF). Members have unlimited access to full papers from all IS&T conference proceedings. Members with JIST as their subscription choice, have access to all issues of the journal.

www.ingentaconnect.com/content/ist/ your resource for imaging knowledge SAVE THE DATE

2021

SAN FRANCISCO, CA • 17-21 JANUARY

IS&T

imaging.org