Vrije Universiteit Brussel

3D Visualisation

Schelkens, Peter; Demeulemeester, Aljosha; Krzeslo, Eric; Lambert, Peter; Van Looy, Jan; Luong, Hiep; Munteanu, Adrian; Vervaeke, Jasmien; Wittebrood, Rimmert

Published in: iMinds Insights

Publication date: 2014

Document Version: Final published version

Link to publication

Citation for published version (APA): Schelkens, P., Demeulemeester, A., Krzeslo, E., Lambert, P., Van Looy, J., Luong, H., ... Wittebrood, R. (2014). 3D Visualisation. iMinds Insights.

General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.

Take down policy
If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Download date: 30 Sep. 2021

3D VISUALIZATION TECHNOLOGIES
Transforming the way we deal with information - from consumption to interaction

“WE SEE A FUTURE WHERE 3D SENSORS COMPLETELY REPLACE 2D SENSORS.” - Eric Krzeslo

BASED ON INTERVIEWS WITH ACADEMIC AND INDUSTRY EXPERTS - WWW.IMINDS.BE/INSIGHTS

EXECUTIVE SUMMARY

ARE YOU READY FOR YOUR HOLOTABLE?

“THE HOLOGRAPHIC FUTURE MAY BE MUCH CLOSER THAN THE 8 TO 10 YEARS CURRENTLY EXPECTED.”

ELIMINATING THE FRAME

3D imaging is already part of our visual environment, not only in the forms of 3D movies and TV shows but also as the backbone of countless 2D visualization systems. As it becomes more immersive, 3D will enable whole new types of entertainment and professional applications. Yet true holographic 3D visualization—3D that won’t require viewers to wear special glasses and stare at a flat screen—lies on the other side of solving some key research challenges. iMinds is taking an end-to-end approach to enabling next-generation 3D visualization, investigating the required techniques and technologies from image capture to processing and presentation.

COMPILING THE PICTURE

One way of generating next-generation 3D images is to capture multiple viewing angles with multiple cameras. Yet it’s financially and physically impractical to deploy a camera for every possible view. This is why researchers are seeking ways to fill the gaps between captured viewpoints—for example, mining available image data and statistics to fill blanks between recording angles, interpolating and creating backgrounds, matching repeating images and recreating blocked objects. Researchers are also seeking to better understand what underlies our ability to perceive 3D images in the first place, tackling the problem of stereoblindness that makes today’s 3D techniques ineffective—and in doing so, helping to open up new possibilities for 3D visualization.

RENDERING THE PICTURE

With 3D visualization, what was once a single, flat image is now a massive block of data requiring immense processing power to render.



Full holographic 3D today would need exascale computing power and a massive supply of energy. Today, researchers are investigating compression approaches to streamline processing demands and relieve network pressure. That’s been the focus of iMinds’ work with Graphine, a Flemish company whose innovative texture-streaming technology improves graphics quality and load times while reducing memory usage, disk and download size.

ADVANCING 3D VISUALIZATION TECHNOLOGY

While holography is the most advanced 3D application being pursued, researchers are also working to improve traditional stereoscopy as an interim step—for example, by using autostereoscopic imaging that doesn’t require special glasses. iMinds is collaborating with companies like Belgian TV manufacturer TP Vision to advance this technology and more fully explore the potential of autostereoscopic 3D TV.

For first-generation holotables, the lynchpin technology will be spatial light modulators. These currently cost tens of thousands of dollars apiece, and researchers are working to make them more economical in order to realize the holographic future.

ENVISIONING THE FUTURE

The impact of successful 3D visualization will be immense. iMinds will be working with industry and research partners to further investigate the potential applications of 3D technology. One example is its research with SoftKinetic, a company looking to use 3D sensors and software to revolutionize the way our devices interact with the environment around us. The ultimate aim: bringing the advantages of holographic 3D technology to domains such as culture, health and the economy.

FOR MORE INFORMATION

about iMinds’ expertise in the field of 3D visualization, please contact Peter Schelkens, [email protected], +32 2 629 1681

ARE YOU READY FOR YOUR HOLOTABLE?

HOW HOLOGRAPHIC 3D WILL REVOLUTIONIZE OUR VISUAL EXPERIENCE

“NEXT-GENERATION 3D MARKS A TECHNOLOGICAL STEP FORWARD.”

Picture it: the World Cup final has gone into extra time and a penalty kick is about to decide the match. The striker lines up the shot, shifting tensely from foot to foot... and you’re right beside him. Virtually, that is—thanks to holographic 3D visualization technology. You and millions of other fans are there at field level, each choosing your own unique vantage point to watch the critical moment unfold.

It may sound fanciful, but in fact Japan made delivering that kind of 3D experience a key part of its bid to host the 2022 FIFA World Cup. Over the coming decade, 3D visualization will advance by leaps and bounds, enriching sports, entertainment and gaming as well as professional, collaborative applications such as telesurgery and computer-aided design. In each case, next-generation 3D marks both a technological step forward and, more profoundly, a shift away from simple, one-way visual consumption to immersive, visual interaction.

3D imaging is already a growing part of our visual environment—most conspicuously in the form of 3D movies and TV shows, but also as the backbone of countless 2D visualization systems. Yet true holographic 3D visualization lies on the other side of solving some key challenges related to capturing, processing and presenting 3D images. Research organizations around the world are working actively to address those challenges, including iMinds, where teams are taking a uniquely end-to-end approach to enabling next-generation 3D visualization.

[Figure: HOLOGRAPHIC CAPTURING and HOLOGRAPHIC RENDERING. Setups for recording and reconstructing a hologram. Recording setup: laser, beam expander, beam splitter, illumination beam, object, object beam, reference beam, mirror, hologram, CCD. Rendering setup: laser, reading beam, beam expander, SLM, hologram, object beam, reconstructed image.]

3D: THEN, NOW AND NEXT

3D imaging is hardly a new phenomenon. The first 3D patents were filed in the 1890s and the first 3D film, The Power of Love, screened in 1922. Early 3D technologies simulated the functioning of human eyes by using dual cameras to capture slightly overlapped images. With considerable technological refinement, that stereoscopic approach continues to be what most viewers would associate with 3D today: walk into a movie theater during summer blockbuster season and you’ll find ushers handing out ‘3D glasses’ to help the brain interpret the stereoscopic images.


“THE NEXT WAVE OF HOLOGRAPHY REQUIRES THE USE OF DIGITAL TECHNOLOGIES IN AN END-TO-END SYSTEM.”

While stereoscopy simulates depth in a two-dimensional image, it is not truly 3D and has limitations related to image quality and usability. To start with, today’s audiences are used to very high-definition 2D images, making the relatively low image quality of stereoscopic 3D unsatisfying. On top of that, many viewers find 3D glasses or headgear annoying, and a small percentage reports headaches and nausea when watching stereoscopic 3D content.

TOWARD HIGH-DEFINITION HOLOGRAPHIC 3D

Initially emerging from physicist Dennis Gabor’s efforts to improve the electron microscope, holography involves recording the scattering of light around an object, then recreating the pattern as a 3D image. The invention of the laser in the 1960s made holography more practical, enabling a wide range of current applications for everything from data storage to anti-counterfeiting. Yet laser-based holography also has its limitations, especially when it comes to representing complex, multi-angle imagery.

The next wave of holography—the kind required to realize Japan’s vision for the 2022 FIFA World Cup, for example—requires the use of digital technologies in an end-to-end system that is ultimately capable of projecting three-dimensional moving images in space. In such a scenario, viewers will be able to interact with the image in any number of ways—adjusting the viewing angle, zooming in or out, rotating objects—to personalize the viewing experience. Many of the foundational technologies for this kind of next-generation 3D solution have already been developed and are in use today, often to augment traditional 2D imagery.

Anyone who has used Google’s Street View application has had a taste of one of today’s key pre-3D technologies: free viewpoint visualization. Free viewpoint captures data from multiple angles and synthesizes it into a single image that can be rendered in either two or three dimensions.

This kind of capture usually involves deploying standard or slightly augmented 2D cameras in an array, or a single standard camera outfitted with plenoptic microlenses to capture different pixels. With these technologies, individual images are captured in two dimensions and provide the basis for a panoramic or 3D model that’s created, rendered and transmitted to the viewer. 3D images can also be created from imagery captured by a standard 2D camera in motion.

TERMINOLOGY: PLENOPTIC CAMERA
Plenoptic cameras capture information about a ‘light field’ versus a conventional photographic image. The light field provides 4D information and requires an array of microlenses for recording.
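To make the free-viewpoint idea concrete, here is a minimal sketch that synthesizes an in-between view by linearly blending the two captured camera angles nearest to the requested one. It is an illustration written for this text, not an iMinds system: real pipelines also use camera calibration, depth estimation and far more sophisticated interpolation.

```python
# Minimal free-viewpoint sketch: blend the two captured views nearest to the
# requested viewing angle. Camera angles and images are assumed inputs.
import numpy as np

def synthesize_view(images, camera_angles, requested_angle):
    """images: list of HxWx3 arrays; camera_angles: sorted angles in degrees."""
    angles = np.asarray(camera_angles, dtype=float)
    right = int(np.searchsorted(angles, requested_angle))
    if right == 0:
        return images[0]        # requested angle lies before the first camera
    if right >= len(angles):
        return images[-1]       # requested angle lies beyond the last camera
    left = right - 1
    span = angles[right] - angles[left]
    w = (requested_angle - angles[left]) / span if span else 0.0
    # Linear blend: the closer camera contributes more to the synthesized view.
    blended = (1.0 - w) * images[left] + w * images[right]
    return blended.astype(images[left].dtype)

# Three cameras at 0, 15 and 30 degrees; ask for the (uncaptured) 10-degree view.
cams = [np.full((480, 640, 3), v, dtype=np.float32) for v in (10.0, 120.0, 230.0)]
intermediate = synthesize_view(cams, [0.0, 15.0, 30.0], requested_angle=10.0)
```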

[Figure: The end-to-end visualization process, from image capture through processing and rendering to visualization and representation.]

Beyond capturing a view of the world outside the narrow rectangle of a two-dimensional frame, developers also need tools for analyzing the movement of objects with depth and in motion—consistently and realistically. Video game creators have done a lot of pioneering work in this area already, creating synthetic 3D content with support from technologies like time-of-flight cameras that interpret 3D environments.

Time-of-flight cameras calculate depth based on the speed of light: an object is illuminated with a laser and the time it takes for the light to return to its source is measured. Belgium’s SoftKinetic used time-of-flight technology in Sony’s PlayStation 4 console to track human gestures—for use with games like the latest edition of the popular Just Dance franchise. The potential applications of time-of-flight technology extend far beyond gaming, however. From robotics to topography, a vast number of fields could make use of its ultra-sensitive depth data.

While multi-camera and time-of-flight technologies provide a basis for 3D imaging, a number of research challenges need to be addressed before they can be incorporated into an end-to-end holographic visualization solution—‘end-to-end’ because it has to cover the full chain of production from capture and rendering to re-creation and presentation.

TACKLING THE RESEARCH CHALLENGES

IMAGE CAPTURE: COMPILING THE 3D PICTURE

In the multi-camera setup required for free viewpoint visualization, image capture is a phased process. First, visual data is gathered by the various cameras. Next, those images have to be calibrated—in other words, linked together seamlessly. After calibration, the content needs to be assembled in a step developers refer to as registration.
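The time-of-flight principle described above comes down to simple arithmetic. The sketch below is illustrative only: real sensors, including SoftKinetic's, measure phase shifts of modulated infrared light per pixel rather than timing a single pulse.

```python
# Minimal sketch of time-of-flight depth: depth follows from how long the
# emitted light takes to bounce back to the sensor.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the object: the light covers the path twice (out and back)."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A 10-nanosecond round trip corresponds to roughly 1.5 metres.
print(depth_from_round_trip(10e-9))   # ~1.499 m
```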


Once registration is done, the final set of visual contents will likely contain some occlusions: regions that have been recorded imperfectly, blocked from view or simply left out of the capture. With today’s multi-camera approaches, occlusion is more or less unavoidable. Today’s most sophisticated screens can support roughly 200 different viewpoints, but it would be expensive and unwieldy to deploy 200 individual cameras to cover them all.

“RESEARCHERS TODAY ARE LOOKING AT TECHNIQUES TO FILL BLANKS BETWEEN RECORDING ANGLES.”

So the research challenge becomes one of finding a way to fill the visual gaps—both textures and depth cues. Researchers today are looking at techniques for mining the image data and statistics that are available to fill blanks between recording angles, and at other methods for interpolating and creating backgrounds, matching repeating images and recreating blocked objects.

The process of producing accurate 3D videos with a mixture of recorded and artificial imagery is called inpainting. Its success depends on it being seamless—on avoiding ‘artifacts’ or obvious flaws that remind the viewer that what he or she is watching is not wholly real. Currently, inpainting is done in a semi-automated way: the technology has not yet matured enough for computers to perform it automatically in real time.

PROCESSING AND RENDERING: BUILDING THE PICTURE

With 3D visualization, what was once a single, flat image is now a massive block of data containing a figure, its surroundings and background viewed from almost every possible angle. Today, each additional view adds about 10 percent more bits to the overall file size. By the time 200 views are reached, a file can be massive—and complex.

Rendering such an image requires immense processing power, especially if it happens in real time as the image is being recorded. Full holographic 3D currently would require exascale computing power, roughly 100 times greater than is available, and about 30 or 40 megawatts of energy.
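A back-of-the-envelope calculation illustrates the storage growth described above, under the simplifying assumption that every extra view costs roughly 10 percent of a single view's bits thanks to the redundancy between neighbouring views. The single-view size used here is hypothetical.

```python
# Illustrative multi-view storage arithmetic, not a measurement.
def multiview_size_gb(single_view_gb: float, views: int, extra_cost: float = 0.10) -> float:
    """One full view, plus a fraction of that size for every additional view."""
    return single_view_gb * (1.0 + extra_cost * (views - 1))

BASE_GB = 25.0  # assumed size of a single high-definition view, in gigabytes
for n in (1, 9, 200):
    print(f"{n:3d} views -> {multiview_size_gb(BASE_GB, n):6.1f} GB")
# 1 view stays at 25.0 GB; 9 views need 45.0 GB; 200 views balloon to 522.5 GB.
```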

THE IMINDS TELESURGERY PROJECT

While holographic surgery is still far away, iMinds has been researching ways to create digital operating rooms that allow real-time consultation by experts all over the world. The research contributed to Barco’s Nexxis product, an IP-centric video management tool currently used in more than 100 operating rooms across Europe.

iMinds is currently involved with developing standards for MPEG Free View Television (FTV), looking at ways multi-view video can be efficiently compressed.

Fortunately, not all views have to be rendered at once. Reconstructing a subset of all possible views conserves processing power and bandwidth. iMinds researchers and others are investigating various compression approaches that would allow a system to extract only the views needed at a given moment.

To better create 3D models and optimize transmission, researchers are also exploring a variety of alternate presentation systems—such as point clouds, which plot data using a three-dimensional coordinate system and are easier to compress. Moving in parallel to this research are efforts to advance compression standards, including a volumetric extension of the JPEG 2000 standard as well as the development of mesh- and voxel-based encoding algorithms and texture-coding approaches.

TERMINOLOGY: MESH AND VOXEL
Voxel-based encoding is closest to reality but, because of that, requires more data. Its ideal application would be, for example, a hospital environment where verisimilitude is vital. Mesh-based encoding is leaner and less realistic, but its lighter data demands make it ideal for gaming.

Even when a subset of views is transmitted, extremely low latency is essential. For applications like telesurgery, patient safety depends on an end-to-end information delay of no more than 20 milliseconds. For entertainment applications, the tolerance might be as much as 150 milliseconds—though no sports fan wants to hear neighbors through the wall cheering about a goal that hasn’t yet been displayed on his or her own TV.

iMinds has conducted research into latency requirements and thresholds, including the iCocoon project, which looked at low-latency video conferencing in collaboration with Alcatel-Lucent.

Much about processing, rendering and the like remains an open question today. Researchers continue to work to understand what’s really required to reconstruct an image in holographic 3D. Based on the outcome of those investigations, teams will be able to tune their compression algorithms and visualization devices for optimal performance.
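A minimal sketch of the "extract only the views needed at a given moment" idea discussed above, combined with a check against the 20- and 150-millisecond budgets quoted in the text. The viewpoint numbering and selection logic are hypothetical illustrations, not an iMinds or MPEG algorithm.

```python
# Decode only the views closest to the viewer, and verify the latency budget.
LATENCY_BUDGET_MS = {"telesurgery": 20, "entertainment": 150}

def views_to_decode(available_views, viewer_position, count=3):
    """Pick the few captured viewpoints closest to where the viewer actually is."""
    return sorted(available_views, key=lambda v: abs(v - viewer_position))[:count]

def within_budget(application, measured_delay_ms):
    """True if the end-to-end delay stays inside the application's tolerance."""
    return measured_delay_ms <= LATENCY_BUDGET_MS[application]

captured = list(range(200))                                # 200 captured viewpoints
print(views_to_decode(captured, viewer_position=45))       # -> [45, 44, 46]
print(within_budget("telesurgery", 35.0))                  # False: too slow for surgery
print(within_budget("entertainment", 35.0))                # True: fine for sports fans
```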


VISUALIZATION AND REPRESENTATION: AUTOSTEREOSCOPIC TECHNOLOGY

“THE LYNCHPIN TECHNOLOGY FOR FIRST-GENERATION HOLOTABLES WILL BE SPATIAL LIGHT MODULATORS.”

The last (and in some ways most significant) research challenge with respect to 3D has to do with visualization itself. While holography is the end goal, it is not fully achievable in the short term. But researchers are actively advancing models that will deliver a marked improvement upon traditional stereoscopy.

One of the most promising of these is autostereoscopic imaging, producing 3D images that can be viewed without special glasses or headgear. Such images can be created in one of two ways: by using microlens arrays or by using light-field technologies.

The light-field approach currently supports a maximum of 200 views, resulting in a relatively low resolution, especially given that consumers are used to 4K visuals on their TV screens. (Current light-field autostereoscopic displays would be the equivalent of VGA quality, by comparison.) With microlenses, the number of views is unlimited but a problem of ‘angular resolution’ can arise that makes it hard for the eye to discern small details in the image.

Unlike classical stereoscopic imagery, with autostereoscopic visualization viewers can physically rotate the image or move themselves around it. At current resolutions, these shifts in views are noticeable—but work is underway to increase the available views to 500, which should yield a significant improvement in image quality.
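The resolution trade-off can be made concrete with a little arithmetic: an autostereoscopic panel shares its native pixels among all the views it emits, so more views means fewer pixels per view. The panel resolution below is illustrative, not the specification of any particular display.

```python
# Rough pixels-per-view arithmetic for a multi-view autostereoscopic panel.
def pixels_per_view(panel_width: int, panel_height: int, views: int) -> float:
    return (panel_width * panel_height) / views

WIDTH, HEIGHT = 3840, 2160          # a 4K/UHD panel: about 8.3 megapixels
for views in (9, 200, 500):
    mp = pixels_per_view(WIDTH, HEIGHT, views) / 1e6
    print(f"{views:3d} views -> ~{mp:.2f} megapixels per view")
# 9 views leave ~0.92 MP per view; 200 views leave only ~0.04 MP, which is why
# much higher native resolutions (8K and beyond) are being pursued.
```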

DISPLAYS

Conventional (2D), light field, multiview and holographic display technologies.

2D LCD DISPLAY
• HD-4K-8K Resolution
• High Frame Rate
• High Dynamic Range
• Stereoscopy with glasses
• Green Power

HALF/FULL PARALLAX LIGHT FIELD DISPLAY (N PROJECTORS, PINHOLE SCREEN)
• Reduced horizontal/vertical resolution (70 Mpel)
• 200+ viewing angles, 180° field of view (FOV)
• Large set-up, High Computing Power

HALF(/FULL) PARALLAX MULTIVIEW LCD DISPLAY
• Reduced horizontal resolution
• 32 viewing angles, limited FOV
• Compact display
• Green Power

HALF/FULL PARALLAX HOLOGRAPHIC DISPLAY (N LASERS, N SLMS, LIGHT WAVEFRONT)
• Reduced horizontal/vertical resolution
• 1 view per arc-min, 180° FOV
• Compact set-up
• MW power consumption


THE ‘HOLOTABLE’ AND OTHER HOLOGRAPHIC DISPLAYS

In its 2022 FIFA World Cup bid, Japan envisioned establishing holographic viewing centers in 208 countries that would allow spectators to watch the action in real time, projected up from the horizontal surface of a holographic table.

The lynchpin technology for first-generation holotables will be spatial light modulators (SLMs)—thousands of which are required to generate the display’s visual field. Ideally, these SLMs would have the same pixel size as a wavelength of light: at two microns currently, they are roughly five times too large to produce an optimal image (SLM-generated holographic images have a resolution of about 2K).

Moreover, SLMs are time-consuming to manufacture and cost thousands per piece, making mass production or consumer applications prohibitive in the short term. Manufacturing SLMs from carbon nanotubes is a focus for several research teams today, however, and if this reveals a way to efficiently mass-produce SLMs, the holographic future may be much closer than the eight to 10 years currently expected.

THE WAY FORWARD

Solving the various research challenges surrounding 3D visualization demands the dedicated effort of multidisciplinary research teams that can collectively address the end-to-end requirements from capture to representation. In addition to a strong history of conducting this type of collaborative, multidisciplinary research, iMinds also has the infrastructure to host teams pursuing visualization-related research.

3D technologies are already integrated into the way we consume information. As they become more fully immersive—en route to true holography—3D will shift that consumption to a new kind of interaction, enabling whole new types of entertainment and professional applications.

“3D WILL SHIFT INFORMATION CONSUMPTION TO A NEW KIND OF INTERACTION, ENABLING WHOLE NEW TYPES OF ENTERTAINMENT AND PROFESSIONAL APPLICATIONS.”

MAKING VIRTUAL WORLDS LOAD FASTER

The gaming industry is pushing the boundaries of 3D graphics, creating larger and fuller worlds, levels and objects. But the more detailed the graphics, the more memory and storage are required. Founded by researchers who met on a former iMinds project, Flemish company Graphine has developed an innovative approach to texture-streaming technology that improves graphics quality and load times while reducing memory usage, disk and download size. We spoke to Graphine co-founder ALJOSHA DEMEULEMEESTER to find out how the Granite SDK technology is revolutionizing 3D gaming graphics.

“GRANITE IMPROVES GRAPHICS QUALITY BY A FACTOR OF FOUR AND REDUCES LOAD TIMES BY A FACTOR OF FIVE.”

Q: What is texture mapping, and how is it used?

Aljosha Demeulemeester: In 3D gaming, objects are represented by two types of data: geometric data and texture data. Geometric data represents the structure of an object. Texture data defines its surface properties: the bark on a tree, the features of human skin. The level of detail can be quite complex. Structurally, the side of a building can be represented by as few as four geometric data points, one for each corner. Texturally, though, you have bricks, windows and doors with different properties like colour, roughness, reflection. These elements can be very intricate.

Q: That has an impact on a gaming system’s resources.

Aljosha Demeulemeester: Processing all that detail can be very draining in terms of memory and storage. You can only have as much detail and texture as you have memory on your graphics card. Big worlds mean a lot of textures, and that creates long load times. And you have to get the game to the customer somehow, either physically or via download, making storage space another limitation. Game developers are always hitting the boundaries of hardware. That’s why we developed our Granite technology. It cuts video memory usage by 50 to 75 percent and storage requirements by 60 percent or more. Those savings allow us to improve graphics quality by a factor of four and reduce load times by a factor of five.

Q: How does it work?

Aljosha Demeulemeester: Traditionally, textures are loaded in bulk. When you start a new level, the game will load every texture in that part of the world. Granite uses a concept called texture streaming to dramatically shorten load times: the textures in the level are broken up into smaller pieces and load incrementally.

Q: Is texture streaming a new concept?

Aljosha Demeulemeester: No. What is new is how small we’re able to make those pieces, which we call ‘tiles’. Other texture-streaming technologies might have a player walk up to a building, and the textures both inside and outside the building will be loaded even if the player hasn’t entered yet. With Granite, we load only the textures in front of the virtual camera. As the camera moves, new textures are loaded. That helps us dramatically reduce load times, freeing up memory for even better graphics and gameplay. You get bigger worlds, more detailed images and sharper, higher quality textures. Think of watching a feature length HD movie on YouTube. If you had to load the full movie before you started playing, you’d have to wait for quite a while—hours on a slow connection. But if you load the movie frame by frame as you watch, you can start watching instantly.
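A minimal sketch of the tile-based, camera-driven streaming idea Demeulemeester describes: keep only the tiles in front of the virtual camera resident in memory and evict the rest as the camera moves. This is an illustration of the concept, not the Granite SDK API.

```python
# Toy tile streamer: resident tiles track whatever the camera currently sees.
class TileStreamer:
    def __init__(self, tile_store):
        self.tile_store = tile_store      # maps tile id -> texture data on disk
        self.resident = {}                # tiles currently loaded in video memory

    def update(self, visible_tile_ids):
        """Called every frame with the tiles the camera can currently see."""
        needed = set(visible_tile_ids)
        # Evict tiles that fell out of view to free video memory.
        for tile_id in list(self.resident):
            if tile_id not in needed:
                del self.resident[tile_id]
        # Load, incrementally, only the tiles that just became visible.
        for tile_id in needed - self.resident.keys():
            self.resident[tile_id] = self.tile_store[tile_id]

streamer = TileStreamer({i: f"tile-{i}-pixels" for i in range(10_000)})
streamer.update(visible_tile_ids=[42, 43, 44])   # walking toward a building
streamer.update(visible_tile_ids=[43, 44, 45])   # camera moved: 42 evicted, 45 loaded
```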


“GRANITE CUTS STORAGE SIZE BY 60 PERCENT AND TEXTURE MEMORY CONSUMPTION BY 80 PERCENT.”

Q: What games are using this technology?

Aljosha Demeulemeester: iMinds helped us connect with Belgium’s largest game developer, Larian, during the Language Learning in an Interactive Environment project—LLINGO. Larian included our technology in their Dragon Commander title in August 2013. It’s been a very productive partnership. Normally, a development team makes technology decisions early in the process—years before a title ships. But with Dragon Commander, they were able to integrate Granite in the final months of development, cutting storage size by 60 percent and texture memory consumption by 80 percent. We are providing our tech for other games as well—including the just-announced sci-fi title, Get Even, from The Farm 51, the studio behind the Painkiller series.

Q: What was iMinds’ role in helping Graphine get started?

Aljosha Demeulemeester: It all came from iMinds’ LLINGO project, which was looking at different ways of using game technology for education. My section focused on improving high-quality graphics, with the idea that the better the graphics are, the more immersive and effective the learning experience will be. That’s where I met the rest of the Graphine team. We formed the company using a lot of the technology and concepts developed during LLINGO.

Q: How has iMinds remained involved?

Aljosha Demeulemeester: We took part in iMinds’ iBoot program, which is sort of a boot camp in running a start-up that, at the time, was delivered through six two-day workshops. At each stage we refined our pitch, and even made a 15-minute presentation to potential investors. Our pitch was selected as the best, but the most important part was the opportunity for us to work together as a team outside of the lab. We knew we meshed well with our research, but we needed to find out how to work as business partners.

The other iMinds program we were involved in was iStart. It provided seed funds for initial research, helped us test our assumptions, investigate the market and contact potential customers. And it gave us office space to work from, which is critical for a small company in its early stages. Through all of this, we had the support of the iMinds team, who provided not only technical expertise but also connected us to other researchers, potential investors and customers.

Q: Do you foresee any applications for Granite outside of gaming?

Aljosha Demeulemeester: Absolutely. Flight and military simulators, architectural visualization and design… we’re looking into all of these. Anything that uses real-time 3D visualization can benefit from this type of technology.

Q: What’s next for your research into 3D visualization?

Aljosha Demeulemeester: Well, one of the biggest challenges we’re facing now is storage. Breaking a huge virtual world up into smaller, discrete pieces certainly helps with loading and displaying the graphics, but the size of the world doesn’t change—and that requires storage. If you’ve got a 600 gigabyte world, you still need to get that to the customer somehow. So one of our research paths is focused on storage and texture compression. We are already boosting Blu-Ray capacity by a factor of 2.5. We’re hoping to increase this even further with similar or better graphics quality.

A second research area is looking at bringing our technology to mobile platforms, where it will have a lot of utility because of the memory and storage limitations associated with mobile devices. We’re working on that with iMinds. Along with our texture-streaming technology, we think this research will give game developers the tools they need to create bigger, better and more expansive worlds—on whatever platform they choose.

ABOUT GRAPHINE
Combining academic roots with industry experience, Graphine delivers rock-solid middleware and high-quality services to the gaming and 3D visualization industries. Its middleware—Granite SDK—minimizes memory usage, storage size and loading times while allowing the use of massive amounts of unique texture data. Graphine also helps clients integrate its technology into their products, providing guidance and best practices on how to deliver high-fidelity, real-time 3D graphics as well as optimization of technology stacks.

BRINGING THE FUTURE TO YOUR TV

For television manufacturers like TP Vision, innovation is all about the viewer experience. New features need not only to enrich people’s enjoyment of their favorite programs but also do so in ways the marketplace may not yet fully expect. RIMMERT WITTEBROOD, Innovation Lead for Picture Quality and Displays at TP Vision, explains how that mission has led TP Vision to explore the potential of 3D TV.


Q: What is driving TV companies’ efforts to add 3D as a feature?

Rimmert Wittebrood: If you look at TV technology as it has evolved—in terms of picture quality, sound, etc.—3D is a big experience that has been missing. Television makers need to cater to the potential wishes of consumers, so 3D has been on our minds for many years. Of course, consumers haven’t necessarily been saying, “We would really like our television to deliver a 3D picture.” But one doesn’t always know what’s missing if one hasn’t experienced it in the first place. Nobody asked for a smartphone, for example: the telecom industry started adding smart features to devices and consumers awakened to the possibilities. 3D could be the same.

Q: Given that, what’s involved in bringing 3D to the market?

Rimmert Wittebrood: It requires the full chain of supply: you need the right technology, the right content and the right cost point. Things started to come together for 3D a few years ago, which led to the introduction of the first 3D-enabled TVs in 2010. I think, though, that we need to see big investments by the content industry and the panel suppliers—the companies that make LCD screens—to deliver the best 3D viewing experience.

Q: How big of a market are you looking at?

Rimmert Wittebrood: Maybe the current market estimates for 3D are over-positive, but it’s hard to project how any feature will evolve or mature. You see, we don’t make ‘3D TVs’: 3D is a feature of televisions, along with many other features. From that perspective, 3D has been implemented in more than half the TVs made by any given supplier. And of course there are millions and millions of TV sets in the world, so the potential market is enormous.

“WE DON’T MAKE 3D TVS: 3D IS A FEATURE OF TELEVISIONS, ALONG WITH MANY OTHER FEATURES.”

Q: You mention the need for LCD panel advancements. What are some of the other barriers currently limiting the adoption of 3D in televisions?

Rimmert Wittebrood: One of the biggest hurdles is that current stereoscopic 3D technology requires glasses, which makes it cumbersome to use. As well, crosstalk—interference between the right and left eyes that causes strain—can make the experience less comfortable. Autostereoscopic 3D does not require glasses, but it has other drawbacks: cost, quality and restrictions on where the viewer can sit in relation to the display. For those reasons, autostereoscopic 3D is likelier for business-to-business markets in the near term, not the consumer TV marketplace. For commodity televisions today, stereoscopic is the only option.

Q: You did some research into autostereoscopic 3D as part of the iMinds 3D 2.0 project. How did that project come about?

Rimmert Wittebrood: In 2010, we were involved with the JEDI project—Just Exploring DImensions—an EU-wide ultra-high definition and 3D initiative focused on exploring the next image formats and how we must prepare the broadcast chain to support them. It was through the other Belgian participants in that project that we got invited to join iMinds’ 3D 2.0. That project was multifaceted: there was our company, there was Alcatel-Lucent looking at videoconferencing, there was GrassValley working on camera systems, and a network of university partners supporting it all. We worked very closely with the universities on our own applications, including processing for autostereoscopic displays. This led to the development of a fully working prototype, which has been displayed at various large trade shows like the IFA in Berlin and the CES in Las Vegas.

Q: What did that research yield?

Rimmert Wittebrood: We gained a good understanding of the challenges and where more research needs to be done. For autostereoscopic 3D, you put a lens overtop of an ultra-HD panel to steer the light of the pixels in various directions. This effectively reduces the resolution of the display by the number of viewing angles, or views, that you create. So if you have nine views, your resolution is reduced by a factor of nine. What this means is you need to start with a very high-resolution display. In 2012 we were working with 4K by 2K. Using that in combination with our proprietary lens design, the 3D picture quality would have been comparable to High-Definition TV, but that’s no longer sufficient for a market where Ultra High-Definition is rapidly being introduced. There are trends now toward 8K displays. That might be the innovation we need.

The other issue to resolve relates to the limited viewing positions. Because of the way the 3D image is created, the viewing positions are divided into a number of cones in front of the TV. If you’re outside the cone, the image inverts. This is very uncomfortable, so we have been looking into this as well. The most basic solution is user tracking via a camera—face detection—so the TV can direct the cone at the viewer. But this requires robust face detection, complex processing and adaptations in the display itself, which has an impact on the image quality, the TV specifications and cost. So today, 3D TV is necessarily stereoscopic.

Q: But there have been advances on that front as well. What differentiates the newest generation of stereoscopic 3D from the previous?

Rimmert Wittebrood: There are two aspects, not so much related to processing as to the panel technology. One has been making the liquid crystal in the displays faster to react. This increases quality, with less crosstalk. The other is cost: the panel industry is focused on driving costs down, delivering better quality at a lower price.

ABOUT TP VISION
TP Vision develops, manufactures and markets Philips-branded TV sets in Europe, Russia, the Middle East, South America and select countries in Asia-Pacific. A leader in the hospitality industry, it provides televisions to the majority of the world’s major international and national hotel groups, as well as individual hotels, hospitals, cruise ships and other professional facilities.

“THE iMINDS’ 3D 2.0 PROJECT LED TO THE DEVELOPMENT OF A FULLY WORKING PROTOTYPE, WHICH HAS BEEN DISPLAYED AT VARIOUS LARGE TRADE SHOWS.”

SEEING A 3D FUTURE

3D sensors and software will revolutionize the way devices interact with our environment—and how we interact with those devices. Formed by the merger of two companies spun out of research groups at the Vrije Universiteit Brussel, with support from the Université Libre de Bruxelles, SoftKinetic has become a leader in both 3D cameras and 3D gesture-recognition software. We spoke with co-founder ERIC KRZESLO about how new 3D vision technologies will affect our daily lives.

Q: What will we gain from more advanced 3D sensors and software?

Eric Krzeslo: At the highest level, 3D vision can bring a real sense of human vision to our computers, making them much more intelligent. Take our mobile devices, for example. We call them smartphones, but really they’re quite dumb. You always have to interact with them, tell them what you want them to do. Instead of constantly asking for inputs and information, a device enabled with true 3D vision will understand the user and the user’s environment, anticipating the user’s needs much better by actually ‘seeing’ what’s going on around him or her and responding intelligently.

Q: How far away are we from that?

Eric Krzeslo: We’re getting to that point pretty quickly. Sensor technology is getting small enough and cheap enough to use in a wide variety of devices, and there are viable applications for these types of technologies. There are improvements to be made in resolution, the distance our sensors and cameras can see, how they deal with lighting… we’re working on all that. But we’re really not all that far off.


“ONE OF OUR PATENTS LETS US CAPTURE 3D IMAGES MUCH MORE EFFICIENTLY THAN COMPETING TIME-OF-FLIGHT CAMERAS.”

Q: SoftKinetic develops sensors and software. What are the key applications and use cases you’re looking at?

Eric Krzeslo: We started with gesture recognition, which we call ‘active control’, with the user controlling a device; providing it with input. Gaming is a big use case here, obviously, with motion-sensing technology like what you’d find in a PlayStation 4. We’re also looking at something we call ‘context awareness’, where the device gathers information about its environment and analyzes what’s happening without direct input from the user. There are already early applications of this kind, for instance, in the automotive industry. You might have a device that analyzes road conditions, weather or even the driver—and then sounds an alert if he looks like he’s falling asleep or is driving too fast for the traffic conditions. The third area we’re looking at is ‘real-world acquisition’, which is basically 3D scanning for 3D printing: a technology that could be used in a wide variety of industries, from tailoring custom clothes to making perfectly fitted braces for healthcare.

Q: What technologies have you developed to enable these use cases?

Eric Krzeslo: Key to our products and research is our time-of-flight camera, which is a ‘true’ 3D sensing technology. While stereoscopic technologies simply compute 3D images using 2D data, the time-of-flight camera actually senses the 3D by emitting infrared light that accurately measures depth. That gives us much more data and precision to work with. One of our patents, the Current Assisted Photonic Demodulation (CAPD), lets us capture 3D images much more efficiently than competing time-of-flight cameras. We use the depth of the sensor to collect more photons—and more information—more efficiently, using less energy and fewer pixels.

We’ve also developed middleware called iisu (which stands for “the interface is you”), which is designed to improve human gesture recognition. Full-body tracking identifies the user, separates him or her from the environment, and tracks the position and movement of the hands, shoulders, feet, chest, etc. This becomes a skeleton model that game or app developers can use to create an avatar that responds to the users’ movements. We also have a second part to the iisu middleware that is much more focused on hands and fingers and can be used for close-range applications such as mobile devices—for things like virtual keyboards and controlling applications and games on your smartphone without actually touching the screen.
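As an illustration of the kind of skeleton model described above, the sketch below shows named joints with 3D positions and a simple gesture query an application might make. The joint names and the API are hypothetical, not the iisu middleware interface.

```python
# Toy skeleton model: named joints with 3D positions, polled each frame.
from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    x: float
    y: float
    z: float      # depth, as delivered by the 3D sensor (metres)

class Skeleton:
    def __init__(self, joints):
        self.joints = {j.name: j for j in joints}

    def hand_above_head(self) -> bool:
        """Example gesture query (assumes the y axis points up)."""
        return self.joints["right_hand"].y > self.joints["head"].y

frame = Skeleton([
    Joint("head", 0.0, 1.7, 2.1),
    Joint("right_hand", 0.3, 1.9, 2.0),   # raised hand
])
print(frame.hand_above_head())            # True
```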


“OUR SOFTWARE IS PART OF THE PLAYSTATION 4.”

Q: Your gesture-recognition technology has already hit the market, correct? How far away are your other technologies?

Eric Krzeslo: Our software is part of the PlayStation 4 and has already been incorporated into Ubisoft’s Just Dance 2014 game. We’ve also announced other new games that will use our technology: Just Dance 2015, Rabbids Invasion: The Interactive TV Show and a game called Commander Cherry’s Puzzled Journey. That game is quite interesting, as it uses the shape of the user’s body to build layers and platforms in the game. In our context-awareness division, we’ll have technology on the market next year. We’re working with Delphi in Detroit to develop sensor technology for cars. And for our 3D reconstruction software, we’re working with MakerBot to develop a high-precision 3D scanner for use with 3D printers that is accessible for consumers. Most of these scanners run up to $10,000; ours will retail for a few hundred.

Q: How did SoftKinetic come to be?

Eric Krzeslo: We were born when iMinds was initiated at ETRO in the Vrije Universiteit Brussel’s Department of Electronics and Informatics, where a lot of research was being done in 3D sensing and processing. Simultaneously, the Université Libre de Bruxelles gave its support to our visionary entrepreneurs in need of computer vision expertise. Two companies resulted from that research. SoftKinetic SA was looking at a natural way of interacting with 3D content, while Optrima NV was building 3D cameras. We started using Optrima’s prototypes for 3D interaction research and found them much more effective than the fully developed cameras already on the market. We brought the two companies together so we could control both the software and the hardware. Now we can optimize the complete technology flow, from the pixels to the camera system to the software.

Q: What’s next for the company?

Eric Krzeslo: Well, in the mid-term, we’re looking to strengthen our position in gaming, mobile devices, 3D scanning and the automotive industry. This may sound strange, but over the long term we want to be invisible. We see a future where 3D sensors completely replace 2D sensors. I mean, we’ve got cameras in almost everything. You’ve probably got two on your phone, one on your laptop, maybe one on your TV. And we want to be providing that technology. We want to be ubiquitous, in all these devices, helping interpret what’s happening and anticipating user needs—not reading minds, but close to it. And we’ll get there by developing sensors that are smaller and less expensive with greater resolution, can cover longer distances, and with software extremely optimized for active control, context-awareness and real-world acquisition.

Basically, we want to bring you and your environment into your devices—making them smarter and more responsive, without you even noticing.

ABOUT SOFTKINETIC
SoftKinetic is a Brussels-based semiconductor and software development company that provides 3D time-of-flight sensors and cameras as well as advanced middleware for building real-world acquisition, context awareness and active control applications. Its patented technologies provide a direct way of acquiring 3D information about objects, people and the entire environment. SoftKinetic’s 3D vision solutions will revolutionize man/machine and machine/world interactions in domains such as entertainment and gaming applications, automation, security, automotive, and health and lifestyle by providing real-time, easy-to-compute 3D images.

“WE FOUND STRONG DIFFERENCES IN HOW WELL PEOPLE PERCEIVE STEREO, WITH 6 TO 10 PERCENT BEING STEREOBLIND.”

SEEING A WAY AROUND STEREOBLINDNESS

Today’s consumer 3D technology is stereoscopic—imitating the way people’s brains interpret visual information to perceive depth in a process known as stereopsis. Yet for a portion of the population, stereoscopic 3D doesn’t work: they are affected by what’s known as stereoblindness. At Ghent University, iMinds’ Media and ICT (MICT) research group has developed a new, cheaper and easier way to test for stereoblindness. We spoke to researchers JAN VAN LOOY and JASMIEN VERVAEKE about their test and the potential impact of stereoblindness on the future of 3D visualization.

Q: What is stereopsis and how does it factor into 3D visualization technology?

Jan Van Looy: Because our eyes are located in slightly different positions, they give our brains slightly different images of the world. These images are compared and combined in the brain, creating a single, 3D image of the world. That process is called stereopsis, and it’s one—but just one—of the many ways we can perceive depth.

Jasmien Vervaeke: Most 3D visualization technology is based on stereoscopy, which mimics stereopsis by making your brain believe that a flat, 2D image has depth. In its most basic sense, think of watching today’s 3D films with coloured 3D glasses. On screen, you’d have one image projected with a red tint, and an image with a cyan tint, filmed by cameras that were offset by the distance between your pupils. Your glasses would have one cyan lens and one red lens too, ensuring that each eye would only receive a single image. Your brain combines the two images into one and interprets this as a 3D picture.

Q: How does stereoblindness affect that?

Jan Van Looy: Stereoblindness is a condition that interferes with stereopsis, preventing you from seeing the stereopsis in stereoscopic imagery. It tends to affect all types of stereoscopic imagery, since the various technologies function in basically the same way. So whether you have coloured glasses or polarizing filters, someone suffering from stereoblindness won’t be able to make a 3D image using either.

Jasmien Vervaeke: It’s caused by a variety of factors—there’s not really one single cause. The literature suggests about 10 percent of the population has stereoblindness to a certain degree. It can be caused by several eye disorders, such as a lazy eye or a substantial difference in visual acuity between your eyes, among others. And there’s no immediate treatment or cure for it at the moment.
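The red/cyan principle Vervaeke describes can be sketched in a few lines: the left-eye image supplies the red channel and the right-eye image the green and blue (cyan) channels, so each filtered lens passes only one view. This is a bare-bones illustration; real anaglyph encoders also correct colour and contrast.

```python
# Minimal red/cyan anaglyph composition from two eye-distance-offset images.
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """left_rgb, right_rgb: HxWx3 uint8 images from the left and right cameras."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]       # red channel from the left view
    anaglyph[..., 1:] = right_rgb[..., 1:]    # green + blue (cyan) from the right view
    return anaglyph

left = np.zeros((2, 2, 3), dtype=np.uint8);  left[..., 0] = 200   # reddish left view
right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 1:] = 150 # cyan-ish right view
print(make_anaglyph(left, right)[0, 0])       # -> [200 150 150]
```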


“OUR TEST IS FREE AND OPEN SOURCE SOFTWARE AND EASY TO ADMINISTER ON ANY HARDWARE.”

Jan Van Looy: Most people with stereoblindness don’t even know they have it. There are so many other ways to perceive depth, as we mentioned before, that your brain would more than make up for it in the real world. It’s only when you’re trying to watch a 3D movie, for example, that you might notice. And, of course, if you get tested.

Q: What made you investigate a new stereopsis testing method? Was something wrong with the previous tests?

Jasmien Vervaeke: The traditional tests for stereopsis are expensive, for one, and they require special equipment. Ours is very simple, by comparison. It’s basically a random dot stereogram, so two images that look like random noise, with one to 10 squares shifted slightly within the images. It’s displayed on a 3D TV, and we have participants put on glasses and count the squares. If you’re not stereoblind, the squares pop out of the screen. Each number of squares is shown twice, resulting in 20 trials, and the participants end up with a score out of 20.

Our test can be used on any 3D screen, and we’ve made it open-source and free for download through SourceForge at sourceforge.net/projects/stereogramtest.
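A minimal sketch of how such a random dot stereogram pair can be generated: both images are identical noise except for square regions copied a few pixels to the left in the second image, which is the disparity that makes the squares pop out for viewers with working stereopsis. The sizes and disparity below are illustrative, not the parameters of the MICT test.

```python
# Toy random-dot-stereogram generator with horizontally shifted square patches.
import numpy as np

def random_dot_pair(size=200, squares=3, square_size=20, disparity=4, seed=0):
    rng = np.random.default_rng(seed)
    left = rng.integers(0, 2, size=(size, size))       # random black/white dots
    right = left.copy()
    for _ in range(squares):
        r = rng.integers(0, size - square_size)
        c = rng.integers(disparity, size - square_size)
        patch = left[r:r + square_size, c:c + square_size]
        # Copy the patch a few pixels to the left in the right-eye image.
        right[r:r + square_size, c - disparity:c + square_size - disparity] = patch
    return left, right

left, right = random_dot_pair(squares=5)
print((left != right).any())   # True: the images differ only inside the squares
```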

Q: You performed the test at an iMinds conference, is that right?

Jan Van Looy: We had about 130 volunteers and tested them all to see how well they were able to perceive the squares. About six percent couldn’t see any squares at all. Some had scores that were higher than random, but still not very good. But the large majority scored 15 or above.

Jasmien Vervaeke: This suggests the generally accepted figure of 10 percent of people suffering from stereoblindness may be a little high. It may be more between six and 10 percent, and there’s gradation. In other words, it’s not necessarily binary—that is, that you have stereoblindness or you don’t. There are stages in between.

Q: How are you following up on your initial research?

Jasmien Vervaeke: We’re working on a new project right now, an exploratory study of how well children perceive stereoscopic 3D: looking at which age stereoscopy develops, when children reach an acceptable level and what factors might affect this ability. It’s generally assumed that stereopsis develops after age three, but we don’t know much more than that.

Jan Van Looy: We’re taking a look at kids aged six to 12, using our test as well as examining physical features such as the distance between pupils—which changes with age—and cognitive factors, things like that. We want to see if there are correlations between these factors and the children’s performance on the stereoscopy test. Early results seem to indicate that it is not the distance between the pupils, as is often assumed, but the development of the brain—more particularly, how well both sides of the brain communicate with one another—that determines how well you see stereoscopic 3D.

Q: From your perspective, why is this research important?

Jan Van Looy: Well, stereoscopy is really coming back into the spotlight. Many people have noticed that 3D stereoscopy in the home, such as 3D TV, for example, hasn’t really taken off. And that’s due to a number of factors. People don’t like wearing the glasses, for one, but it also has to do with quality of experience, which of course is affected by how well people see 3D.

Jasmien Vervaeke: As 3D visualization develops and we start looking at autostereoscopic screens that don’t require glasses, knowing more about how the brain sees 3D will help refine the technology. Being able to measure stereopsis and detect stereoblindness will help unlock a larger 3D market and also potentially unearth new applications for 3D visualization.

Jan Van Looy: In iMinds projects like ASPRO+, we’re working with engineers and companies that are developing new technologies using autostereoscopy and multiview displays, different screens and processes. Our test can support the development of these technologies by providing valuable information about the consumers of 3D content. Before you create the technology, you need to understand the audience. We’re contributing to that understanding.

LIST OF SUBJECT-MATTER EXPERTS WHO CONTRIBUTED TO THIS PAPER

ALJOSHA DEMEULEMEESTER CEO Graphine

ERIC KRZESLO CMO SoftKinetic

PETER LAMBERT iMinds - MMLab - Ghent University Associate Professor

JAN VAN LOOY iMinds - MICT - Ghent University Assistant Professor

HIEP LUONG iMinds - IPI - Ghent University Postdoctoral Researcher

ADRIAN MUNTEANU iMinds - ETRO - VUB Professor

PETER SCHELKENS iMinds - ETRO - VUB Professor

JASMIEN VERVAEKE iMinds - MICT - Ghent University Researcher

RIMMERT WITTEBROOD Innovation Lead Picture Quality & Displays, TP Vision

PETER SCHELKENS Professor & Research Director at iMinds - ETRO - VUB

FOR MORE INFORMATION

about iMinds’ expertise in the field of 3D visualization, please contact Peter Schelkens [email protected], +32 2 629 1681

iMinds editorial team: Sven De Cleyn, Koen De Vos, Thomas Kallstenius, Els Van Bruystegem, Wim Van Daele, Stefan Vermeulen

Copy: Ascribe Communications Design: Coming-Soon.be Photography: Lieven Dirckx

©2014 iMinds vzw - CC-BY 4.0. You are free to share and adapt the content in this publication with reference to iMinds.

Additional content will be published on www.iminds.be/insights.