Project-Team THOTH

Activity Report 2017

Learning visual models from large-scale data

IN COLLABORATION WITH: Laboratoire Jean Kuntzmann (LJK)
RESEARCH CENTER: Grenoble - Rhône-Alpes
THEME: Vision, perception and multimedia interpretation

Table of contents

1. Personnel
2. Overall Objectives
3. Research Program
   3.1. Designing and learning structured models
   3.2. Learning of visual models from minimal supervision
   3.3. Large-scale learning and optimization
   3.4. Datasets and evaluation
4. Application Domains
   4.1. Visual applications
   4.2. Pluri-disciplinary research
5. Highlights of the Year
6. New Software and Platforms
   6.1. ACT-detector
   6.2. Joint object-action learning
   6.3. BlitzNet
   6.4. LCR-Net
   6.5. CKN-seq
   6.6. CKN-TensorFlow
   6.7. Stochs
   6.8. MODL
   6.9. Loter
   6.10. SPAMS
   6.11. MP-Net
   6.12. LVO
   6.13. SURREAL
7. New Results
   7.1. Visual recognition in images
      7.1.1. Dynamic Filters in Graph Convolutional Networks
      7.1.2. LCR-Net: Localization-Classification-Regression for Human Pose
      7.1.3. Incremental Learning of Object Detectors without Catastrophic Forgetting
      7.1.4. BlitzNet: A Real-Time Deep Network for Scene Understanding
      7.1.5. SCNet: Learning semantic correspondence
      7.1.6. Auxiliary Guided Autoregressive Variational Autoencoders
      7.1.7. Areas of Attention for Image Captioning
      7.1.8. Enhancing Energy Minimization Framework for Scene Text Recognition with Top-Down Cues
      7.1.9. Learning deep face representations using small data
      7.1.10. Invariance and Stability of Deep Convolutional Representations
      7.1.11. Weakly-supervised learning of visual relations
      7.1.12. Learning from Synthetic Humans
   7.2. Visual recognition in videos
      7.2.1. Detecting Parts for Action Localization
      7.2.2. Learning from Web Videos for Event Classification
      7.2.3. Learning Motion Patterns in Videos
      7.2.4. Learning Video Object Segmentation with Visual Memory
      7.2.5. Learning to Segment Moving Objects
      7.2.6. Joint learning of object and action detectors
      7.2.7. Action Tubelet Detector for Spatio-Temporal Action Localization
   7.3. Large-scale statistical learning
      7.3.1. Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure
      7.3.2. Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
      7.3.3. A Generic Quasi-Newton Algorithm for Faster Gradient-Based Optimization
      7.3.4. Catalyst Acceleration for Gradient-Based Non-Convex Optimization
   7.4. Machine learning and pluri-disciplinary research
      7.4.1. Predicting Transcription Factor Binding Sites with Convolutional Kernel Networks
      7.4.2. Loter: Inferring local ancestry for a wide range of species
      7.4.3. High Dimensional Classification with combined Adaptive Sparse PLS and Logistic Regression
      7.4.4. Probabilistic Count Matrix Factorization for Single Cell Expression Data Analysis
8. Bilateral Contracts and Grants with Industry
   8.1. MSR-Inria joint lab: scientific image and video mining
   8.2. MSR-Inria joint lab: structured large-scale machine learning
   8.3. Amazon
   8.4. Intel
   8.5. Facebook
   8.6. Xerox Research Center Europe
   8.7. Naver
9. Partnerships and Cooperations
   9.1. Regional Initiatives
   9.2. National Initiatives
      9.2.1. ANR Project Macaron
      9.2.2. ANR Project DeepInFrance
   9.3. European Initiatives
      9.3.1.1. ERC Advanced grant Allegro
      9.3.1.2. ERC Starting grant Solaris
   9.4. International Initiatives
      9.4.1. Inria Associate Teams Not Involved in an Inria International Labs
      9.4.2. Inria International Partners
      9.4.3. Participation in Other International Programs
   9.5. International Research Visitors
10. Dissemination
   10.1. Promoting Scientific Activities
      10.1.1. Scientific Events Organisation
      10.1.2. Scientific Events Selection
         10.1.2.1. Member of the Conference Program Committees
         10.1.2.2. Reviewer
      10.1.3. Journal
         10.1.3.1. Member of the Editorial Boards
         10.1.3.2. Reviewer - Reviewing Activities
      10.1.4. Invited Talks
      10.1.5. Scientific Expertise
      10.1.6. Research Administration
   10.2. Teaching - Supervision - Juries
      10.2.1. Teaching
      10.2.2. Supervision
      10.2.3. Juries
   10.3. Popularization
11. Bibliography

Creation of the Team: 2016 January 01, updated into Project-Team: 2016 March 01

Keywords:

Computer Science and Digital Science:
A3.4. - Machine learning and statistics
A5.3. - Image processing and analysis
A5.4. - Computer vision
A5.9. - Signal processing
A6.2.6. - Optimization
A8.2. - Optimization
A9.2. - Machine learning
A9.3. - Signal analysis
A9.7. - AI algorithmics

Other Research Topics and Application Domains:
B5.6. - Robotic systems
B8.4. - Security and personal assistance
B8.5. - Smart society
B9.4.1. - Computer science
B9.4.5. - Data science

1. Personnel

Research Scientists
Cordelia Schmid [Team leader, Inria, Senior Researcher, HDR]
Karteek Alahari [Inria, Researcher]
Julien Mairal [Inria, Researcher, “en détachement du corps des mines”, HDR]
Grégory Rogez [Inria, Starting Research Position]
Jakob Verbeek [Inria, Senior Researcher, HDR]

Post-Doctoral Fellow
Henrique Morimitsu [Inria]

PhD Students
Alberto Bietti [Univ. Grenoble Alpes, funded by MSR-Inria joint centre, from Sep 2016 to Sep 2019]
Dexiong Chen [Univ. Grenoble-Alpes, from Sep 2017 to Sep 2020]
Nicolas Chesneau [Univ. Grenoble-Alpes, funded by ERC Allegro]
Thomas Dias-Alves [Univ. Grenoble-Alpes, from Sep 2014 until Sep 2017]
Mikita Dvornik [Univ. Grenoble-Alpes, funded by ERC Allegro and ANR Macaron, from Sep 2016 to Sep 2019]
Maha Elbayad [Univ. Grenoble Alpes, funded by Persyval DeCoRe project, from Oct 2016 until Sep 2019]
Roman Klokov [Inria, from Sep 2017 until Aug 2020]
Andrei Kulunchakov [Inria, funded by ERC Solaris, from Sep 2017 to Sep 2020]
Hongzhou Lin [Univ. Grenoble-Alpes, from Sep 2014 to Dec 2017]
Pauline Luc [Univ. Grenoble, funded by Facebook, from Jan 2016 to Dec 2018]
Thomas Lucas [Univ. Grenoble Alpes, from Feb 2016 to Sep 2019]
Alexander Pashevich [Inria, from Sep 2017]
Alexandre Sablayrolles [Univ. Grenoble, funded by Facebook, from Mar 2017]
Konstantin Shmelkov [Inria]
Vladyslav Sydorov [Inria]
Pavel Tokmakov [Inria]
Nitika Verma [Univ. Grenoble Alpes, from Sep 2017 to Sep 2020]
Daan Wynen [Inria, from Sep 2015 to Sep 2017]

Technical Staff
Ghislain Durif [Inria, from Feb 2017]
Vasiliki Kalogeiton [Inria, until Nov 2017]
Xavier Martin [Inria]

Interns
Dexiong Chen [Inria, from Apr 2017 until Aug 2017]
Vasileios Choutas [Inria, from Apr 2017]
Pau de Jorge Aranda [Inria, from Feb 2017 until Jun 2017]
Andrei Kulunchakov [Inria, from Feb 2017 until Jun 2017]
Erwan Le Roux [Inria, from Feb 2017 until Dec 2017]
Alexander Pashevich [Inria, from Feb 2017 until Jun 2017]
Sergey Rubtsovenko [Inria, from Feb 2017 until Aug 2017]

Administrative Assistant
Nathalie Gillot [Inria]

Visiting Scientists
Francisco Manuel Castro [University of Malaga, Spain, from Mar 2017 until Jun 2017]
Courtney Paquette [University of Washington, Apr 2017]
Juha Ylionas [Aalto University, from Jan 2017 until Mar 2017]
Gunnar Atli Sigurdsson [Carnegie Mellon University, USA, from Jul 2017 until Nov 2017]

External Collaborators
Marco Pedersoli [École de technologie supérieure, Montreal, Canada]
Danila Potapov [until Jan 2017]

2. Overall Objectives

2.1. Overall Objectives

In 2018, nearly 80% of Internet traffic is expected to be video, and it would take an individual over 5 million years to watch the amount of video that will cross global IP networks each month by then. There is thus a pressing, and in fact increasing, demand to annotate and index this visual content for home and professional users alike. The available text and speech-transcript metadata is typically not sufficient by itself to answer most queries, and visual data must come into play. On the other hand, it is not feasible to learn the models of visual content required to answer these queries by manually and precisely annotating every relevant concept, object, scene, or action category in a representative sample of everyday conditions, if only because it may be difficult, or even impossible, to decide a priori what the relevant categories are and what the proper level of granularity is. This suggests reverting to the original metadata as the source of annotation, even though the information it provides is typically sparse (e.g., the location and overall topic of newscasts in a video archive) and noisy (e.g., a movie script may tell us that two people kiss in some scene, but not when, and the kiss may occur off screen or may not have survived the final cut). Yet this weak form of “embedded annotation” is rich and diverse, and mining