NVIDIA GTC Spring

NVIDIA GTC is an event showcasing the latest innovations in AI, graphics, accelerated computing, intelligent networking, and more. GTC offers a wide range of interactive sessions, presentations, demos, podcasts, and training on today's most groundbreaking technologies.

APRIL 12 – 23, 2021
www.nvidia.com/gtc

Registration and attendance are free:
・Registration site: https://www.nvidia.com/ja-jp/gtc/?ncid=partn-38038

Schedule:
・Keynote (JST): April 13, 10:00 (Japanese subtitles)
・Conference & training (JST): April 13–16
※ Japanese-language sessions are also available.

Recommended GTC Sessions – Keynote

Session ID | Title | Speaker | Date | Time (JST)
S33016 | GTC Keynote (APAC Rebroadcast) | Jensen Huang, Founder and CEO, NVIDIA | 4/13/2021 | 10:00 | Japanese subtitles

Recommended GTC Sessions – Japanese-Language Sessions at a Glance (April 14 (Wed) and April 15 (Thu))

10:00 – 11:00
・S32816 | ABCI 2.0: A Major Upgrade of Large-scale Open AI Computing Infrastructure | 産業技術総合研究所
・S32675 | The Day Artificial Intelligence Composes Haiku: For Machines to Gain Intelligence | 北海道大学
・S32672 | Bringing AI Closer with the NTT DOCOMO and EDGEMATRIX Video Edge AI Platform: Deployments of "5G x Intelligent Video Analytics" Begin | 株式会社NTTドコモ / EDGEMATRIX株式会社
・S32815 | Fujitsu Data Center Reference Design: Zinrai Deep Learning System with NetApp | 富士通株式会社

11:00 – 11:40
・S32674 | Commercialization of AI Imitation Learning | 株式会社デンソーウェーブ
・S32727 | Multi-tenancy Data Center Networking with SRv6 | LINE株式会社
・S32819 | Utilization of GPU in the Financial Field | HEROZ株式会社

12:00 – 12:40
・S32367 | Digital Workspaces That Support Working Through the Pandemic: A More Comfortable Telework Environment with vGPU-Enabled VDI | 株式会社NTTデータ
・S32677 | GPU-Accelerated AI Model Creation and Analysis: Using Elasticsearch and Azure AI | Elastic
・S32728 | Create a Virtual Custom Guitar Using Ray Tracing | Daisuke Sakamoto
・S32812 | The Exponential Evolution of Large-Scale Genome Data Analysis Platforms Enabled by GPUs | 東京大学医科学研究所

13:00 – 13:40
・S32813 | FUJIFILM's Medical DX Strategy for the Future: Next-Generation Diagnostic Workflows with the "REiLI" AI Technology | 富士フイルム株式会社
・S32817 | RaNNC: Distributed Deep Learning Middleware for Extremely Large-Scale Neural Network | 情報通信研究機構
・S31217 | AI Platform with Kubernetes and GPU in Private Cloud | ヤフー株式会社
・S33204 | Autonomous Driving Mobility Services Using Jetson | WHILL株式会社

※ Sessions highlighted in light green in the original schedule have already been published in the session catalog.

Recommended GTC Sessions – Deep Learning

Session ID | Title | Speaker | Date | Time (JST)
S31265 | Beyond RGB: Transfer Image Knowledge to Multimodal Learning | Lin Gu, Research Scientist, RIKEN AIP, Japan | 4/13/2021 | 8:00 | Japanese
S32816 | ABCI 2.0: A Major Upgrade of Large-scale Open AI Computing Infrastructure | 小川 宏高, 産業技術総合研究所 | 4/14/2021 | 10:00 | Japanese
S32817 | RaNNC: Distributed Deep Learning Middleware for Extremely Large-Scale Neural Network | 田仲 正弘, Senior Researcher, 情報通信研究機構 | 4/14/2021 | 13:00 | Japanese
S32815 | Fujitsu Data Center Reference Design: Zinrai Deep Learning System with NetApp | 木内 一慶, Senior Director, Hyperscale Div., Infrastructure System Business Unit, 富士通株式会社 | 4/15/2021 | 10:00 | Japanese
S32819 | Utilization of GPU in the Financial Field | 井口 圭一, Director, CTO & Head of Development, HEROZ株式会社 | 4/15/2021 | 11:00 | Japanese

Recommended GTC Sessions – Higher Education and Research

Session ID | Title | Speaker | Date | Time (JST)
S32092 | A Future with Self-Driving Vehicles | Raquel Urtasun, Professor, University of Toronto | 4/13/2021 | 3:00
S32760 | Human-Inspired Inductive Biases for Causal Reasoning and Out-of-Distribution Generalization | Yoshua Bengio, Full Professor, University of Montreal; Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute | 4/13/2021 | 4:00
S31858 | Convergence of AI and HPC to Solve Grand Challenge Science Problems | Tom Gibbs, Manager, Developer Relations, NVIDIA | 4/13/2021 | 5:00
S32110 | Using Molecular Simulations to Help Guide Pharmaceutical Drug Discovery | David Mobley, Professor, University of California, Irvine | 4/13/2021 | 8:00
S31905 | Deep Learning Warm-Starts Grasp-Optimized Motion Planning | Jeff Ichnowski, Postdoctoral Researcher, University of California, Berkeley | 4/14/2021 | 2:00
S31685 | Preparing Students for a Career in Media and Entertainment with Remote Workflows, AI, and Virtual Production | Andrew Cook, Senior Manager, Deep Learning Institute, NVIDIA | 4/15/2021 | 1:00
S33090 | Insights from NVIDIA Research | Bill Dally, Chief Scientist & SVP Research, NVIDIA | 4/16/2021 | 2:00
S32258 | Accelerate Your Journey to AI through the Experience of the University of Florida | Cheryl Martin, Director, Higher Education and Research, NVIDIA | 4/16/2021 | 2:00
S31587 | Full-Time Interactive Ray Tracing for Scientific Visualization: The VMD Experience | John Stone, Senior Research Programmer, University of Illinois at Urbana-Champaign | 4/16/2021 | 3:00

Recommended GTC Sessions – AI for Enterprise

Session ID | Title | Speaker | Date | Time (JST)
S31900 | A Vision for the Next Decade of Computing | Rene Haas, President, IP Products Group, Arm | 4/13/2021 | 2:00
S31938 | AI Implementation at Scale: Lessons from the Front Lines | Ashok Srivastava, Senior Vice President and Chief Data Officer, Intuit | 4/13/2021 | 3:00
S32277 | Enterprise AI: Enabling the Next Wave of Digital Transformation | Miroslav Hodak, Global AI Architect, Lenovo | 4/13/2021 | 4:00
S32743 | The Next Step for AI Nations: Developing a National AI Compute Plan | Keith Strier, VP Worldwide AI Initiatives, NVIDIA | 4/13/2021 | 8:00
S31959 | AI is the Future of Entertainment | Richard Butler, CEO, BEN | 4/13/2021 | 8:00
S32278 | Architecting the Secure Accelerated Data Center of the Future | Michael Kagan, CTO, NVIDIA | 4/13/2021 | 22:00
S32605 | The AI-Powered Evolution of Agriculture | Ozzy Johnson, SE Lead, Developer Program, NVIDIA | 4/14/2021 | 1:00
S32675 | The Day Artificial Intelligence Composes Haiku: For Machines to Gain Intelligence | 川村 秀憲, Professor, 北海道大学大学院情報科学研究院 | 4/14/2021 | 10:00 | Japanese
S32013 | Top AI Use Cases Implemented by the Most Innovative Retailers | Chris Walton, Founder and Editor-in-Chief, Omni Talk | 4/15/2021 | 23:00

Recommended GTC Sessions – Manufacturing

Session ID | Title | Speaker | Date | Time (JST)
S32207 | The Future of Advertising | Perry Nightingale, SVP, Creative AI, WPP | 4/13/2021 | 8:00
S31681 | Physics-Informed Neural Network for Fluid-Dynamics Simulation and Design | Biswadip Dey, Research Scientist, Siemens | 4/13/2021 | 8:00
S31832 | High-Fidelity Digital Twin Visualization for Large-Scale Infrastructure Projects | David Burdick, Senior Product Manager, Bentley Systems | 4/13/2021 | 8:00
S31367 | A Simulation-First Approach in Leveraging Collaborative Robots for Inspecting BMW Vehicles | Anderson Vankayala, Machine Learning Engineer, BMW Innovation and Research, Silicon Valley | 4/13/2021 | 23:00
S32674 | Commercialization of AI Imitation Learning | 澤田 洋祐, FA & Robot Business Unit, Solution Business Promotion Department, 株式会社デンソーウェーブ | 4/14/2021 | 11:00 | Japanese
S31226 | Leveraging GPU Acceleration in Design to Manufacturing Workflow: Topology Optimization for 3D Print Production | Aram Goganian, Co-Owner / CEO, Predator Cycling | 4/15/2021 | 5:00
S32398 | BMW's Approach to a Holistic Digital Twin Using NVIDIA's Omniverse | Sebastian Schwarz, General Manager, NetAllied Systems | 4/15/2021 | 23:00

Recommended GTC Sessions – Data Science

Session ID | Title | Speaker | Date | Time (JST)
S32220 | How Walmart Improves Computationally Intensive Business Processes with NVIDIA GPU Computing | John Bowman, Director, Data Science, Walmart; Richard Ulrich, Senior Director II, Wal-Mart Stores, Inc. - USA | 4/13/2021 | 3:00
S31301 | Chemicals Informatics: Software to Streamline New Product Research in the Chemical Industry | Takashi Isobe, Director of Software/AI Development, Hitachi High Tech America | 4/13/2021 | 8:00 | Japanese
S32677 | GPU-Accelerated AI Model Creation and Analysis: Using Elasticsearch and Azure AI | 鈴木 章太郎, Technical Product Marketing Manager/Evangelist, Elastic | 4/14/2021 | 12:00 | Japanese
SS32979 | Transforming Experiences of the Past Through the Power of AI and Z by HP Data Science Workstations (Presented by Z by HP) | Denis Shiryaev, CEO and Product Director, Neural Love; Andrew Kemp, Data Science / AI Segment Manager, HP | 4/15/2021 | 1:00
S32147 | Data Science Stack 3.0: Jumpstarting Data Science Workflows on NVIDIA Data Science Workstations | Dima Rekesh, Data Science Architect, NVIDIA; Karan Jhavar, Senior Product Manager, NVIDIA | 4/15/2021 | 3:00
S31604 | Fighting Fraud with One App in Many Ways: GPU-Accelerated End-to-End MLOps on Kubernetes | Sophie Watson, Principal Data Scientist, Red Hat; William Benton, Principal Product Architect, NVIDIA | 4/16/2021 | 2:00
S32457 | NASA and NVIDIA Collaborate to Accelerate Data Science and Scientific Use Cases | Zahra Ronaghi, System Software Manager, NVIDIA | 4/22/2021 | 2:00

Recommended GTC Sessions – Medical Imaging

Session ID | Title | Speaker | Date | Time (JST)
S31385 | Ensemble Learning Models Provide Expert-Level Prenatal Detection of Complex Heart Birth Defects | Rima Arnaout, Assistant Professor, University of California, San Francisco | 4/13/2021 | 6:00
SS33101 | Infrastructure in Finding Cures and Saving Children with St. Jude Children's Research Hospital (Presented by Mark III Systems) | Zhaohua Lu, Assistant Member, St. Jude Faculty, St. Jude Children's Research Hospital | 4/13/2021 | 8:00
S31930 | AI in Medical Imaging and Smart Medical Devices | Terrence Chen, CEO of U.S., United Imaging Intelligence | 4/13/2021 | 8:00
S32014 | Developing Robust Medical Imaging AI Applications: Federated Learning and Other Approaches | Daniel Rubin, Professor, Stanford University | 4/13/2021 | 8:00
S31826 | A Comprehensive Use Case of Clinical Deployment of AI Models in Hospitals | Behrooz Hashemian, Machine Learning Lead, Mass General Brigham; Jiahui Guan, Data Scientist, NVIDIA | 4/13/2021 | 22:00
S32813 | FUJIFILM's Medical DX Strategy for the Future: Next-Generation Diagnostic Workflows with the "REiLI" AI Technology | 富士フイルム株式会社 | (remaining details are truncated in the source)