SN-Engine, Geometric Modelling Environment for Virtual World Design and Perception
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
SeamScape: A Procedural Generation System for Accelerated Creation of 3D Landscapes
SeamScape
A procedural generation system for accelerated creation of 3D landscapes
Bachelor Thesis in Computer Science and Engineering
Link to video demonstration: youtu.be/K5yaTmksIOM
Sebastian Ekman, Anders Hansson, Thomas Högberg, Anna Nylander, Alma Ottedag, Joakim Thorén
Chalmers University of Technology / University of Gothenburg, Department of Computer Science and Engineering, Gothenburg, Sweden, June 2016

The Authors grant Chalmers University of Technology and the University of Gothenburg the non-exclusive right to publish the Work electronically and, for a non-commercial purpose, make it accessible on the Internet. The Author warrants that he/she is the author of the Work, and warrants that the Work does not contain text, pictures or other material that violates copyright law. The Author shall, when transferring the rights of the Work to a third party (for example a publisher or a company), inform the third party of this agreement. If the Author has signed a copyright agreement with a third party regarding the Work, the Author warrants hereby that he/she has obtained any necessary permission from this third party to let Chalmers University of Technology and the University of Gothenburg store the Work electronically and make it accessible on the Internet.

© Sebastian Ekman, 2016. © Anders Hansson, 2016. © Thomas Högberg, 2016. © Anna Nylander, 2016. © Alma Ottedag, 2016.
Model Optimization for Deep Space Exploration Via Simulators and Deep Learning
James Bird, Kellan Colburn, Linda Petzold, Philip Lubin

Abstract
Machine learning, and eventually true artificial intelligence techniques, are extremely important advancements in astrophysics and astronomy. We explore the application of deep learning using neural networks in order to automate the detection of astronomical bodies for future exploration missions, such as missions to search for signatures or suitability of life. The ability to acquire images, analyze them, and send back those that are important, as determined by the deep learning algorithm, is critical in bandwidth-limited applications. Our previous foundational work solidified the concept of using simulator images and deep learning in order to detect planets. Optimization of this process is of vital importance, as even a small loss in accuracy might be the difference between capturing and completely missing a possibly habitable nearby planet. Through computer vision, deep learning, and simulators we introduce methods that optimize the detection of exoplanets. We show that maximum achieved accuracy can hit above 98% for multiple model architectures, even with a relatively small training set.

Keywords: computer vision, simulator, deep learning, space, universe, exoplanet, object detection, novelty detection, neural network, machine learning, transfer learning, interstellar travel, planet detection

1. Introduction
This paper will address some of the challenges and possibilities of exoplanet
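The abstract above pairs simulator-rendered images with transfer learning. As a rough illustration only (this is not the authors' code; the dataset path, class names, and the choice of a ResNet-18 backbone are assumptions, and torchvision >= 0.13 is assumed for the string weights argument), a minimal fine-tuning setup might look like this:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical dataset layout: simulated_images/planet/*.png, simulated_images/no_planet/*.png
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("simulated_images", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the final layer.
model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: planet / no planet

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and retraining only the final layer is one common way a relatively small simulated training set can still reach high accuracy, in the spirit of the results the paper reports for several architectures.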
The Uses of Animation
ANIMATION
Animation is the process of creating the illusion of motion and change by means of the rapid display of a sequence of static images that minimally differ from each other. The illusion, as in motion pictures in general, is thought to rely on the phi phenomenon. Animators are artists who specialize in the creation of animation. Animation can be recorded with either analogue media (a flip book, motion picture film, video tape) or digital media, including formats such as animated GIF, Flash animation and digital video. To display animation, a digital camera, computer, or projector is used, along with new technologies as they are developed. Animation creation methods include the traditional animation method and those involving stop motion animation of two- and three-dimensional objects, such as paper cutouts, puppets and clay figures. Images are displayed in rapid succession, usually at 24, 25, 30, or 60 frames per second.

THE MOST COMMON USES OF ANIMATION
Cartoons
The most common use of animation, and perhaps the origin of it, is cartoons. Cartoons appear all the time on television and in the cinema and can be used for entertainment, advertising, presentations and many more applications that are limited only by the imagination of the designer. The most important factor in making cartoons on a computer is reusability and flexibility: the system that actually performs the animation needs to be such that all the actions to be performed can be repeated easily, without much fuss on the part of the animator.
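As a trivial sketch of the mechanism described above (static images shown in rapid succession at a fixed rate), the loop below paces playback at 24 frames per second; the frame list and the print-based "display" are stand-ins for real images and a real renderer:

```python
import time

def play(frames, fps=24, show=print):
    frame_time = 1.0 / fps           # e.g. ~0.0417 s per frame at 24 fps
    for frame in frames:
        start = time.perf_counter()
        show(frame)                  # display the current static image
        elapsed = time.perf_counter() - start
        time.sleep(max(0.0, frame_time - elapsed))  # hold until the next frame is due

play([f"frame {i}" for i in range(5)], fps=24)
```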
Game Engine Review
Mr. Stuart Armstrong
12565 Research Parkway, Suite 350, Orlando FL, 32826, USA
[email protected]

ABSTRACT
There has been a significant amount of interest in the use of Commercial Off The Shelf products to support military training and education. This paper compares a number of current game engines available on the market and assesses them against potential military simulation criteria.

1.0 GAMES IN DEFENSE
"A game is a system in which players engage in an artificial conflict, defined by rules, which results in a quantifiable outcome." The use of games for defence simulation can be broadly split into two categories: the use of game technologies to provide an immersive and flexible military training system, and a "serious game" that uses game design principles (such as narrative and scoring) to deliver education and training content in a novel way. This talk and the subsequent education notes focus on the use of game technologies, in particular games engines, to support military training.

2.0 INTRODUCTION TO GAMES ENGINES
"A games engine is a software suite designed to facilitate the production of computer games." Developers use games engines to create games for games consoles, personal computers and, increasingly, mobile devices. Games engines provide a flexible and reusable development toolkit with all the core functionality required to produce a game quickly and efficiently. Multiple games can be produced from the same games engine; for example, Counter-Strike: Source, Half-Life 2 and Left 4 Dead 2 are all created using the Source engine. Equally, once created, the game source code can, with little if any modification, be ported to different gaming platforms such as a PlayStation, personal computer or Wii.
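To illustrate the reusability point above, here is a toy sketch (in Python, not the API of any engine named in the review): the generic loop lives in an Engine class, while each game supplies only its own update and render logic, so the same engine code can drive many different games.

```python
import time

class Engine:
    """Generic, reusable part: owns the fixed-step game loop."""
    def __init__(self, game, fps=60):
        self.game, self.frame_time = game, 1.0 / fps

    def run(self, frames=3):
        for _ in range(frames):          # a real engine loops until quit
            self.game.update(self.frame_time)
            self.game.render()
            time.sleep(self.frame_time)

class PongClone:
    """Game-specific part: only the rules and drawing differ between games."""
    def __init__(self): self.ball_x = 0.0
    def update(self, dt): self.ball_x += 100 * dt
    def render(self): print(f"ball at x={self.ball_x:.1f}")

Engine(PongClone()).run()   # a different game class reuses the same Engine unchanged
```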
Journal of the International Planetarium Society, Vol. 47, No. 2, June 2018: A New Star Appears in Europe (Page 14)
Online PDF: ISSN 2333-9063
Vol. 47, No. 2, June 2018
Journal of the International Planetarium Society
A New Star Appears in Europe, Page 14

Reach for the stars... and beyond. ZEISS powerdome IV // INSPIRATION MADE BY ZEISS. True Hybrid with brilliant stars and perfect renderings from a single source. ZEISS powerdome IV brings many new features to your star theater: an integrated planetarium for earthbound and extraterrestrial astronomy with seamless transitions between optical and digital star fields (True Hybrid) | The universe from Earth via the solar system and Milky Way galaxy to the very edge of observable space | Stereo projection | 8K performance | 10-bit color depth for smooth gradients | HEVC codec for efficient video renderings free of artifacts | All constellation figures, individually and in groups, without any mutual overlapping | Telescope function for deep-sky imagery applying Astronomy Visualization Metadata | Complete image set of all Messier objects | Customizable polar lights, comets with gas and dust tails, and shooting stars with a great variety of parameters for location, brightness, colors and appearance | Simulation of day and night with dusk and dawn coloring of sky and panorama images | Customizable weather effects such as clouds, rain, fog, snow, rainbows, halos, air and light pollution effects | Digital rights management to secure your productions | Remote service for quick help, and much more from the only company serving planetariums for nearly a century. www.zeiss.com/planetariums
Digital Matte Painting Reel Breakdown (Brian LaFrance)
Digital Matte Painting Reel Breakdown
Brian LaFrance
Run Time: 2 Minutes
949-302-2085, [email protected]

Big Hero 6: Baymax and Hiro Flying Sequence
Description: Lead Set Extension Artist. Helped develop sky pano from source HDRs, which fed the lighting dept. 360-degree seaming of ocean/sky horizon, land, atmosphere blending. Painted East Bay city. Made 3D fog volumes in Houdini, rendered with scene lighting for reference, which informed the painting of multiple fog layers; these were blended into the scene using zdepth "slices" for holdouts, integrating the fog into the landscape.
Software Used: Photoshop, Maya, Nuke, Houdini, Terragen, Hyperion (Disney proprietary rendering software)

Big Hero 6: Bridge
Description: Painted sky, ground fog slices and lights, projected in Nuke.
Software Used: Photoshop, Maya, Nuke, Terragen, Hyperion

Big Hero 6: City
Description: Painted sky and moving ground fog clouds. Clouds integrated into the digital set using zdepth "slices" for holdouts, integrating fog into the landscape.
Software Used: Photoshop, Maya, Nuke

R.I.P.D.: City Shots
Description: Blocked out city compositions with simple geometry, projected texture onto that geometry.
Software Used: Photoshop, Rampage (Rhythm and Hues proprietary projection software)

The Seventh Son: Multiple Shots
Description: Modeled simple geometry, sculpted in ZBrush for the balcony shot, textured/lit/rendered in Mental Ray, painted over in Photoshop, projected onto modeled or simplified geometry in Rampage.
Software Used: Photoshop, Maya, Mental Ray, ZBrush, Rampage

Elysium: Earth
Description: Provided a Terragen "planet rig" to Image Engine for them to render views of Earth, as well as a large render of the whole Earth to be used as source for matte painting(s).
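The breakdown above repeatedly mentions blending fog layers into a shot using z-depth "slices" as holdouts. A minimal NumPy sketch of that idea (assuming float images in [0, 1], a per-pixel depth buffer in scene units, and made-up near/far values; this is not the studio compositing setup):

```python
import numpy as np

def composite_fog_slice(render, fog_rgba, zdepth, near, far, softness=5.0):
    """Blend a painted fog layer into `render` only where near <= zdepth <= far."""
    # Soft holdout mask: 1 inside the depth slice, fading out over `softness` units.
    mask = np.clip((zdepth - near) / softness, 0.0, 1.0) * \
           np.clip((far - zdepth) / softness, 0.0, 1.0)
    alpha = fog_rgba[..., 3:] * mask[..., None]           # fog alpha limited to the slice
    return render * (1.0 - alpha) + fog_rgba[..., :3] * alpha

# Example: a 1080p render, a flat fog plate, and a ramp depth pass; fog kept between 100 and 250 units.
h, w = 1080, 1920
render = np.zeros((h, w, 3))
fog = np.full((h, w, 4), 0.5)
zdepth = np.linspace(1.0, 500.0, w)[None, :].repeat(h, axis=0)
out = composite_fog_slice(render, fog, zdepth, near=100.0, far=250.0)
```

Stacking several such slices front to back is what lets a painted 2D fog element sit correctly inside 3D terrain.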
Comparison of Spherical Cube Map Projections Used in Planet-Sized Terrain Rendering
FACTA UNIVERSITATIS (NIŠ), Ser. Math. Inform. Vol. 31, No 2 (2016), 259-297

COMPARISON OF SPHERICAL CUBE MAP PROJECTIONS USED IN PLANET-SIZED TERRAIN RENDERING
Aleksandar M. Dimitrijević, Martin Lambers and Dejan D. Rančić

Abstract. A wide variety of projections from a planet surface to a two-dimensional map are known, and the correct choice of a particular projection for a given application area depends on many factors. In the computer graphics domain, in particular in the field of planet rendering systems, the importance of that choice has been neglected so far and inadequate criteria have been used to select a projection. In this paper, we derive evaluation criteria, based on texture distortion, suitable for this application domain, and apply them to a comprehensive list of spherical cube map projections to demonstrate their properties.
Keywords: Map projection, spherical cube, distortion, texturing, graphics

1. Introduction
Map projections have been used for centuries to represent the curved surface of the Earth with a two-dimensional map. A wide variety of map projections have been proposed, each with different properties. Of particular interest are scale variations and angular distortions introduced by map projections: since the spheroidal surface is not developable, a projection onto a plane cannot be both conformal (angle-preserving) and equal-area (constant-scale) at the same time. These two properties are usually analyzed using Tissot's indicatrix. An overview of map projections and an introduction to Tissot's indicatrix are given by Snyder [24]. In computer graphics, a map projection is a central part of systems that render planets or similar celestial bodies: the surface properties (photos, digital elevation models, radar imagery, thermal measurements, etc.) are stored in a map hierarchy in different resolutions.
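As a small illustration of what such a projection does (this is only the basic gnomonic cube-to-sphere mapping that spherical cube map schemes build on, not the paper's evaluation code), face coordinates are projected radially onto the unit sphere, and a finite-difference estimate hints at the area distortion that the paper's criteria quantify rigorously:

```python
import numpy as np

def gnomonic_to_sphere(u, v):
    """Map (u, v) in [-1, 1] on the +Z cube face to a point on the unit sphere."""
    p = np.array([u, v, 1.0])
    return p / np.linalg.norm(p)          # radial projection onto the sphere

def area_scale(u, v, eps=1e-4):
    """Ratio of spherical patch area to cube-face patch area around (u, v)."""
    du = gnomonic_to_sphere(u + eps, v) - gnomonic_to_sphere(u, v)
    dv = gnomonic_to_sphere(u, v + eps) - gnomonic_to_sphere(u, v)
    return np.linalg.norm(np.cross(du, dv)) / (eps * eps)

print(area_scale(0.0, 0.0))   # ~1.0 at the face centre
print(area_scale(1.0, 1.0))   # roughly 0.19 near the corner: texels are compressed there
```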
Maaston Mallintaminen Visualisointikäyttöön (Terrain Modelling for Visualization Use)
TERRAIN MODELLING FOR VISUALIZATION USE
Lahti University of Applied Sciences (Lahden ammattikorkeakoulu), Faculty of Technology, Degree Programme in Media Technology, Specialization in Technical Visualization. Bachelor's Thesis, Spring 2012, Ilona Moilanen.

MOILANEN, ILONA: Terrain modelling for visualization use. Bachelor's thesis in Technical Visualization, 29 pages, Spring 2012.

ABSTRACT
Terrain models are widely used in the game and film industries as well as in architectural visualization. The use of modelled 3D terrains has grown as computers have become more powerful. The thesis surveys the terrain modelling programs that are available and examines a selection of them in more detail, including the strengths and weaknesses of the selected programs. The programs examined in detail are Terragen and 3ds Max. For 3ds Max, the thesis covers creating terrain from a height map with the Displace modifier, and how terrain can be imported from Google Earth into Autodesk products such as 3ds Max with the help of Google SketchUp. Finally, the strengths and weaknesses of the programs are reviewed. In the case study, a terrain is modelled both in Terragen and in 3ds Max using a height map, and the two workflows are compared to see which produces the better result. The terrain was modelled with the selected programs and the available terrain modelling programs were reviewed. The conclusion was that, to achieve a photorealistic result, Terragen
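As a conceptual sketch of the height-map workflow the thesis describes (this mimics what the 3ds Max Displace modifier does, it is not 3ds Max code; the file name heightmap.png and the strength value are assumptions):

```python
import numpy as np
from PIL import Image

# Load a grayscale height map; pixel values become relative elevations in [0, 1].
heightmap = np.asarray(Image.open("heightmap.png").convert("L")) / 255.0
rows, cols = heightmap.shape
max_height = 50.0  # world units, comparable to the modifier's Strength parameter

# Build a flat grid of vertices and push each one up by the sampled height.
xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
vertices = np.stack([xs, ys, heightmap * max_height], axis=-1).reshape(-1, 3)
print(vertices.shape)  # (rows * cols, 3) displaced grid points
```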
Nitin Singh - Senior CG Generalist
Nitin Singh, Senior CG Generalist
Email: [email protected]
Montreal, Canada
Website: www.NitinSingh.net

HONORS & AWARDS
* VISUAL EFFECTS SOCIETY AWARDS (VES) 2014 (Outstanding Created Environment in a Commercial or Broadcast Program) for Game of Thrones (Project Lead), "The Climb".
* PRIMETIME EMMY AWARDS 2013 (as Model and Texture Lead) for Game of Thrones, "Valar Dohaeris" (Season 03).

EXPERIENCE
Environment TD at Framestore, Montreal (Feb. 05, 2018 - June 09, 2018)
Projects: The Aeronauts, Captain Marvel.
* Procedural texturing and lookdev for full CG environments.
* Developing custom calisthenics shaders for procedural environment texturing and look development.
* Making clouds procedurally in Houdini; layout, lookdev, and rendering of assets/shots in Framestore's proprietary rendering engine.
Software used: Framestore's custom texturing and lighting tools, Maya, Arnold, Terragen 4.

Environment Pipeline TD at Method Studios (Iloura), Melbourne (Feb. 05, 2018 - June 09, 2018)
Projects: Tomb Raider, Aquaman.
* Developing custom pipeline tools for the layout and environment departments using Python and PyQt4.
* Modeling and texturing full CG environments with Substance Designer and ZBrush.
* Texturing high-resolution, photo-real textures for CG environments and assets.
Software used: Maya, World Machine, Mari, ZBrush, Mudbox, Nuke, Vray 3.0, Photoshop,
Development of a Real-Time Solution for an Interactive VR Representation of Large Star Catalogues (Candidate: Paolo Gallo)
POLITECNICO DI TORINO
Master's Degree in Computer Engineering, Master's Degree Thesis
Development of a real-time solution for an interactive VR representation of large star catalogues
Supervisor: Prof. Andrea SANNA. Candidate: Paolo GALLO. April 2021.

Abstract
This thesis takes place in the context of Virtual Reality and Data Visualization techniques applied to large astronomical datasets. The goal of this work is to improve and extend the already existing Astra Data Navigator application, built with the Unity game engine, and make it capable of loading and displaying large star catalogues in a realistic and real-time 3D environment. The software was built in the VR laboratory of ALTEC (Aerospace Logistics Technology Engineering Company) while interfacing with other European projects such as NEANIAS and ESA's Gaia mission, which is particularly relevant to this work. The catalogue of celestial objects observed by the Gaia astrometric satellite is the largest collection of stars available to date, counting over 1.8 billion entries; being able to navigate and interact with this data in 3D would be extremely useful for both scientific and educational purposes, but most VR tools are limited to a much smaller object count and cannot be extended further. On the data management side, the application (which has an integrated star catalogue but also supports external data sources in the form of CSV files or SQL databases) offers the choice between two different modes: the user can choose to load all of the available data at startup and store it in system memory, which requires more resources and increased loading time but then provides seamless navigation of the 3D environment, or opt for a dynamic loading solution (better suited for large catalogues) that only selects relevant data based on the current observer position, saving a lot of resources but introducing additional loading times that interrupt the navigation experience.
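The dynamic loading mode described above only pulls in catalogue entries near the current observer. A simplified Python sketch of that selection step (the thesis implementation is in Unity/C# against CSV/SQL sources; the random positions, parsec units, and query radius here are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical catalogue: one million stars with 3D positions in parsecs.
positions = np.random.uniform(-1000, 1000, size=(1_000_000, 3))
tree = cKDTree(positions)  # spatial index built once, queried every time the observer moves

def stars_near(observer, radius_pc=50.0):
    """Return indices of the catalogue entries to load for the current observer position."""
    return tree.query_ball_point(observer, r=radius_pc)

visible = stars_near(np.array([0.0, 0.0, 0.0]))
print(len(visible), "stars selected for loading")
```

The trade-off the abstract mentions follows directly: only a small neighbourhood is resident in memory at any time, at the cost of re-querying (and reloading) whenever the observer moves far enough.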
Simulated Imagery Rendering Workflow for UAS-Based Photogrammetric 3D Reconstruction Accuracy Assessments
Remote Sensing, Article
Simulated Imagery Rendering Workflow for UAS-Based Photogrammetric 3D Reconstruction Accuracy Assessments
Richard K. Slocum * and Christopher E. Parrish
School of Civil and Construction Engineering, Oregon State University, 101 Kearney Hall, 1491 SW Campus Way, Corvallis, OR 97331, USA; [email protected]
* Correspondence: [email protected]; Tel.: +1-703-973-1983
Academic Editors: Gonzalo Pajares Martinsanz, Xiaofeng Li and Prasad S. Thenkabail
Received: 13 March 2017; Accepted: 19 April 2017; Published: 22 April 2017

Abstract: Structure from motion (SfM) and MultiView Stereo (MVS) algorithms are increasingly being applied to imagery from unmanned aircraft systems (UAS) to generate point cloud data for various surveying and mapping applications. To date, the options for assessing the spatial accuracy of the SfM-MVS point clouds have primarily been limited to empirical accuracy assessments, which involve comparisons against reference data sets, which are both independent and of higher accuracy than the data they are being used to test. The acquisition of these reference data sets can be expensive, time consuming, and logistically challenging. Furthermore, these experiments are also almost always unable to be perfectly replicated and can contain numerous confounding variables, such as sun angle, cloud cover, wind, movement of objects in the scene, and camera thermal noise, to name a few. The combination of these factors leads to a situation in which robust, repeatable experiments are cost prohibitive, and the experiment results are frequently site-specific and condition-specific. Here, we present a workflow to render computer generated imagery using a virtual environment which can mimic the independent variables that would be experienced in a real-world UAS imagery acquisition scenario.
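Because the virtual scene geometry is known exactly, simulated imagery lets reconstruction error be measured directly instead of against surveyed reference data. A hedged sketch of that kind of check (the file names are placeholders, and nearest-neighbour distance is just one possible error metric, not necessarily the one used in the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

ground_truth = np.loadtxt("virtual_scene_points.txt")   # exact geometry exported from the renderer
reconstructed = np.loadtxt("sfm_mvs_points.txt")        # output of the SfM-MVS pipeline

# Nearest-neighbour distance from each reconstructed point to the known surface samples.
dists, _ = cKDTree(ground_truth).query(reconstructed)
print("RMSE:", np.sqrt(np.mean(dists ** 2)))
print("95th percentile error:", np.percentile(dists, 95))
```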
Cloud-Based Visual Discovery in Astronomy: Novel EOSC Services for Exploring Large Scale Catalogues
Novel EOSC services for Emerging Atmosphere, Underwater and Space Challenges, April 2021

Modern radioastronomy is currently undergoing a revolution, e.g. with a new generation of interferometers such as the game-changing Square Kilometre Array (SKA) [1], an international collaboration involving over 100 scientific and technological partners from 20 countries; upon its completion this will be the largest radio interferometer on Earth. Such instrumentation infrastructures spearhead the next generation of all-sky surveys towards a brand-new paradigm, as multiwavelength all-sky surveys are expected to replace traditionally executed single-object observations, enabling novel statistical studies and thus revealing previously unknown patterns and trends [2]. The scientific community is facing an emerging flurry of enormously large, incredibly rich and highly complex data volumes; e.g. when fully operational, SKA is expected to generate data rates largely exceeding the global Internet traffic. The associated science-ready data products will pose extremely challenging technological demands on traditional approaches, e.g. analysis and imaging software tools will have to be adapted at various levels, if not completely re-designed from scratch, so as to run efficiently at scale with satisfactory performance. An earlier blog article [3] discussed recent technological advances in the game engine and virtual reality fields relating to NEANIAS [4], which is exploiting new tools and approaches to prototype services for agile, fit-for-purpose and sustainable offerings accessible through the European Open Science Cloud (EOSC) [5] to handle emerging, next generation radioastronomy data. The NEANIAS S1 services [6] are delivering optimised cloud-based visual discovery for multiwavelength astrophysics datasets, ensuring scalability and exploiting innovative interfaces through virtual reality.