Performance Assessment of Three Rendering Engines in 3D Computer Graphics Software

Acta Graphica 25(2014)3–4, 101–114

Authors: Žan Vidmar, Aleš Hladnik, Helena Gabrijelčič Tomc*
University of Ljubljana, Faculty of Natural Sciences and Engineering, Slovenia
*E-mail: [email protected]

Abstract: The aim of the research was the determination of testing conditions and the visual and numerical evaluation of renderings made with three different rendering engines in Maya software, which is widely used for educational and computer art purposes. In the theoretical part an overview of light phenomena and their simulation in virtual space is presented. This is followed by a detailed presentation of the main rendering methods and of the results and limitations of their application to 3D objects. At the end of the theoretical part the importance of a proper testing scene, and especially the role of the Cornell box, is explained. In the experimental part the terms and conditions as well as the hardware and software used for the research are presented. This is followed by a description of the procedures, where we focused on rendering quality and time, which enabled the comparison of the settings of the different render engines and the determination of conditions for further rendering of the testing scenes. The experimental part continued with the rendering of a variety of simple virtual scenes, including the Cornell box and virtual objects with different materials and colours. Apart from visual evaluation, which was the starting point for the comparison of the renderings, a procedure for the numerical estimation of colour deviations of renderings using selected regions of interest in the final images is presented.

Keywords: 3D Computer Graphics, Rendering Algorithm, Testing Conditions, Colour Difference, Visualisation

1. Introduction

The history of investigations in the field of 3D computer generated graphics shows that great effort has been invested in the development of rendering algorithms, the simulation of light interactions with different materials, the visual perception of 3D surface phenomena, and computer graphics (CG) colour reproduction and visualisation in various applications (Goesele, 2004; Paviotti, 2007; Ruppertsberg, 2007; Xiao, 2008; Menk, 2013; Fleming, 2003). Evidence for this are the numerous scientific publications and the integration of these algorithms into rendering engines, which enable different solutions and levels of photorealism of the rendering outputs (Kajiya, 1980; Jensen, 1996; Lafortune, 1993; Pharr, 2010; Murat, 2009). Countless web forums present experiments of individual enthusiasts and endless discussions about which engine is capable of better results for a specific computer generated scene or situation. On the other hand, the references lack more rigorous approaches, test scenes and criteria for analysis that could help a user to more objectively compare and evaluate final renderings and choose the most suitable rendering engine. The aim of the study was to define a procedure for the evaluation of renderings made with three different rendering engines in Maya software (Autodesk). The problem was analysed from many aspects: usability of the engines, rendering settings, visual evaluation and colourimetric evaluation (Hunt, 2011).

2. Theoretical part

From the beginnings of CG, scientists and engineers have striven to improve results in light phenomena simulation. The basic goal was to consider the physical, optical and energetic properties of light, which should somehow be interpreted in the 3D virtual world (Appel, 1968; Bouknight, 1970; Phong, 1975; Blinn, 1976; Whitted, 1980). The main task, which is still an active field of research nowadays, is to develop a method and combination of algorithms able to render a photorealistic reproduction in an acceptable rendering time. Within a simplified scheme, the incident light can be divided into reflected, absorbed, scattered and transmitted light, the proportions of which are defined by the properties and appearance of the material. The simulations of these light phenomena, which are included in rendering equations, also enable the reproduction of computer generated light (Birn, 2013; Erzetič, 2010; Kočevar, 2013).
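For reference, the rendering equation invoked here is commonly written in the form introduced by Kajiya (1986, cited below for path tracing); the formulation is added for illustration and is not reproduced from the paper:

$$L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i$$

Here $L_o$ is the radiance leaving surface point $\mathbf{x}$ in direction $\omega_o$, $L_e$ is the emitted radiance, $f_r$ is the BRDF, $L_i$ is the radiance arriving from direction $\omega_i$, and $\mathbf{n}$ is the surface normal. The rendering methods described below differ mainly in how they estimate this integral.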
A render engine is a piece of programming code which produces the final image output, or a sequence of images (renderings), of the 3D computer generated world. In technical terms, rendering comprises the calculations performed by the render engine that translate the scene from a mathematical approximation into a 2D image. The techniques that are the most widely used and popular in rendering engines are: ray tracing, path tracing, photon mapping, bidirectional path tracing and unbiased rendering. Apart from these, light tracing, distributed ray tracing, metropolis light transport, stochastic progressive photon mapping (SPPM), beam tracing and cone tracing have also been used frequently (Pharr, 2010).

In ray tracing, rays start their paths in the camera. The visualisation of the virtual scene is generated in the image area, where the ray's path is calculated for each pixel. So-called primary rays or view rays encounter the virtual objects and are reflected (reflective rays) or refracted (refractive rays). When a ray hits an object in an area from which it cannot be reflected directly to the light source, a shadow is formed (shadow ray) (Appel, 1968; Whitted, 1980).

In path tracing, each ray is traced along its path from the camera to the light source. When the light source is found, the total contribution of light reflected by the different materials and other potential virtual contributors along the path is calculated. In addition to the lights, the background is considered a light source as well. Path tracing is suitable for the illumination of exteriors and of scenes with easily accessible light sources (Yafaray, 2009; Krishnamachari, 2004; Csébfalvi, 1997; Kajiya, 1986).
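To make the camera-to-light idea concrete, the following minimal Python sketch estimates a single pixel value by averaging many stochastic paths. The one-sphere scene, the treatment of the background as the only light source (as noted above) and all numeric values are illustrative assumptions, not material from the paper:

```python
import math
import random

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(a):
    l = math.sqrt(dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

# Toy scene: one diffuse sphere, lit only by the background (the text above
# notes that path tracing also treats the background as a light source).
SPHERE_CENTER, SPHERE_RADIUS, ALBEDO = (0.0, 0.0, -3.0), 1.0, 0.7
BACKGROUND_RADIANCE = 1.0

def hit_sphere(origin, direction):
    """Distance to the sphere along `direction`, or None on a miss."""
    oc = tuple(origin[i] - SPHERE_CENTER[i] for i in range(3))
    b = dot(oc, direction)
    disc = b*b - (dot(oc, oc) - SPHERE_RADIUS**2)
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def cosine_sample(normal):
    """Random direction in the hemisphere around `normal`, cosine-weighted,
    so the cosine and pdf terms cancel for a Lambertian surface."""
    r1, r2 = random.random(), random.random()
    phi, r = 2.0 * math.pi * r1, math.sqrt(r2)
    a = (0.0, 1.0, 0.0) if abs(normal[0]) > 0.9 else (1.0, 0.0, 0.0)
    u = normalize(cross(a, normal))
    v = cross(normal, u)
    z = math.sqrt(1.0 - r2)
    return tuple(u[i]*r*math.cos(phi) + v[i]*r*math.sin(phi) + normal[i]*z
                 for i in range(3))

def trace(origin, direction, depth=0):
    """Follow one path until it escapes to the background light source."""
    if depth > 8:                   # fixed cap; Russian roulette would keep it unbiased
        return 0.0
    t = hit_sphere(origin, direction)
    if t is None:
        return BACKGROUND_RADIANCE  # the path has reached the light
    point = tuple(origin[i] + t*direction[i] for i in range(3))
    normal = normalize(tuple(point[i] - SPHERE_CENTER[i] for i in range(3)))
    # Cosine-weighted sampling: only the albedo scales the next bounce.
    return ALBEDO * trace(point, cosine_sample(normal), depth + 1)

def render_pixel(samples=256):
    """Monte Carlo estimate for one (central) pixel: average many paths."""
    camera, view_ray = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
    return sum(trace(camera, view_ray) for _ in range(samples)) / samples

if __name__ == "__main__":
    print("estimated pixel radiance:", render_pixel())
```

In a full renderer the same loop runs for every pixel, the scene contains many primitives and materials, and Russian roulette replaces the fixed depth cap so that the estimate remains unbiased.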
Photon mapping was developed by the Danish researcher Henrik Wann Jensen, who in his thesis presented a process that in the first step generates a photon map and in the second step renders it with the use of ray tracing (Jensen, 2001; Jensen, 2002). As the result of the first step, i.e. the photon map, is a grainy visualisation, additional correction algorithms, such as final gathering, are usually needed. In comparison to ray tracing, photon mapping simulations of real world phenomena are more accurate. This technique treats light as a group of particles (photons) whose calculation starting point is in the light source. Moreover, photon mapping takes into account final gathering, indirect lighting, caustics and dispersion (Yafaray, 2009; Luxrender, 2013; Pharr, 2010; Krishnamachari, 2004).

Bidirectional path tracing is based on the BRDF (Bidirectional Reflectance Distribution Function) and is in practical terms a combination of the positive properties of photon mapping and path tracing. Bidirectional path tracing is a method with simultaneous tracking of rays coming from the light source (the light path), which travel towards the camera, and of rays coming from the camera (the eye path or camera path), which travel towards the light source. After tracking the paths of a high number of rays from both sides, the rays are connected and the image pixels are generated. At the cost of a long rendering time, the quality of the renderings is very high and the visualisations are physically accurate (Lafortune, 1993; Adamsen, 2009; Pajot, 2011; Bogolepov, 2013).

Unbiased rendering is a physically based rendering procedure, defined as a rendering technique that does not introduce any systematic error into the radiance approximation and as a result theoretically enables perfect photorealism. It combines many algorithms, such as path tracing, light tracing, bidirectional path tracing and metropolis light transport. Despite being, in physical terms, an apparently perfect rendering technique, some serious restrictions limit its use (Pharr, 2010; Kouhzadi, 2013; Maxwell, 2014).

The Cornell box was introduced in 1984 at Cornell University, with the purpose of testing the interaction between diffuse surfaces and light and of comparing computer generated renderings with photographic images (Goral, 1984). The referential image was usually a photograph of real physical objects or settings, while the test image was computer generated. In spite of some minor changes (the original box had red and blue walls and no light source), the Cornell box is still a very popular test model, with one light source in the middle of the white ceiling, a green wall on the right, a red wall on the left, and a white back wall and floor. In the scene, 3D objects are positioned in the middle of the box.

3. Experimental part

In the experimental part, three rendering engines in Autodesk Maya 2014 x64 (software for 3D computer animation, modelling, simulation, rendering and compositing) were analysed. The specifications of the hardware and software (Maxwell, 2014; O'Connor, 2010; Sannino, 2013) used in the research are presented in Table 1.

The study consisted of five steps. The first step was the scene setting, i.e. the modelling, texturing and lighting of the Cornell box, which provided the same conditions for all three engines. The second and the third step were the definition of the rendering quality for further testing and correc-
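As an illustration of the scene-setting step, a renderer-agnostic description of the Cornell-box scene from the preceding section might look like the following Python sketch; the colour and emission values are illustrative assumptions, since the paper builds its scene directly in Maya:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Surface:
    """A wall, light or object of the test scene: a diffuse RGB colour in
    [0, 1] and an emitted radiance (zero for everything but the light)."""
    name: str
    rgb: Tuple[float, float, float]
    emission: float = 0.0

# Layout as described above: one light in the middle of the white ceiling,
# a green wall on the right, a red wall on the left, a white back wall and
# floor, and the test object placed in the middle of the box.
CORNELL_BOX = [
    Surface("floor",       (1.0, 1.0, 1.0)),
    Surface("ceiling",     (1.0, 1.0, 1.0)),
    Surface("back wall",   (1.0, 1.0, 1.0)),
    Surface("left wall",   (1.0, 0.0, 0.0)),                 # red
    Surface("right wall",  (0.0, 1.0, 0.0)),                 # green
    Surface("area light",  (1.0, 1.0, 1.0), emission=15.0),  # ceiling centre
    Surface("test object", (0.8, 0.8, 0.8)),  # material/colour varied per test
]

def describe(scene):
    """Print the scene, e.g. as a checklist before recreating it per engine."""
    for s in scene:
        role = "light" if s.emission > 0.0 else "diffuse surface"
        print(f"{s.name:<12} rgb={s.rgb} {role}")

if __name__ == "__main__":
    describe(CORNELL_BOX)
```

Keeping the scene in one neutral description of this kind is one way to guarantee the same conditions for all three engines that the first step requires.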
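The abstract also announces a numerical estimation of colour deviations over selected regions of interest. A sketch of such a measurement, assuming sRGB renderings, the D65 white point and the simple CIE76 colour difference formula (the paper may use a different metric; see Hunt, 2011), could be:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] (array of shape (..., 3)) to CIE L*a*b*,
    assuming the D65 white point of the sRGB specification."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer curve.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # sRGB -> XYZ matrix (D65).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ m.T
    # Normalise by the D65 white point and apply the Lab transfer function.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6/29)**3, np.cbrt(xyz), xyz / (3*(6/29)**2) + 4/29)
    L = 116*f[..., 1] - 16
    a = 500*(f[..., 0] - f[..., 1])
    b = 200*(f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_delta_e(render_a, render_b, roi):
    """Mean CIE76 colour difference over a region of interest.
    `roi` is (top, bottom, left, right) in pixel coordinates."""
    top, bottom, left, right = roi
    lab_a = srgb_to_lab(render_a[top:bottom, left:right])
    lab_b = srgb_to_lab(render_b[top:bottom, left:right])
    return float(np.mean(np.linalg.norm(lab_a - lab_b, axis=-1)))

if __name__ == "__main__":
    # Two hypothetical 256x256 renderings of the same scene.
    a = np.random.rand(256, 256, 3)
    b = np.clip(a + 0.01, 0.0, 1.0)
    print("mean dE over ROI:", mean_delta_e(a, b, (64, 192, 64, 192)))
```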