Efficient Generation of 3D Surfel Maps Using RGB-D Sensors


Int. J. Appl. Math. Comput. Sci., 2016, Vol. 26, No. 1, 99–122
DOI: 10.1515/amcs-2016-0007

EFFICIENT GENERATION OF 3D SURFEL MAPS USING RGB-D SENSORS

Artur Wilkowski (a,∗), Tomasz Kornuta (b), Maciej Stefańczyk (b), Włodzimierz Kasprzak (b)

(a) Industrial Research Institute for Automation and Measurements, Al. Jerozolimskie 202, 02-486 Warsaw, Poland
e-mail: [email protected]

(b) Institute of Control and Computation Engineering, Warsaw University of Technology, ul. Nowowiejska 15/19, 00-665 Warsaw, Poland
e-mail: {T.Kornuta,M.Stefanczyk,W.Kasprzak}@elka.pw.edu.pl

∗ Corresponding author

The article focuses on the problem of building dense 3D occupancy maps using commercial RGB-D sensors and the SLAM approach. In particular, it addresses the problem of 3D map representations, which must be able both to store millions of points and to offer efficient update mechanisms. The proposed solution consists of two key elements, visual odometry and surfel-based mapping, but it contains substantial improvements: storing the surfel maps in octree form and utilizing a frustum culling-based method to accelerate the map update step. The performed experiments verify the usefulness and efficiency of the developed system.

Keywords: RGB-D, V-SLAM, surfel map, frustum culling, octree.

1. Introduction

Industrial robots operate in structured, fully deterministic environments, so they usually do not need an explicit representation of the environment. In contrast, the lack of up-to-date maps of the surroundings of service and field robots might cause serious effects, including damage to both robots and objects, or even injuries to people. Besides, the diversity and complexity of tasks performed by such robots require several types of maps and different abstraction levels of representation. For example, simple geometric (occupancy) maps enable robots to avoid obstacles and collisions, whereas semantic maps allow reasoning, planning and navigation between different places (Martínez Mozos et al., 2007). A semantic map combines a geometric map of the environment with objects recognized in the image (Kasprzak, 2010; Kasprzak et al., 2015).

Multiple sensor readings (images, point clouds) are required to generate a geometric map of the environment. Theoretically, the coordinate system transformation between a pair of readings can be obtained in advance, i.e., from the wheel odometry of mobile robots or from the direct kinematics of manipulators. However, inaccurate robot motion models (wheel drifts, friction, imprecise kinematic parameters, etc.) lead to faulty odometry readings, resulting in map inconsistencies. To overcome these effects, algorithms were developed to estimate the robot's trajectory directly from sensor readings. They are part of a solution named "simultaneous localization and mapping" (SLAM) (Thrun and Leonard, 2008). There are many flavours of SLAM systems, using diverse methods of estimation of the robot and environment state, as well as applying all kinds and combinations of sensors, e.g., monocular vision in a probabilistic, EKF-based framework (Skrzypczyński, 2009). In particular, the specialized term "visual SLAM" (V-SLAM for short) was coined to distinguish SLAM systems relying mainly on cameras, which relates to the fact that the estimation of subsequent poses of the moving camera is called "visual odometry" (VO) (Nistér et al., 2004).

The recent advent of cheap RGB-D sensors accelerated V-SLAM research. An RGB-D sensor provides two concurrent, synchronized image streams—colour images and depth maps, which represent the colour of observed scene points and their relative distance to the camera—that subsequently can be combined into point clouds. Those clouds typically contain hundreds of thousands of points, acquired at a high framerate, while the mapped areas are usually complex. Thus, several efficiency and quality issues in V-SLAM must be addressed:

• precision and computational efficiency of point cloud registration,
• global coherence of maps,
• efficient and precise representation of a huge number of points.

In this paper, the focus is on the third issue. A proposition is developed for an efficient representation of a precise three-dimensional map of the environment. An "efficient representation" means it is compact (there are few spurious points) and can be rapidly accessed and updated.

A 3D occupancy map can be represented in several ways:

1. "voxel" representation—the 3D space is divided into small "cubes" (Marder-Eppstein et al., 2010),
2. "surfel" representation—the space is modelled by an arrangement of surface elements (Krainin et al., 2010),
3. MLS (multi-level surface)—contains a list of obstacles (occupied voxels) associated with each element of a 2D grid (Triebel et al., 2006),
4. octal trees (octrees)—constitute an effective implementation of the voxel map using the octal tree structure (Wurm et al., 2010),
5. combined voxel–surfel representations (e.g., the multi-resolution surfel map).

The core of the proposed system is a map of small surface patches of uniform size (called surfels), integrating readings of an RGB-D sensor. The utilization of a surfel map has several advantages (Weise et al., 2009; Henry et al., 2014). Firstly, it is a compact representation of a sequence of point clouds being merged into surfels. Secondly, the surfel update procedure takes the sensor's current pose into account, allowing both visibility checks and reduction of depth quantization noise—the most prominent type of noise in cheap RGB-D sensors. Finally, every surfel can be processed independently, simplifying and speeding up the map update procedure in comparison with a mesh-like representation.

The existing solutions for surfel map creation are not speed-optimized; a single update of the map might take even several seconds (Henry et al., 2012). In this paper, two substantial efficiency improvements are proposed:

1. utilization of the "frustum culling" method, based on an octree representation, to accelerate surfel updating;
2. replacement of a dense ICP algorithm by a much more efficient sparse-ICP method, as proposed by Dryanovski et al. (2013).

The article is organized as follows. In Section 2 the recent developments in the field of RGB-D-based mapping are presented. The structure of a typical V-SLAM system is described (Section 2.1), followed by a survey of its key steps: feature extraction (Section 2.2), feature matching (Section 2.3) and the estimation of camera motion (Section 2.4) in visual odometry. Then, solutions to dense point cloud alignment (Section 2.5) and global map coherence (Section 2.6) are explained. In Section 3 the most popular 3D mapping systems based on RGB-D V-SLAM are reviewed. Section 4 describes the proposed solution, starting from its important data structures, followed by a short overview of the system and operation details. In Section 5, the ROS-based system implementation is shortly outlined. Section 6 presents experiments performed for the validation of the developed solution. Conclusions are given in Section 7. The appendix contains a theoretical primer on "frustum culling".

2. RGB-D-based V-SLAM

2.1. Structure of the RGB-D-SLAM system.

[Fig. 1. Components of an example RGB-D-SLAM system.]

A state diagram of an example RGB-D-SLAM system is presented in Fig. 1. First, the camera motion (the transition between consecutive poses) is estimated. Typically, it is based on matching features extracted from both the current and previous images, finding parts of both images corresponding to the same scene elements. There are algorithms based only on RGB image features (either for a single camera (Nistér et al., 2004) or a stereo-camera case (Konolige et al., 2011)), but the additional depth information makes the matching process much easier. As a result, a sparse 3D cloud of features is generated, allowing robust estimation of the transformation between camera poses.

Detected features can be global (computed for the whole image or point cloud) or local. An example of a global feature calculated from the RGB image is a colour histogram of all the object points. A good example of a global feature extracted from the point cloud is the VFH (viewpoint feature histogram) (Rusu et al., 2010), which creates a histogram of angles between surface normal vectors for pairs of points constituting the object cloud.

In the case of local features, one should consider keypoint detectors and feature descriptors. Recently proposed detectors locating key points in RGB images include FAST (features from accelerated segment test) (Rosten and Drummond, 2006) and AGAST (adaptive and generic accelerated segment test) (Mair et al., 2010), whereas sample novel descriptors are BRIEF (binary robust independent elementary features) (Calonder et al., 2010) and FREAK (fast retina keypoint) (Alahi et al., 2012). There are also several features having their own detectors and descriptors, such as SIFT (Lowe, 1999) or BRISK (binary robust invariant scalable keypoints) (Leutenegger et al., 2011). The latter possesses a so-called "binary descriptor"—a vector of boolean values instead of real or integer numbers. It is worth noting that in recent years binary descriptors have become more popular, especially in real-time applications, due to their compactness and fast matching.

Usually, a subset of detected features is sufficient to estimate the transformation. Thus, when taking numerous subsets, different estimates of the transformation are found. Typically this step is based on the RANSAC (random sample consensus) algorithm (Fischler and Bolles, 1981), which selects the transformation that is most consistent with the majority of feature pairs.

The next step, transformation refinement, is typically based on the ICP (iterative closest point) algorithm (Zhang, 1994). This is an iteration of least-square-error (LSE) solutions to the problem of dense point cloud correspondence.

The four steps above (feature extraction and matching, transformation estimation and refinement) are repeated for every next RGB-D image. However, the optimal alignments between every two consecutive images can eventually lead to a map that is globally inconsistent. This is especially visible in the case of closed loops in the sensor's trajectory.