Blender for XNA

Levi D. Smith (@GaTechGrad) – www.levidsmith.com
July 2013 – CodeStock, Knoxville, TN

Blender History
• Developed by Ton Roosendaal, starting in 1995
• Funding campaign in 2002 to release the software under the GPL
• Blender Foundation
• Open movie projects: Orange, Peach, Durian, Mango
• http://www.blender.org/blenderorg/blender-foundation/history/
• http://en.wikipedia.org/wiki/Blender_%28software%29

Professional Use
• Can Blender be used for professional quality graphics?
– Elephants Dream (2006): http://www.youtube.com/watch?v=TLkA0RELQ1g
– Big Buck Bunny (2008): http://www.youtube.com/watch?v=YE7VzlLtp-4
– Sintel (2010): http://www.youtube.com/watch?v=eRsGyueVLvQ
– Tears of Steel (2012): http://www.youtube.com/watch?v=R6MlUcmOul8

What Do You Need?
• Visual C# 2010 Express
o http://www.microsoft.com/visualstudio/eng/downloads#d-2010-express
• XNA Game Studio 4.0
o http://www.microsoft.com/en-us/download/details.aspx?id=23714
• Blender 2.67
o http://www.blender.org/
• A 2D texture editor
o GIMP, Paint.NET, Microsoft Paint

Views
• Selecting the view
– Numpad 7: Top view
– Numpad 1: Front view
– Numpad 3: Right side view
– Numpad 5: Toggle orthographic view (shows the gridlines)
– Ctrl + Numpad 7/1/3: Opposite side (bottom, back, left side)
• Note: all commands are for Blender 2.6

Cameras
• Numpad 0: Camera view
– Helpful for positioning the view
• Ctrl + Numpad 0: Make the selected camera active (when using multiple cameras)
• Right-click on the line between panes to split the screen
– Gives multiple concurrent views of the scene
• F12 renders the current scene
• F3 saves the rendered image

Modes
• Change modes with Tab
• Object mode
– Lock location and size for XNA: Location (0, 0, 0), Size (1, 1, 1)
– Vertex coordinates in XNA are relative to the orange anchor dot (the object origin)
• Edit mode
– Select vertices, edges, or faces
– Move (translate), rotate, scale

Creating an Object
• A – Select all / select none
• B – Box select
• E – Extrude
• X – Delete
• These can be applied to vertices, edges, and faces
• Ctrl + mouse button drag – free (lasso) select
• Z – Toggle solid/wireframe shading; in solid view, selection only reaches the visible side of the object

Modifying the Object
• G – Translate (move)
• R – Rotate
• S – Scale
• Follow the operator with X, Y, or Z to lock the operation to that axis
• Follow the operator with a number to translate, rotate, or scale by that value

Armature
• Add a bone to your object
• Select a point on the bone in Edit mode and extrude to add new bones
• Make sure your bones are aligned with your object
• In Object mode, select the armature and the object
• Ctrl + P → With Automatic Weights to assign the armature to the object

Weight Paint
• Select the mesh in Object mode, then switch to Weight Paint mode
• Used for finer control when assigning vertices to bones
• Change the selected bone in the object browser under Vertex Groups

Posing
• Use Pose mode to move the armature
– The armature must be selected to enter Pose mode
• Record button – automatic keyframing (not recommended)
• Dope Sheet / Action Editor
• I → LocRotScale – insert a keyframe
– Must be in Pose mode!
– Make sure all bones are selected (A)
• A keyframe can be duplicated by selecting it and pressing Shift + D

Animation
• Walk cycle
• Ctrl + F12 renders the animation frames
• Images are stored in the output folder (default C:\tmp)
• Select "RGBA" to keep transparency

Render a Face
• Start with the default cube
• Subsurf modifier
– High value for CG images, low value for objects rendered in real time (games)
• Ctrl + R – Loop cut
• Mirror modifier
– Delete half of the object first
• Proportional Editing tool – O key

Render a Body
• Set background images as a guide
• Export FBX for XNA
– Blender uses a Z-up coordinate system; XNA uses Y-up
– Rotate 90 degrees on import
– Both use the right-hand rule
– Select "XNA Strict Options"
• Skinned Model Processor
• Many models will cause slowdown
– Use hardware instancing to draw multiple models efficiently

Other Useful Options
• F – Make a face from the selected vertices (3 or 4 vertices only)
• Remove Doubles (button)
– Change the merge threshold if no vertices are removed
– The vertices to be merged must be selected
• Shift + D – Duplicate an object (or vertices/faces)
• Ctrl + J – Join two objects
• Alt + F – Fill (generate triangle faces for the selected vertices)
• Ctrl + T – Convert quad faces to triangle faces

Texture Mapping
• Split the view
• Change the new pane to UV/Image Editor
• UV pane: Image > Open Image > select your texture
• View pane: switch to Face Select mode, then select faces
• View pane: Mesh > UV Unwrap > select an unwrap method
• UV pane: scale and position the vertices on the texture
• Select the Texture tab
• Set Type to "Image or Movie"

Texture Mapping (continued)
• Texture tab: under Image, select Open and choose the texture used
• Texture tab: under Mapping, select "UV" for Coordinates
• Texture tab: under Mapping, select "UVMap" for Map
• Press F12 to render the scene; the image should be correctly mapped onto the model
• Select specific faces in the View pane to see the corresponding points in the UV pane

XNA Project
• Content project > Add Existing
• Create a Model object
• Set up the camera
• Rotate / translate / scale as needed
• Add code to loop through the meshes and draw them (see the sketch below)
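The following is a minimal C# sketch of the "loop through the meshes and draw them" step above, assuming an XNA 4.0 game whose content project contains an FBX exported from Blender. The class name, field names, asset name "Models/player", and camera values are illustrative, and the -90 degree rotation about X applies the Z-up to Y-up conversion (from the "Render a Body" slide) in the world matrix rather than at import time.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class ModelGame : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    Model playerModel;                 // any FBX added to the content project
    Matrix world, view, projection;

    public ModelGame()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        // "Models/player" assumes the FBX asset lives under Content/Models in the content project.
        playerModel = Content.Load<Model>("Models/player");

        // Blender is Z-up, XNA is Y-up: rotate -90 degrees about X so the model stands upright.
        world = Matrix.CreateRotationX(-MathHelper.PiOver2);
        view = Matrix.CreateLookAt(new Vector3(0f, 2f, 6f), Vector3.Zero, Vector3.Up);
        projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 100f);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        // Loop through the meshes and draw each one with its BasicEffect.
        Matrix[] boneTransforms = new Matrix[playerModel.Bones.Count];
        playerModel.CopyAbsoluteBoneTransformsTo(boneTransforms);

        foreach (ModelMesh mesh in playerModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = boneTransforms[mesh.ParentBone.Index] * world;
                effect.View = view;
                effect.Projection = projection;
            }
            mesh.Draw();
        }

        base.Draw(gameTime);
    }
}
```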
Textures and Animation in XNA
• The default Model object handles non-moving models
• The texture must be imported separately
• A SkinnedModelProcessor must be created to import animation data
• SkinnedModelProcessor is not included with Game Studio
• XNA will only use the first animation defined in an FBX model
• http://xbox.create.msdn.com/en-US/education/catalog/sample/skinned_model

DepthStencilState
• Be sure to set it back to Default when drawing models after drawing sprites (a short sketch appears at the end of these notes)

Special Techniques
• Billboarding
– Keeps flat objects facing the camera
– Trees in a racing game
– Text in a 3D world
• Atlas mapping
– Render many similar objects at once
– Allows changing an object's texture without sacrificing processing time

What Else Can Blender Do?
• Particle systems
• Fluid simulation

Sites to Check Out
• Blender 3D: Noob to Pro – free online wiki book
– http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro
• Blasting Bits – my game using Blender models, with the development process documented
– https://blastingbits.wordpress.com/
• My website
– http://www.levidsmith.com

Books
• Foundation Blender Compositing – Roger Wickes
– http://www.amazon.com/Foundation-Blender-Compositing-Roger-Wickes/dp/1430219769
– Covers Blender 2.4, before the user interface changes
• XNA 3D Primer (eBook) – Michael Neel
– http://www.amazon.com/gp/product/B003A6RCES?ie=UTF8&tag=thefistsofnia-20&link_code=as3&camp=211189&creative=373489&creativeASIN=B003A6RCES
– Covers pre-XNA 4.0
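To close, here is a short hedged sketch of the DepthStencilState note above. In XNA 4.0, SpriteBatch.Begin()/End() leave the graphics device with BlendState.AlphaBlend and DepthStencilState.None, so 3D models drawn after a sprite pass lose depth testing until the states are reset. The spriteBatch and hudTexture fields and the DrawModels() helper are illustrative names, not from the slides; the fragment belongs inside the Draw method of a game class like the one sketched earlier.

```csharp
// After the 2D sprite pass in Game.Draw() (field names are illustrative):
spriteBatch.Begin();
spriteBatch.Draw(hudTexture, Vector2.Zero, Color.White);   // draw HUD / background sprites first
spriteBatch.End();

// SpriteBatch changed these render states; restore them before drawing 3D models.
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;  // only needed if the model textures tile

DrawModels();   // e.g., the ModelMesh loop from the earlier sketch
```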