Premultiplied Alpha


Depth and Transparency
Computer Graphics CMU 15-462/15-662

Today: wrap up the rasterization pipeline! Remember our goal:
- Start with INPUTS (triangles), possibly with other data (e.g., colors or texture coordinates)
- Apply a series of transformations: the STAGES of the pipeline
- Produce OUTPUT (final image)

INPUT (TRIANGLES) -> RASTERIZATION PIPELINE -> OUTPUT (BITMAP IMAGE)

Example input (a cube):
VERTICES: A: ( 1, 1, 1)  E: ( 1, 1,-1)  B: (-1, 1, 1)  F: (-1, 1,-1)  C: ( 1,-1, 1)  G: ( 1,-1,-1)  D: (-1,-1, 1)  H: (-1,-1,-1)
TRIANGLES: EHF, GFH, FGB, CBG, GHC, DCH, ABD, CDB, HED, ADE, EFA, BAF

What we know how to do so far:
- position objects in the world (3D transformations)
- project objects onto the screen (perspective projection)
- sample triangle coverage (rasterization)
- interpolate vertex attributes (barycentric coordinates)
- sample texture maps (filtering, mipmapping)
- put samples into the frame buffer (depth & alpha) <- today's topic

Occlusion: which triangle is visible at each covered sample point? (Compare the opaque triangles with the 50% transparent triangles.)

Sampling depth. Assume we have a triangle given by:
- the projected 2D coordinates (xi, yi) of each vertex
- the "depth" di of each vertex (i.e., distance from the viewer)

Q: How do we compute the depth d at a given sample point (x, y)?
A: Interpolate it using barycentric coordinates, just like any other attribute that varies linearly over the triangle.

The depth buffer (Z-buffer): for each coverage sample point, the depth buffer stores the depth of the closest triangle seen so far.
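The depth-sampling step above (interpolating per-vertex depth with barycentric coordinates) can be sketched as follows. This is a sketch using a signed-area edge-function formulation of barycentric coordinates, not the lecture's own code:

```python
def barycentric(p, a, b, c):
    # Barycentric coordinates of 2D point p with respect to
    # triangle (a, b, c), via signed edge-function areas.
    def edge(u, v, q):
        return (v[0] - u[0]) * (q[1] - u[1]) - (v[1] - u[1]) * (q[0] - u[0])
    area = edge(a, b, c)
    alpha = edge(b, c, p) / area   # weight of vertex a
    beta = edge(c, a, p) / area    # weight of vertex b
    gamma = 1.0 - alpha - beta     # weight of vertex c
    return alpha, beta, gamma

def interpolate_depth(p, verts, depths):
    # Depth at sample p: barycentric blend of per-vertex depths,
    # just like any other linearly varying attribute.
    a, b, g = barycentric(p, *verts)
    return a * depths[0] + b * depths[1] + g * depths[2]
```

For example, the midpoint of an edge whose endpoint depths are 0 and 1 interpolates to depth 0.5.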
Initial state of the depth buffer, before rendering any triangles: all samples store the farthest distance. (In the figures, the grayscale value of a sample point indicates distance: black = small distance, white = large distance, i.e. "infinity". Red marks a sample that passed the depth test. Each step shows the color buffer contents next to the depth buffer contents.)

Depth buffer example: rendering three opaque triangles, one after another.
- Processing the yellow triangle (depth = 0.5), and the buffers after it.
- Processing the blue triangle (depth = 0.75), and the buffers after it.
- Processing the red triangle (depth = 0.25), and the buffers after it.
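The three-triangle walkthrough above can be traced with a tiny per-sample simulation. This is a sketch assuming the slide's depths (yellow 0.5, blue 0.75, red 0.25) and a far-plane value of 1.0:

```python
FAR = 1.0  # initial depth stored at every sample

def pass_depth_test(d_new, d_stored):
    return d_new < d_stored

def depth_test(sample, tri_d, tri_color):
    # Read-modify-write on pass; just a read on fail.
    if pass_depth_test(tri_d, sample["depth"]):
        sample["depth"] = tri_d
        sample["color"] = tri_color

sample = {"depth": FAR, "color": "background"}
depth_test(sample, 0.50, "yellow")  # closer than FAR: passes
depth_test(sample, 0.75, "blue")    # behind yellow: fails
depth_test(sample, 0.25, "red")     # closest so far: passes
print(sample)  # {'depth': 0.25, 'color': 'red'}
```

Regardless of the order in which the triangles are drawn, the sample ends up with the closest triangle's color and depth.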
In pseudocode:

    bool pass_depth_test(d1, d2) {
      return d1 < d2;
    }

    depth_test(tri_d, tri_color, x, y) {
      if (pass_depth_test(tri_d, zbuffer[x][y])) {
        // Triangle is the closest object seen so far at this
        // sample point. Update the depth and color buffers.
        zbuffer[x][y] = tri_d;     // update zbuffer
        color[x][y]   = tri_color; // update color buffer
      }
    }

Does the depth-buffer algorithm handle interpenetrating surfaces? Of course! The occlusion test is based on the depth of the triangles at a given sample point, and the relative depth of two triangles may differ between sample points: at some samples the green triangle is in front of the yellow triangle, at others the yellow triangle is in front of the green one.

Does the depth buffer work with supersampling? Of course! The occlusion test is per sample, not per pixel. In this example the green triangle occludes the yellow triangle: with 4 samples per pixel, the color buffer holds both green and yellow samples along the shared edge, and the final resampled result is anti-aliased because the green and yellow samples are filtered together.

Summary: occlusion using a depth buffer
▪ Store one depth value per coverage sample (not per pixel!)
▪ Constant space per sample - implication: constant space for the depth buffer
▪ Constant time occlusion test per covered sample - read-modify-write of the depth buffer if the sample "passes" the depth test, just a read if it "fails"
▪ Not specific to triangles: only requires that surface depth can be evaluated at a screen sample point

But what about semi-transparent surfaces?

Compositing

Representing opacity as alpha. Alpha describes the opacity of an object:
- Fully opaque surface: α = 1
- 50% transparent surface: α = 0.5
- Fully transparent surface: α = 0
(Red triangle with decreasing opacity: α = 1, 0.75, 0.5, 0.25, 0.)
Alpha is an additional channel of the image (RGBA); it stores the α of the foreground object.

Over operator: composite image B with opacity αB over image A with opacity αA. "Over" is not commutative: B over A ≠ A over B (compare "koala over NYC" composited both ways).

Fringing: poor treatment of color/alpha (foreground color, foreground alpha, background color) can yield dark "fringing" around the foreground. ...why does this happen?

Over operator: non-premultiplied alpha. Composite image B with opacity αB over image A with opacity αA. A first attempt:

    A = [Ar Ag Ab]^T    (appearance of semi-transparent A)
    B = [Br Bg Bb]^T    (appearance of semi-transparent B)

Composited color: the appearance of semi-transparent B, plus what B lets through of A:

    C = αB·B + (1 - αB)·αA·A
    αC = αB + (1 - αB)·αA
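The non-premultiplied "over" operator above can be sketched in code; a sketch assuming color channels in [0, 1] and illustrative example colors:

```python
def over_non_premultiplied(B, aB, A, aA):
    # Composite color B (opacity aB) over color A (opacity aA):
    #   C  = aB*B + (1 - aB)*aA*A   (per channel)
    #   aC = aB + (1 - aB)*aA
    C = tuple(aB * b + (1 - aB) * aA * a for a, b in zip(A, B))
    aC = aB + (1 - aB) * aA
    return C, aC

# 50% transparent red over opaque green:
C, aC = over_non_premultiplied((1.0, 0.0, 0.0), 0.5, (0.0, 1.0, 0.0), 1.0)
print(C, aC)  # (0.5, 0.5, 0.0) 1.0
```

Note the two multiplies per channel (aB·B and (1 - aB)·aA·A); premultiplying removes one of them.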
Over operator: premultiplied alpha. Composite image B with opacity αB over image A with opacity αA.

Non-premultiplied alpha:

    A = [Ar Ag Ab]^T
    B = [Br Bg Bb]^T
    C  = αB·B + (1 - αB)·αA·A    (two multiplies, one add, referring to vector ops on colors)
    αC = αB + (1 - αB)·αA

Premultiplied alpha:

    A' = [αA·Ar  αA·Ag  αA·Ab  αA]^T
    B' = [αB·Br  αB·Bg  αB·Bb  αB]^T
    C' = B' + (1 - αB)·A'        (one multiply, one add)

Notice that premultiplied alpha composites alpha just like it composites RGB, while non-premultiplied alpha composites alpha differently than RGB.

A problem with non-premultiplied alpha:
▪ Suppose we upsample an image with an alpha mask, then composite it onto a background. How should we compute the interpolated color/alpha values?
▪ If we interpolate color and alpha separately, then blend using the non-premultiplied "over" operator, a black "fringe" appears when compositing onto a yellow background: we end up blending, e.g., 50% blue pixels using 50% alpha, rather than, say, 100% blue pixels with 50% alpha.

Eliminating fringe with the premultiplied "over" operator: if we instead use the premultiplied "over" operation, we get the correct alpha: upsampled color + (1 - alpha)·background = composite image with no fringe.
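A sketch of the premultiplied form, showing that alpha is composited with exactly the same expression as the color channels:

```python
def to_premultiplied(rgb, a):
    # (r, g, b) with opacity a  ->  (a*r, a*g, a*b, a)
    return tuple(a * c for c in rgb) + (a,)

def over_premultiplied(Bp, Ap):
    # C' = B' + (1 - aB)*A' : one multiply, one add,
    # applied uniformly to r, g, b AND alpha.
    aB = Bp[3]
    return tuple(b + (1 - aB) * a for a, b in zip(Ap, Bp))

A = to_premultiplied((0.0, 1.0, 0.0), 1.0)  # opaque green
B = to_premultiplied((1.0, 0.0, 0.0), 0.5)  # 50% transparent red
print(over_premultiplied(B, A))  # (0.5, 0.5, 0.0, 1.0)
```

The result matches the non-premultiplied computation, but the operator itself no longer needs to treat alpha as a special case.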
A similar problem arises with non-premultiplied alpha when pre-filtering (downsampling) a texture with an alpha matte. Downsampling the input color and input α separately and compositing the filtered result over a white background yields a 50% opaque brown, not the desired filtered result. Filtering the premultiplied image instead gives the correct answer:

    0.25 * ((0, 1, 0, 1) + (0, 1, 0, 1) + (0, 0, 0, 0) + (0, 0, 0, 0)) = (0, 0.5, 0, 0.5)
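The premultiplied filtering arithmetic above can be checked directly. This sketch assumes a 2x2 block of premultiplied RGBA texels: two opaque green and two fully transparent:

```python
def box_filter(pixels):
    # Average a block of premultiplied RGBA pixels channel-wise.
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(4))

# Premultiplied texels: opaque green = (0, 1, 0, 1),
# fully transparent = (0, 0, 0, 0).
block = [(0, 1, 0, 1), (0, 1, 0, 1), (0, 0, 0, 0), (0, 0, 0, 0)]
print(box_filter(block))  # (0.0, 0.5, 0.0, 0.5)
```

Because alpha is filtered by the same average as the color channels, the result is a half-covered green sample, with no brown fringe when composited.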