CEC — eContact! 15.4 — Digitized Direct Animation: Creating Mat… Music Using 8 mm Film by Jonathan Weinel and Stuart Cunningham
Recommended publications
Understanding Jitter and Wander Measurements and Standards Second Edition Contents
Contents: Preface; 1. Introduction to Jitter and Wander; 2. Jitter Standards and Applications; 3. Jitter Testing in the Optical Transport Network (OTN); 4. Jitter Tolerance Measurements; 5. Jitter Transfer Measurement; 6. Accurate Measurement of Jitter Generation; 7. How Tester Intrinsics and Transients Affect Your Jitter Measurement; 8. What O.172 Doesn't Tell You; 9. Measuring 100 mUIp-p Jitter Generation with an O.172 Tester?; 10. Faster Jitter Testing with Simultaneous Filters; 11. Verifying Jitter Generator Performance; 12. An Overview of Wander Measurements; Acronyms.
Welcome to the second edition of the Agilent Technologies Understanding Jitter and Wander Measurements and Standards booklet, the distillation of over 20 years' experience and know-how in the field of telecommunications jitter testing. The Telecommunications Networks Test Division of Agilent Technologies (formerly Hewlett-Packard) in Scotland introduced the first jitter measurement instrument in 1982 for PDH rates up to E3 and DS3, followed by one of the first 140 Mb/s jitter testers in 1984. SONET/SDH jitter test capability followed in the 1990s, and recently Agilent introduced one of the first 10 Gb/s Optical Channel jitter test sets for measurements on the new ITU-T G.709 frame structure. Over the years, Agilent has been a significant participant in the development of jitter industry standards, with many contributions to ITU-T O.172/O.173, the standards for jitter test equipment, and Telcordia GR-253/ITU-T G.783, the standards for operational SONET/SDH network equipment.
Motion Denoising with Application to Time-Lapse Photography
IEEE Computer Vision and Pattern Recognition (CVPR), June 2011. Michael Rubinstein (MIT CSAIL), Ce Liu (Microsoft Research New England), Peter Sand, Frédo Durand (MIT CSAIL), William T. Freeman (MIT CSAIL). {mrub,sand,fredo,billf}@mit.edu, [email protected]
Abstract: Motions can occur over both short and long time scales. We introduce motion denoising, which treats short-term changes as noise, long-term changes as signal, and re-renders a video to reveal the underlying long-term events. We demonstrate motion denoising for time-lapse videos. One of the characteristics of traditional time-lapse imagery is stylized jerkiness, where short-term changes in the scene appear as small and annoying jitters in the video, often obfuscating the underlying temporal events of interest. We apply motion denoising for resynthesizing time-lapse videos showing the long-term evolution of a scene with jerky short-term changes removed. We show that existing filtering approaches are often incapable of achieving this task, and present a novel computational approach to denoise motion without explicit motion analysis. We demonstrate promising experimental results on a set of challenging time-lapse sequences.
[Figure 1: A time-lapse video of plants growing (sprouts). XT slices of the video volumes are shown for the input sequence and for the result of our motion denoising algorithm. The motion-denoised sequence is generated by spatiotemporal rearrangement of the pixels in the input sequence; spatial and temporal displacement are shown following the color coding in Figure 5.]
1. Introduction. Short-term, random motions can distract from the
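The abstract notes that existing filtering approaches are often incapable of this task. For reference only, and not as the paper's method (which rearranges pixels in space and time rather than averaging them), the following is a minimal sketch of the kind of per-pixel temporal median filter such baselines use, assuming the video is held as a NumPy array of frames.

```python
import numpy as np

# Naive per-pixel temporal median filter: a typical baseline for suppressing
# short-term flicker. Unlike motion denoising, it cannot rearrange content,
# so it tends to blur or ghost structures that actually move.
def temporal_median(video, radius=5):
    """video: array of shape (T, H, W) or (T, H, W, C); returns the same shape."""
    T = video.shape[0]
    out = np.empty_like(video)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        out[t] = np.median(video[lo:hi], axis=0)
    return out

# Example on synthetic data: 60 noisy frames of 32x32 pixels
video = np.random.rand(60, 32, 32).astype(np.float32)
smoothed = temporal_median(video, radius=5)
print(smoothed.shape)  # (60, 32, 32)
```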
DISTRIBUTED RAY TRACING – Some Implementation Notes
Multiple distributed sampling; jitter, to break up patterns.
DRT, THEORY VS. PRACTICE: brute force - generate multiple rays at every sampling opportunity; alternative - for each subsample, randomize at each opportunity.
DRT COMPONENTS: anti-aliasing and motion blur - supersample in time and space (jittered samples in space give anti-aliasing, jittered samples in time give motion blur); depth of field - sample the lens: blur; shadows - sample the light source: soft shadows; reflection - sample the reflection direction: rough surfaces; transparency - sample the transmission direction: translucent surfaces.
REPLACE CAMERA MODEL: shift from a pinhole camera model to a lens camera model; the picture plane is at -w, not +w; the camera position becomes the lens center; the picture plane is behind the 'pinhole'; negate u, v, w and trace the ray from the pixel to the camera.
ANTI-ALIASING, ORGANIZING SUBPIXEL SAMPLES - options: 1. do each subsample in raster order; 2. do each pixel in raster order, then each subsample in raster order; 3. do each pixel in raster order, then all subsamples in temporal order; 4. keep a framebuffer and do all subsamples in temporal order.
SPATIAL JITTERING: for each pixel i,j (200x200 image), for each subpixel sample s,t (4x4 grid), jitter s,t.
MOTION BLUR, TEMPORAL JITTERING: for each subsample, get its delta time from a table and jitter the delta by +/- 1/2 of a time division; move objects to that instant in time.
DEPTH OF FIELD: generate a ray from the subsample through the lens center to the focal plane; generate a random sample on the lens disk (random in 2D u,v); generate a ray from this point to the focal-plane point.
VISIBILITY, as usual: intersect the ray with the environment; find the first intersection at point p on object o with normal n.
SHADOWS: generate a random vector to the surface of the light (random on a sphere).
REFLECTIONS: compute the reflection vector; generate a random sample in a sphere at the end of R.
TRANSPARENCY: compute the transmission vector; generate a random sample in a sphere at the end of T.
SIDE NOTE: randomize n instead of randomizing R and T.
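Read as pseudocode, the notes reduce each effect to drawing one jittered random sample per ray. The following is a minimal sketch of that sampling logic in Python: stratified subpixel jitter, temporal jitter for motion blur, and lens-disk sampling for depth of field. The functions trace() and make_primary_ray() are hypothetical stand-ins for the rest of the renderer and are not part of the original notes.

```python
import math
import random

SUBS = 4            # 4x4 subpixel samples per pixel, as in the notes
LENS_RADIUS = 0.05  # assumed lens radius
FOCAL_DIST = 5.0    # assumed distance to the focal plane

def sample_disk(radius):
    """Uniform random point on the lens disk (random in 2D u,v)."""
    r = radius * math.sqrt(random.random())
    phi = 2.0 * math.pi * random.random()
    return r * math.cos(phi), r * math.sin(phi)

def pixel_samples(i, j, shutter_time=1.0):
    """Yield (x, y, lens_u, lens_v, time) for every jittered subsample of pixel (i, j)."""
    for s in range(SUBS):
        for t in range(SUBS):
            # spatial jitter: random position inside subpixel cell (s, t)
            x = i + (s + random.random()) / SUBS
            y = j + (t + random.random()) / SUBS
            # temporal jitter: each subsample owns a time slot, jittered within it
            k = s * SUBS + t
            time = (k + random.random()) / (SUBS * SUBS) * shutter_time
            # depth of field: random point on the lens disk
            lens_u, lens_v = sample_disk(LENS_RADIUS)
            yield x, y, lens_u, lens_v, time

def render_pixel(i, j, trace, make_primary_ray):
    """Average the radiance returned by trace() over all distributed samples."""
    total, count = 0.0, 0
    for x, y, lu, lv, time in pixel_samples(i, j):
        ray = make_primary_ray(x, y, lu, lv, FOCAL_DIST)
        total += trace(ray, time)
        count += 1
    return total / count
```

Soft shadows, rough reflection, and translucency follow the same pattern: one extra random sample (on the light's surface, or in a small sphere at the end of R or T) per secondary ray.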
Research on Feature Point Registration Method for Wireless Multi-Exposure Images in Mobile Photography, by Hui Xu
Xu, EURASIP Journal on Wireless Communications and Networking (2020) 2020:98, https://doi.org/10.1186/s13638-020-01695-4. RESEARCH, Open Access. Hui Xu (School of Computer Software, Tianjin University, Tianjin 300072, China; College of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China). Correspondence: [email protected]
Abstract: In the mobile shooting environment, multi-exposure easily occurs due to the impact of jitter and sudden changes of ambient illumination, so feature point registration of the multi-exposure images captured under mobile photography is necessary to improve image quality. A feature point registration technique based on white balance offset compensation is proposed. Global motion estimation of the image is carried out, spatial neighborhood information is integrated into the amplitude detection of the multi-exposure image under mobile photography, and the amplitude characteristics of the multi-exposure image are extracted. The texture information of the multi-exposure image is compared to that of a global moving RGB 3D bit-plane random field, and the white balance deviation of the multi-exposure image is compensated. At different scales, a suitable white balance offset compensation function is used to describe the feature points of the multi-exposure image, parallax analysis and corner detection of the target pixels are carried out, and image stabilization is achieved by combining the feature registration method. Simulation results show that the proposed method has high accuracy and good registration performance for multi-exposure image feature points under mobile photography, and that image quality is improved.
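The abstract describes the white-balance-compensated pipeline only at a high level. As a generic illustration of the registration step itself, and not the paper's method, the sketch below aligns two differently exposed frames with ORB features and a RANSAC homography; it assumes OpenCV (cv2) is available, and the file names are placeholders.

```python
import cv2
import numpy as np

# Generic feature-point registration between two differently exposed frames
# (ORB + RANSAC homography). Illustrative baseline only; file names are examples.
ref = cv2.imread("exposure_ref.png", cv2.IMREAD_GRAYSCALE)
mov = cv2.imread("exposure_moving.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(mov, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robust homography: RANSAC discards matches corrupted by the exposure difference
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the moving frame into the reference frame's coordinates
h, w = ref.shape
aligned = cv2.warpPerspective(mov, H, (w, h))
cv2.imwrite("aligned.png", aligned)
```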
Persistence of Vision: the Value of Invention in Independent Art Animation
Virginia Commonwealth University, VCU Scholars Compass, Kinetic Imaging Publications and Presentations, Dept. of Kinetic Imaging, 2006. Persistence of Vision: The Value of Invention in Independent Art Animation. Pamela Turner, Virginia Commonwealth University, [email protected]. Follow this and additional works at: http://scholarscompass.vcu.edu/kine_pubs. Part of the Film and Media Studies Commons, Fine Arts Commons, and the Interdisciplinary Arts and Media Commons. Copyright © The Author. Originally presented at Connectivity, the 10th Biennial Symposium on Arts and Technology at Connecticut College, March 31, 2006. Downloaded from http://scholarscompass.vcu.edu/kine_pubs/3. This presentation is brought to you for free and open access by the Dept. of Kinetic Imaging at VCU Scholars Compass. It has been accepted for inclusion in Kinetic Imaging Publications and Presentations by an authorized administrator of VCU Scholars Compass. For more information, please contact [email protected].
Pamela Turner, 2220 Newman Road, Richmond VA 23231. Virginia Commonwealth University – School of the Arts. 804-222-1699 (home), 804-828-3757 (office), 804-828-1550 (fax). [email protected], www.people.vcu.edu/~ptturner/website
In the practice of art, being postmodern has many advantages, the primary one being that the whole gamut of previous art and experience is available as influence and inspiration in a non-linear whole. Music and image can be formed through determined methods introduced and delightfully disseminated by John Cage. Medieval chants can weave their way through hip-hopped top hits or into sound compositions reverberating in an art gallery.
Live Performance Where Congruent Musical, Visual, and Proprioceptive Stimuli Fuse to Form a Combined Aesthetic Narrative
Soma: live performance where congruent musical, visual, and proprioceptive stimuli fuse to form a combined aesthetic narrative. Ilias Bergstrom, UCL. A thesis submitted for the degree of Doctor of Philosophy, 2010. I, Ilias Bergstrom, confirm that the work presented in this thesis is my own. Where information has been derived from other sources, I confirm that this has been indicated in the thesis.
Abstract: Artists and scientists have long had an interest in the relationship between music and visual art. Today, many occupy themselves with correlated animation and music, called 'visual music'. Established tools and paradigms for performing live visual music, however, have several limitations: virtually no user interface exists with an expressivity comparable to live musical performance; mappings between music and visuals are typically reduced to the music's beat and amplitude being statically associated to the visuals, disallowing close audiovisual congruence, tension and release, and suspended expectation in narratives; collaborative performance, common in other live art, is mostly absent due to technical limitations; and preparing or improvising performances is complicated, often requiring software development. This thesis addresses these limitations through a transdisciplinary integration of findings from several research areas, detailing the resulting ideas and their implementation in a novel system: musical instruments are used as the primary control data source, accurately encoding all musical gestures of each performer. The advanced embodied knowledge musicians have of their instruments allows increased expressivity, the full control-data bandwidth allows high mapping complexity, while musicians' collaborative performance familiarity may translate to visual music performance. The conduct of Mutable Mapping, gradually creating, destroying and altering mappings, may allow for a narrative in mapping during performance.
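For readers unfamiliar with the term, a "mapping" here is simply a rule relating incoming musical control data to visual parameters. The toy sketch below shows the kind of static note-and-amplitude mapping the thesis describes as limiting; it is not the Soma system's code, and every name in it is hypothetical.

```python
# Toy illustration of a static mapping from musical control data to visual
# parameters. Not the Soma implementation; all names and ranges are assumptions.
def map_note_to_visual(note, velocity):
    """Map a MIDI-style note (0-127) and velocity (0-127) to visual parameters."""
    hue = (note % 12) / 12.0                 # pitch class drives colour
    size = 0.1 + 0.9 * (velocity / 127.0)    # louder notes draw larger shapes
    height = note / 127.0                    # register drives vertical position
    return {"hue": hue, "size": size, "y": height}

# Example: middle C (MIDI note 60) played forte
print(map_note_to_visual(60, 100))
```

A mutable mapping, in the thesis' sense, would replace this fixed rule with one that can be created, altered, and destroyed over the course of the performance.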
Idmaa BB Catalog Optimized.Pdf
Dena Elisabeth Eber / Bowling Green University; Randall E. Hoyt / University of Connecticut; Rejane Spitz / Rio de Janeiro Catholic University, Brazil; Kenneth A. Huff / Savannah College of Art and Design.
iDEAs 07: Beyond Boundaries, which coincides with the International Digital Media and Arts Association conference, addresses the notion that the nature of digital arts is to embrace the new technologies of the time while perfecting those of the past. As such, it tends to sit outside the margin of discipline, thus defying boundaries within its own set of standards. This art form is complex and multifaceted; however, it embraces two distinct aspects that are currently, and will continue to be, characterized by flux and defined by the rate of change of technology. The first exists on the fringe of the art world, embracing, exploring, incorporating, and translating the newest digital media for the given time. The second is the molding of yesterday's digital "edge" into traditional art forms or into mature and unique works that often do not fit within traditional aesthetics. Regardless of whether the art includes older or newer technologies, digital media art sits outside of tradition. It is a lightning rod for new media and acts as a disseminator of language and implications connected with it. Although traditional boundaries do not apply, grounding in artistic practice does, albeit in flux. The iDEAs 07 exhibition not only displays art that goes beyond boundaries, but shows work that does not even consider them, as the art reflects a discipline that seeks to find its own foundation.
A Low-Cost Flash Photographic System for Visualization of Droplets
Journal of Imaging Science and Technology 62(6): 060502-1–060502-9, 2018. © Society for Imaging Science and Technology 2018. Huicong Jiang, Brett Andrew Merritt, and Hua Tan. School of Engineering and Computer Science, Washington State University Vancouver, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA. E-mail: [email protected]
Abstract: Flash photography has been widely used to study the droplet dynamics in drop-on-demand (DoD) inkjet due to its distinct advantages in cost and image quality. However, the whole setup, typically comprising the mounting platform, flash light source, inkjet system, CCD camera, magnification lens and pulse generator, still costs tens of thousands of dollars. To reduce the cost of visualization for DoD inkjet droplets, we proposed to replace the expensive professional pulse generator with a low-cost microcontroller board in the flash photographic system. The temporal accuracy of the microcontroller was measured by an oscilloscope.
… the extreme, it tries to reduce the flash time of the light source, which is much easier than improving the shutter speed of a camera and can be compatible with most of the cameras. Although only one image can be taken from an event through this method, by leveraging the high reproducibility of the DoD inkjet and delaying the flash, a sequence of images freezing different moments can be acquired from a series of identical events and then yields a video recording the whole process. It is noteworthy that flash photography should be
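The passage describes building a pseudo-video by repeating the highly reproducible DoD ejection and firing the flash a little later on each repetition. The sketch below is a rough illustration of that delay-sweep logic only, not the authors' firmware; trigger_droplet, fire_flash, and capture_frame are hypothetical callables that would wrap the actual microcontroller and camera I/O.

```python
# Delay-sweep acquisition for a repeatable drop-on-demand event: each
# repetition freezes a slightly later instant, and the frames assemble into
# a pseudo-video. Hardware callables are hypothetical stand-ins.

def delay_schedule(start_us=0.0, stop_us=500.0, step_us=5.0):
    """Flash delays (microseconds) relative to the droplet trigger."""
    delays, d = [], start_us
    while d <= stop_us:
        delays.append(d)
        d += step_us
    return delays

def acquire_sequence(trigger_droplet, fire_flash, capture_frame,
                     start_us=0.0, stop_us=500.0, step_us=5.0):
    """One frame per repetition; each repetition captures a later instant."""
    frames = []
    for delay in delay_schedule(start_us, stop_us, step_us):
        trigger_droplet()          # eject one droplet (repeatable event)
        fire_flash(delay)          # open-shutter exposure lit only by the flash
        frames.append((delay, capture_frame()))
    return frames

# Example with stand-in callables (no hardware attached):
frames = acquire_sequence(lambda: None, lambda d: None, lambda: "frame",
                          stop_us=20.0, step_us=5.0)
print(len(frames))  # 5 frames at delays 0, 5, 10, 15, 20 microseconds
```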
Jittered Exposures for Image Super-Resolution
Nianyi Li (University of Delaware, Newark, DE, USA, [email protected]), Scott McCloskey (Honeywell ACST, Golden Valley, MN, USA, [email protected]), Jingyi Yu (ShanghaiTech University, Shanghai, China, and University of Delaware, [email protected]).
Abstract: Camera design involves tradeoffs between spatial and temporal resolution. For instance, traditional cameras provide either high spatial resolution (e.g., DSLRs) or high frame rate, but not both. Our approach exploits the optical stabilization hardware already present in commercial cameras and increasingly available in smartphones. Whereas single image super-resolution (SR) methods can produce convincing-looking images and have recently been shown to improve the performance of certain vision tasks, they are still limited in their ability to fully recover information lost due to under-sampling. In this paper, we present a new imaging technique that efficiently trades temporal resolution for spatial resolution in excess of the sensor's pixel count without attenuating light or adding additional …
The process can be formulated using an observation model that takes low-resolution (LR) images L as the blurred result of a high-resolution image H: L = WH + n (1), where W = DBM, in which M is the warp matrix (translation, rotation, etc.), B is the blur matrix, D is the decimation matrix and n is the noise. Most state-of-the-art image SR methods consist of three stages, i.e., registration, interpolation and restoration (an inverse procedure). These steps can be implemented separately or simultaneously according to the reconstruction methods adopted. Registration refers to the procedure of estimating the motion information or the warp function M between H and L.
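As a small worked illustration of Eq. (1), the sketch below applies generic warp (M), blur (B), and decimation (D) operators to a synthetic high-resolution image and adds noise n to produce low-resolution observations. The operators are stand-ins chosen for illustration, not the authors' calibrated model.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of the observation model L = W H + n with W = D B M.
rng = np.random.default_rng(0)
H = rng.random((128, 128))            # synthetic high-resolution image

def warp(img, dy=0.3, dx=0.7):
    """M: sub-pixel translation (a simple stand-in for the warp matrix)."""
    return ndimage.shift(img, (dy, dx), order=3, mode="nearest")

def blur(img, sigma=1.0):
    """B: optical/sensor blur modeled as a Gaussian point-spread function."""
    return ndimage.gaussian_filter(img, sigma)

def decimate(img, factor=4):
    """D: down-sampling by keeping every `factor`-th pixel."""
    return img[::factor, ::factor]

def observe(img, dy, dx, sigma=1.0, factor=4, noise_std=0.01):
    """Apply W = D B M and add noise n to produce one low-resolution frame."""
    L = decimate(blur(warp(img, dy, dx), sigma), factor)
    return L + noise_std * rng.standard_normal(L.shape)

# Several jittered exposures: each LR frame sees H under a different sub-pixel shift
frames = [observe(H, dy, dx) for dy, dx in [(0.0, 0.0), (0.25, 0.5), (0.5, 0.25)]]
print([f.shape for f in frames])      # three (32, 32) low-resolution observations
```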
CHAPTER 4. Film Sound Presentation: Space and Institutional Context
4.1 Film Sound Presentation. In the previous chapter I examined the preservation of early sound systems, namely the Biophon, Chronophone, Phono-Cinéma-Théâtre, and Vitaphone systems. In the analysis of preservation practices, I highlighted the importance of relevant elements of film sound: the material carriers, the technological devices and human actors, the dispositif situation, the film text, the screening performance, and the history of exhibition. These elements of preservation practices contribute to understanding and defining what film sound is and how it relates to the film image. The analysis of the preservation of these early sound systems further demonstrated how preservation and presentation are two highly interconnected activities. As shown by the example of the Tonbilder, the choice of a certain carrier in the preservation process is often incited by the context of presentation. As I will now argue in this chapter, decisions about the presentation of film sound similarly depend on decisions made during preservation. Preservation and presentation practices, intended as the core activities of film heritage institutions, point to a number of factors important in defining film sound and its core dimensions. The preservation practices examined in the cases of early sound systems highlighted in particular the three dimensions of film that I defined as carrier, dispositif, and text. The presentation practices that will be analyzed in this chapter will bring attention to other dimensions of film sound, namely the spatial and institutional dimensions, which are less evident in preservation practices. Film presentation concerns the screening situation, where the dispositif and the audience are central.
Iconoclasm in Visual Music
EMMANOUIL KANELLOS.
Abstract: From the earliest experimental film works to today's contemporary and diverse use of moving image platforms, the notion of visual music is considered synonymous with abstract animation because, in part, abstract imagery is employed across the vast majority of musical visualisation. The purpose of this paper is to explore how the absence of figuration and representation in visual music can now re-engage with the problematic debate of representation versus abstraction, a debate that has taken place in other art forms and movements in the past.
Introduction: Visual music is a type of audio-visual art which employs moving abstract images synchronised with music/sound. This approach to moving image works started prior to Modernism; however, it was during this period that it developed significantly, and it continues to expand the field in a variety of expressive ways in our current digital era. Although the subject area of this paper is visual music, it will start with an outline of several known abstraction-representation dialectics in art and, more specifically, the rejection of representation caused by different ideologies, political dogmas, and spiritual beliefs. In doing so, it will demonstrate the lack of representation in the creation of artworks throughout history and focus on the past and current situation in visual music. The abstraction-representation debate in the visual arts is deep-rooted and has triggered some of the greatest conflicts in the history of art. Such examples include: the two iconoclasms in the Byzantine Empire in the eighth and ninth centuries; the label 'Degenerate Art' used to describe Modern art during the Nazi regime; and the violent reactions in Syria, Jordan, Lebanon and Pakistan in 2006 over the cartoons depicting Muhammad.
The Expanded Image: on the Musicalization of the Visual Arts in the Twentieth Century Sandra Naumann
Sandra Naumann. Published as: Sandra Naumann, The Expanded Image: On the Musicalization of the Visual Arts in the Twentieth Century, in: Dieter Daniels, Sandra Naumann (eds.), Audiovisuology, A Reader, Vol. 1: Compendium, Vol. 2: Essays, Verlag Walther König, Köln 2015, pp. 504-533.
Exposition: Until well into the nineteenth century, the experience of audiovisual arts was bound to a unity of space and time (and action, too, in a certain sense). The technical media of photography, gramophone recording, silent film, talking film, and video made it possible to reproduce sounds and images, but they also separated them, only to slowly reunite them again. These media evolved from devices used purely for storage and reproduction into performative instruments for creating new forms of audiovisual experience in real time, a process reinforced through numerous efforts to synthesize or expand the arts by incorporating or transferring concepts and techniques from different art forms. Thus, musical theories and techniques were adopted to explain developments in the visual arts, and vice versa. Against this general background, this essay aims to identify strategies which the visual arts borrowed from music while changing and expanding compulsively during the twentieth century. The focus will not be on image/sound combinations, although sound does often play a part in the works that will be discussed in the following. Instead, this text will deal with the musicalization of the image in a broader sense. These endeavors first culminated in the 1910s and 1920s, then again in the 1960s and 1970s, and for the third time from the 1990s until today.