Format Independence Provision of Audio and Video Data in Multimedia Database Management Systems


FORMAT INDEPENDENCE PROVISION OF AUDIO AND VIDEO DATA IN MULTIMEDIA DATABASE MANAGEMENT SYSTEMS

BEREITSTELLUNG DER FORMATUNABHÄNGIGKEIT VON AUDIO- UND VIDEODATEN IN MULTIMEDIALEN DATENBANKVERWALTUNGSSYSTEMEN

Submitted to the Technische Fakultät of the Universität Erlangen-Nürnberg for the degree of DOKTOR-INGENIEUR by Maciej Suchomski, Erlangen, 2008. Accepted as a dissertation by the Technische Fakultät of the Universität Erlangen-Nürnberg.

Date of submission: 13.05.2008
Date of doctoral examination: 31.10.2008
Dean: Prof. Dr.-Ing. habil. Johannes Huber
Reviewers: Prof. Dr.-Ing. Klaus Meyer-Wegener (Vice President of the FAU), Prof. Dr. Andreas Henrich

To My Beloved Parents / Dla Moich Kochanych Rodziców

ABSTRACT

Since the late 1990s there has been a noticeable revolution in the consumption of multimedia data, analogous to the electronic data processing revolution of the 1980s and 1990s. This multimedia revolution covers aspects such as multimedia production, storage, and delivery, and it has also triggered completely new solutions on the consumer market, such as multifunction handheld devices, digital and Internet TV, and home cinemas. However, it brings new problems as well. The variety of multimedia formats is an advantage on the one hand, but on the other hand it is one of these problems, since every consumer must understand the data in a specific format in order to consume it. Database management systems, in turn, have traditionally been responsible for providing data to consumers and applications regardless of format and storage characteristics. In the case of multimedia data, however, MMDBMSs have failed to provide data independence because of the complexity of the “translation process”, especially for continuous data such as audio and video. There are many reasons for this situation: the temporal characteristics of continuous data (processing must satisfy not only functional correctness but also timing correctness), the complexity of the conversion algorithms (especially compression), and processing-resource demands that vary over time (due to their dependence on the content) and therefore require sophisticated resource-allocation algorithms.

This work proposes a conceptual model of the real-time audio-video conversion (RETAVIC) architecture in order to diminish the existing problems in the multimedia format translation process and thus to enable the provision of format independence for audio and video data. Data processing within the RETAVIC architecture is divided into four phases: capturing, analysis, storage, and delivery. The key assumption is metadata-based real-time transcoding in the delivery phase, where quality-adaptive decoding and encoding take place, employing the Hard Real-Time Adaptive model. In addition, the Layered Lossless Video format (LLV1) has been implemented within this project, and an analysis of format independence approaches and their support in current multimedia management systems has been conducted. The prototypical real-time implementation of the critical parts of the transcoding chain for video enables the functional, quantitative, and qualitative evaluation.
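The four processing phases named in the abstract (capturing, analysis, storage, delivery) and the metadata-based, quality-adaptive transcoding step can be illustrated with a minimal sketch. The sketch below is an illustrative assumption only: the names (MediaObject, capture, analyze, store, deliver) and the toy complexity measure are hypothetical and do not correspond to the actual RETAVIC interfaces or algorithms.

    # Minimal sketch of a four-phase, metadata-driven transcoding chain,
    # loosely following the phases named in the abstract. All names are
    # illustrative assumptions, not the actual RETAVIC implementation.
    from dataclasses import dataclass, field

    @dataclass
    class MediaObject:
        frames: list                                   # captured raw A/V units (placeholder)
        metadata: dict = field(default_factory=dict)   # per-content analysis results

    def capture(source_frames):
        """Phase 1: ingest raw audio/video data."""
        return MediaObject(frames=list(source_frames))

    def analyze(obj):
        """Phase 2: derive content metadata (here a toy per-frame
        complexity measure) that later guides real-time adaptation."""
        obj.metadata["complexity"] = [len(str(f)) for f in obj.frames]
        return obj

    def store(obj, archive):
        """Phase 3: keep the material in an internal (lossless) format
        together with its metadata."""
        archive.append(obj)

    def deliver(obj, target_format, budget_per_frame):
        """Phase 4: metadata-based transcoding at delivery time; quality
        is reduced per frame whenever the assumed time budget would be
        exceeded (the quality-adaptive step)."""
        output = []
        for frame, cost in zip(obj.frames, obj.metadata["complexity"]):
            quality = "full" if cost <= budget_per_frame else "reduced"
            output.append((target_format, quality, frame))
        return output

    # Usage: capture -> analyze -> store once, then format-specific delivery on demand.
    archive = []
    clip = analyze(capture(["frame-a", "frame-bbbb", "frame-c"]))
    store(clip, archive)
    print(deliver(archive[0], target_format="mpeg4", budget_per_frame=8))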
KURZFASSUNG

Seit den späten 1990er Jahren gibt es eine wahrnehmbare Revolution im Konsumverhalten von Multimediadaten, welche analog der Revolution der elektronischen Datenverarbeitung in den 1980er und 1990er Jahren ist. Diese Multimediarevolution umfasst verschiedene Aspekte wie Multimediaproduktion, -speicherung und -verteilung, sie bedingt außerdem vollständig neue Lösungen auf dem Absatzmarkt für Konsumgüter wie mobile Endgeräte, digitales und Internet-Fernsehen oder Heimkinosysteme. Sie ist jedoch ebenfalls Auslöser bis dato unbekannter Probleme. Die Multimediaformatvielzahl ist einerseits ein Vorteil, auf der anderen Seite aber eines dieser Probleme, da jeder Verbraucher die Daten in einem spezifischen Format „verstehen“ muss, um sie konsumieren zu können. Andererseits sind die Datenbankverwaltungssysteme aber auch dafür verantwortlich, dass die Daten unabhängig von Format- und Speichereigenschaften für die Verbraucher und für die Anwendungen zur Verfügung stehen. Im Falle der Multimediadaten jedoch haben die MMDBVSe die Datenunabhängigkeit wegen der Komplexität „im Übersetzungsprozess“ nicht zur Verfügung stellen können, insbesondere wenn es sich um kontinuierliche Datenströme wie Audiodaten und Videodaten handelt. Es gibt viele Gründe für diese Situation: die Zeiteigenschaften der kontinuierlichen Daten (die Verarbeitung entsprechend der Funktionskorrektheit, aber auch entsprechend der Zeitkorrektheit), die Komplexität der Umwandlungsalgorithmen (insbesondere jene der Kompression) und die Anforderungen an die Verarbeitungsressourcen, die in der Zeit schwanken (wegen der Inhaltsabhängigkeit) und daher anspruchsvolle Ressourcenzuweisungsalgorithmen erforderlich machen.

Die vorliegende Arbeit konzentriert sich auf einen Vorschlag des Begriffsmodells der Echtzeitumwandlungsarchitektur der Audio- und Videodaten (RETAVIC), um vorhandene Probleme im Multimediaformat-Übersetzungsprozess zu mindern und somit die Bereitstellung der Formatunabhängigkeit von Audio- und Videodaten zu erlauben. Die Datenverarbeitung innerhalb der RETAVIC-Architektur ist in vier Phasen unterteilt worden: Erfassung, Analyse, Speicherung und Anlieferung. Die Haupthypothese ist die metadaten-bezogene Echtzeittranskodierung in der Anlieferungsphase, in der die qualitätsanpassungsfähige Decodierung und Enkodierung mit dem Einsatz des „Hard-Real-Time Adaptive (Hart-Echtzeit-Adaptiv-)Modells“ auftritt. Außerdem ist das „Layered Lossless Video Format“ (Geschichtetes Verlustfreies Videoformat) innerhalb dieses Projektes implementiert worden; eine Analyse der Formatunabhängigkeitsansätze sowie der -unterstützung in den gegenwärtigen Multimedia-Managementsystemen wurde durchgeführt. Die prototypische Echtzeit-Implementierung der kritischen Teile der Transkodierungskette für Video ermöglicht die funktionale, quantitative und qualitative Auswertung.

ACKNOWLEDGEMENTS

First and foremost, I would like to thank my supervisor, Prof. Klaus Meyer-Wegener. It was a great pleasure to work under his supervision. He was always able to find time for me and to conduct stimulating discussions. His advice and suggestions at every crossroads allowed me to choose the correct path and bring my research forward, keeping me on track right to the end of the road.
His great patience, tolerance, and understanding helped in conducting the research and testing new ideas. His great wisdom and active support are beyond doubt. Prof. Meyer-Wegener spent not only days but also nights co-authoring the papers published during the research for this work. Without him, beginning and completing this thesis would never have been possible.

Next, I would like to express my gratitude to Prof. Hartmut Wedekind for his great advice, for the spiritual experiences shared during our stay in the Sudety Mountains, and for accepting the position of chairman during the viva voce examination. I also want to thank Prof. Andreas Henrich for many fruitful discussions during the workshops of the MMIS Group. Moreover, I am very happy that Prof. Henrich committed himself to being a reviewer of my dissertation, and I will never forget these efforts. Subsequently, I give my great thanks to Prof. André Kaup for his participation in the PhD examination procedure.

The expression of enjoyment derived from the cooperative and personal work, from meetings “in the kitchen”, and from the funny everyday situations goes to all my colleagues at the chair. In particular, I would like to thank a few of them. First, I want to give my thanks to our ladies: to Mrs. Knechtel for her organizational help and warm welcome at the university, and to Mrs. Braun and Mrs. Stoyan for keeping the hardware infrastructure up and running, allowing me to work without additional worries. My appreciation is directed to Prof. Jablonski for much scholarly advice and for the smooth and unproblematic collaboration during the preparation of the exercises. I give my great thanks to Dr. Marcus Meyerhöfer – I have really enjoyed the time with you, not only in the shared office but also outside the university, and no time spent together will be forgotten. Finally, I would like to thank my other colleagues: Michael Daum, Dr. Ilia Petrov, Dr. Rainer Lay, Dr. Sascha Müller, Dr. Udo Mayer, Florian Irmert, and Robert Nagy. They willingly spent time with me outside the office as well and brought me closer not only to German culture but also to the nightlife fun.

I am also grateful to all the students supervised by me, who did their study projects and master's theses in support of the RETAVIC project. Their contributions, including among other things discussions on architecture issues, writing code, benchmarking, and evaluation, allowed the concepts to be refined and the doubts to be clarified. Especially, the best-effort converter prototypes and then their real-time implementations have made proving the assumed hypotheses possible – great thanks to my developers and active discussion partners.