FFmpeg Documentation Table of Contents

Total pages: 16 · File type: PDF · Size: 1020 KB

1 Synopsis
2 Description
3 Detailed description 3.1 Filtering 3.1.1 Simple filtergraphs 3.1.2 Complex filtergraphs 3.2 Stream copy
4 Stream selection
5 Options 5.1 Stream specifiers 5.2 Generic options 5.3 AVOptions 5.4 Main options 5.5 Video Options 5.6 Advanced Video options 5.7 Audio Options 5.8 Advanced Audio options 5.9 Subtitle options 5.10 Advanced Subtitle options 5.11 Advanced options 5.12 Preset files 5.12.1 ffpreset files 5.12.2 avpreset files
6 Examples 6.1 Video and Audio grabbing 6.2 X11 grabbing 6.3 Video and Audio file format conversion
7 Syntax 7.1 Quoting and escaping 7.1.1 Examples 7.2 Date 7.3 Time duration 7.3.1 Examples 7.4 Video size 7.5 Video rate 7.6 Ratio 7.7 Color 7.8 Channel Layout
8 Expression Evaluation
9 Codec Options
10 Decoders
11 Video Decoders 11.1 rawvideo 11.1.1 Options
12 Audio Decoders 12.1 ac3 12.1.1 AC-3 Decoder Options 12.2 flac 12.2.1 FLAC Decoder options 12.3 ffwavesynth 12.4 libcelt 12.5 libgsm 12.6 libilbc 12.6.1 Options 12.7 libopencore-amrnb 12.8 libopencore-amrwb 12.9 libopus
13 Subtitles Decoders 13.1 dvbsub 13.1.1 Options 13.2 dvdsub 13.2.1 Options 13.3 libzvbi-teletext 13.3.1 Options
14 Encoders
15 Audio Encoders 15.1 aac 15.1.1 Options 15.2 ac3 and ac3_fixed 15.2.1 AC-3 Metadata 15.2.1.1 Metadata Control Options 15.2.1.2 Downmix Levels 15.2.1.3 Audio Production Information 15.2.1.4 Other Metadata Options 15.2.2 Extended Bitstream Information 15.2.2.1 Extended Bitstream Information - Part 1 15.2.2.2 Extended Bitstream Information - Part 2 15.2.3 Other AC-3 Encoding Options 15.2.4 Floating-Point-Only AC-3 Encoding Options 15.3 flac 15.3.1 Options 15.4 opus 15.4.1 Options 15.5 libfdk_aac 15.5.1 Options 15.5.2 Examples 15.6 libmp3lame 15.6.1 Options 15.7 libopencore-amrnb 15.7.1 Options 15.8 libopus 15.8.1 Option Mapping 15.9 libshine 15.9.1 Options 15.10 libtwolame 15.10.1 Options 15.11 libvo-amrwbenc 15.11.1 Options 15.12 libvorbis 15.12.1 Options 15.13 libwavpack 15.13.1 Options 15.14 mjpeg 15.14.1 Options 15.15 wavpack 15.15.1 Options 15.15.1.1 Shared options 15.15.1.2 Private options
16 Video Encoders 16.1 Hap 16.1.1 Options 16.2 jpeg2000 16.2.1 Options 16.3 libkvazaar 16.3.1 Options 16.4 libopenh264 16.4.1 Options 16.5 libtheora 16.5.1 Options 16.5.2 Examples 16.6 libvpx 16.6.1 Options 16.7 libwebp 16.7.1 Pixel Format 16.7.2 Options 16.8 libx264, libx264rgb 16.8.1 Supported Pixel Formats 16.8.2 Options 16.9 libx265 16.9.1 Options 16.10 libxvid 16.10.1 Options 16.11 mpeg2 16.11.1 Options 16.12 png 16.12.1 Private options 16.13 ProRes 16.13.1 Private Options for prores-ks 16.13.2 Speed considerations 16.14 QSV encoders 16.15 snow 16.15.1 Options 16.16 VAAPI encoders 16.17 vc2 16.17.1 Options
17 Subtitles Encoders 17.1 dvdsub 17.1.1 Options
18 Bitstream Filters 18.1 aac_adtstoasc 18.2 chomp 18.3 dca_core 18.4 dump_extra 18.5 eac3_core 18.6 extract_extradata 18.7 filter_units 18.8 hapqa_extract 18.9 h264_metadata 18.10 h264_mp4toannexb 18.11 h264_redundant_pps 18.12 hevc_metadata 18.13 hevc_mp4toannexb 18.14 imxdump 18.15 mjpeg2jpeg 18.16 mjpegadump 18.17 mov2textsub 18.18 mp3decomp 18.19 mpeg2_metadata 18.20 mpeg4_unpack_bframes 18.21 noise 18.22 null 18.23 remove_extra 18.24 text2movsub 18.25 trace_headers 18.26 vp9_superframe 18.27 vp9_superframe_split 18.28 vp9_raw_reorder
19 Format Options 19.1 Format stream specifiers
20 Demuxers 20.1 aa 20.2 applehttp 20.3 apng 20.4 asf 20.5 concat 20.5.1 Syntax 20.5.2 Options 20.5.3 Examples 20.6 dash 20.7 flv, live_flv 20.8 gif 20.9 hls 20.10 image2 20.10.1 Examples 20.11 libgme 20.12 libopenmpt 20.13 mov/mp4/3gp/QuickTime 20.14 mpegts 20.15 mpjpeg 20.16 rawvideo 20.17 sbg 20.18 tedcaptions
21 Muxers 21.1 aiff 21.1.1 Options 21.2 asf 21.2.1 Options 21.3 avi 21.3.1 Options 21.4 chromaprint 21.4.1 Options 21.5 crc 21.5.1 Examples 21.6 flv 21.7 dash 21.8 framecrc 21.8.1 Examples 21.9 framehash 21.9.1 Examples 21.10 framemd5 21.10.1 Examples 21.11 gif 21.12 hash 21.12.1 Examples 21.13 hls 21.13.1 Options 21.14 ico 21.15 image2 21.15.1 Examples 21.15.2 Options 21.16 matroska 21.16.1 Metadata 21.16.2 Options 21.17 md5 21.17.1 Examples 21.18 mov, mp4, ismv 21.18.1 Options 21.18.2 Example 21.18.3 Audible AAX 21.19 mp3 21.20 mpegts 21.20.1 Options 21.20.2 Example 21.21 mxf, mxf_d10 21.21.1 Options 21.22 null 21.23 nut 21.24 ogg 21.25 segment, stream_segment, ssegment 21.25.1 Options 21.25.2 Examples 21.26 smoothstreaming 21.27 fifo 21.27.1 Examples 21.28 tee 21.28.1 Examples 21.29 webm_dash_manifest 21.29.1 Options 21.29.2 Example 21.30 webm_chunk 21.30.1 Options 21.30.2 Example
22 Metadata
23 Protocol Options
24 Protocols 24.1 async 24.2 bluray 24.3 cache 24.4 concat 24.5 crypto 24.6 data 24.7 file 24.8 ftp 24.9 gopher 24.10 hls 24.11 http 24.11.1 HTTP Cookies 24.12 Icecast 24.13 mmst 24.14 mmsh 24.15 md5 24.16 pipe 24.17 prompeg 24.18 rtmp 24.19 rtmpe 24.20 rtmps 24.21 rtmpt 24.22 rtmpte 24.23 rtmpts 24.24 libsmbclient 24.25 libssh 24.26 librtmp rtmp, rtmpe, rtmps, rtmpt, rtmpte 24.27 rtp 24.28 rtsp 24.28.1 Examples 24.29 sap 24.29.1 Muxer 24.29.2 Demuxer 24.30 sctp 24.31 srt 24.32 srtp 24.33 subfile 24.34 tee 24.35 tcp 24.36 tls 24.37 udp 24.37.1 Examples 24.38 unix
25 Device Options
26 Input Devices 26.1 alsa 26.1.1 Options 26.2 android_camera 26.2.1 Options 26.3 avfoundation 26.3.1 Options 26.3.2 Examples 26.4 bktr 26.4.1 Options 26.5 decklink 26.5.1 Options 26.5.2 Examples 26.6 kmsgrab 26.6.1 Options 26.6.2 Examples 26.7 libndi_newtek 26.7.1 Options 26.7.2 Examples 26.8 dshow 26.8.1 Options 26.8.2 Examples 26.9 fbdev 26.9.1 Options 26.10 gdigrab 26.10.1 Options 26.11 iec61883 26.11.1 Options 26.11.2 Examples 26.12 jack 26.12.1 Options 26.13 lavfi 26.13.1 Options 26.13.2 Examples 26.14 libcdio 26.14.1 Options 26.15 libdc1394 26.16 openal 26.16.1 Options 26.16.2 Examples 26.17 oss 26.17.1 Options 26.18 pulse 26.18.1 Options 26.18.2 Examples 26.19 sndio 26.19.1 Options 26.20 video4linux2, v4l2 26.20.1 Options 26.21 vfwcap 26.21.1 Options 26.22 x11grab 26.22.1 Options
27 Output Devices 27.1 alsa 27.1.1 Examples 27.2 caca 27.2.1 Options 27.2.2 Examples 27.3 decklink 27.3.1 Options 27.3.2 Examples 27.4 libndi_newtek 27.4.1 Options 27.4.2 Examples 27.5 fbdev 27.5.1 Options 27.5.2 Examples 27.6 opengl 27.6.1 Options 27.6.2 Examples 27.7 oss 27.8 pulse 27.8.1 Options 27.8.2 Examples 27.9 sdl 27.9.1 Options 27.9.2 Interactive commands 27.9.3 Examples 27.10 sndio 27.11 xv 27.11.1 Options 27.11.2 Examples
28 Resampler Options
29 Scaler Options
30 Filtering Introduction
31 graph2dot
32 Filtergraph description 32.1 Filtergraph syntax 32.2 Notes on filtergraph escaping
33 Timeline editing
34 Options for filters with several inputs (framesync)
35 Audio Filters 35.1 acompressor 35.2 acontrast 35.3 acopy 35.4 acrossfade 35.4.1 Examples 35.5 acrusher 35.6 adelay 35.6.1 Examples 35.7 aecho 35.7.1 Examples 35.8 aemphasis 35.9 aeval 35.9.1 Examples 35.10 afade 35.10.1 Examples 35.11 afftfilt 35.11.1 Examples 35.12 afir 35.12.1 Examples 35.13 aformat 35.14 agate 35.15 aiir 35.15.1 Examples 35.16 alimiter 35.17 allpass 35.17.1 Commands 35.18 aloop 35.19 amerge 35.19.1 Examples 35.20 amix 35.21 anequalizer 35.21.1 Examples 35.21.2 Commands 35.22 anull 35.23 apad 35.23.1 Examples 35.24 aphaser 35.25 apulsator 35.26 aresample 35.26.1 Examples 35.27 areverse 35.27.1 Examples 35.28 asetnsamples 35.29 asetrate 35.30 ashowinfo 35.31 astats 35.32 atempo 35.32.1 Examples 35.33 atrim 35.34 bandpass 35.34.1 Commands 35.35 bandreject 35.35.1 Commands 35.36 bass 35.36.1 Commands 35.37 biquad 35.37.1 Commands 35.38 bs2b 35.39 channelmap 35.39.1 Examples 35.40 channelsplit 35.40.1 Examples 35.41 chorus 35.41.1 Examples 35.42 compand 35.42.1 Examples 35.43 compensationdelay 35.44 crossfeed 35.45 crystalizer 35.46 dcshift 35.47 drmeter 35.48 dynaudnorm 35.49 earwax 35.50 equalizer 35.50.1 Examples 35.50.2 Commands 35.51 extrastereo 35.52 firequalizer 35.52.1 Examples 35.53 flanger 35.54 haas 35.55 hdcd 35.56 headphone 35.56.1 Examples 35.57 highpass 35.57.1 Commands 35.58 join 35.59 ladspa 35.59.1 Examples 35.59.2 Commands 35.60 loudnorm 35.61 lowpass 35.61.1 Examples 35.61.2 Commands 35.62 lv2 35.62.1 Examples 35.63 mcompand 35.64 pan 35.64.1 Mixing examples 35.64.2 Remapping examples 35.65 replaygain 35.66 resample 35.67 rubberband 35.68 sidechaincompress 35.68.1 Examples 35.69 sidechaingate 35.70 silencedetect 35.70.1 Examples 35.71 silenceremove 35.71.1 Examples 35.72 sofalizer 35.72.1 Examples 35.73 stereotools 35.73.1 Examples 35.74 stereowiden 35.75 superequalizer 35.76 surround 35.77 treble 35.77.1 Commands 35.78 tremolo 35.79 vibrato 35.80 volume 35.80.1 Commands 35.80.2 Examples 35.81 volumedetect 35.81.1 Examples
36 Audio Sources 36.1 abuffer 36.1.1 Examples 36.2 aevalsrc 36.2.1 Examples 36.3 anullsrc 36.3.1 Examples 36.4 flite 36.4.1 Examples 36.5 anoisesrc 36.5.1 Examples 36.6 hilbert 36.7 sine 36.7.1 Examples
37 Audio Sinks 37.1 abuffersink 37.2 anullsink
38 Video Filters 38.1 alphaextract 38.2 alphamerge 38.3 ass 38.4 atadenoise 38.5 avgblur 38.6 bbox 38.7 bitplanenoise 38.8 blackdetect 38.9 blackframe 38.10 blend, tblend 38.10.1 Examples 38.11 boxblur 38.11.1 Examples 38.12 bwdif 38.13 chromakey 38.13.1 Examples 38.14 ciescope 38.15 codecview 38.15.1 Examples 38.16 colorbalance 38.16.1 Examples 38.17 colorkey 38.17.1 Examples 38.18 colorlevels 38.18.1 Examples 38.19 colorchannelmixer 38.19.1 Examples 38.20 colormatrix 38.21 colorspace 38.22 convolution 38.22.1 Examples 38.23 convolve 38.24 copy 38.25 coreimage 38.25.1 Examples 38.26 crop 38.26.1 Examples 38.26.2 Commands 38.27 cropdetect 38.28 curves 38.28.1 Examples
Recommended publications
  • Edge Detection of Noisy Images Using 2-D Discrete Wavelet Transform Venkata Ravikiran Chaganti
    Florida State University Libraries, Electronic Theses, Treatises and Dissertations, The Graduate School, 2005. Edge Detection of Noisy Images Using 2-D Discrete Wavelet Transform, by Venkata Ravikiran Chaganti. Follow this and additional works at the FSU Digital Library. For more information, please contact [email protected]
    THE FLORIDA STATE UNIVERSITY, FAMU-FSU COLLEGE OF ENGINEERING. EDGE DETECTION OF NOISY IMAGES USING 2-D DISCRETE WAVELET TRANSFORM, BY VENKATA RAVIKIRAN CHAGANTI. A thesis submitted to the Department of Electrical Engineering in partial fulfillment of the requirements for the degree of Master of Science. Degree Awarded: Spring Semester, 2005. The members of the committee approve the thesis of Venkata R. Chaganti, defended on April 11th, 2005: Simon Y. Foo, Professor Directing Thesis; Anke Meyer-Baese, Committee Member; Rodney Roberts, Committee Member. Approved: Leonard J. Tung, Chair, Department of Electrical and Computer Engineering; Ching-Jen Chen, Dean, FAMU-FSU College of Engineering. The Office of Graduate Studies has verified and approved the above named committee members.
    Dedicated to my father, the late Dr. Rama Rao, my mother, brother and sister-in-law, without whom this would never have been possible.
    ACKNOWLEDGEMENTS: I thank my thesis advisor, Dr. Simon Foo, for his help, advice and guidance during my M.S. and my thesis. I also thank Dr. Anke Meyer-Baese and Dr. Rodney Roberts for serving on my thesis committee. I would like to thank my family for their constant support and encouragement during the course of my studies. I would like to acknowledge support from the Department of Electrical Engineering, FAMU-FSU College of Engineering.
  • Study and Comparison of Different Edge Detectors for Image Segmentation
    Global Journal of Computer Science and Technology: Graphics & Vision, Volume 12 Issue 13 Version 1.0, Year 2012. Type: Double Blind Peer Reviewed International Research Journal. Publisher: Global Journals Inc. (USA). Online ISSN: 0975-4172 & Print ISSN: 0975-4350. Study and Comparison of Different Edge Detectors for Image Segmentation. By Pinaki Pratim Acharjya, Ritaban Das & Dibyendu Ghoshal, Bengal Institute of Technology and Management, Santiniketan, West Bengal, India. Abstract - Edge detection is a very important concept in image processing and computer vision. Edge detection is at the forefront of image processing for object detection, so it is crucial to have a good understanding of edge detection operators. In the present study, comparative analyses of different edge detection operators in image processing are presented. It has been observed from the present study that the performance of the Canny edge detection operator is much better than Sobel, Roberts, Prewitt, Zero crossing and LoG (Laplacian of Gaussian) with respect to image appearance and object boundary localization. The software tool that has been used is MATLAB. Keywords: Edge Detection, Digital Image Processing, Image segmentation. GJCST-F Classification: I.4.6. Strictly as per the compliance and regulations of: © 2012 Pinaki Pratim Acharjya, Ritaban Das & Dibyendu Ghoshal. This is a research/review paper, distributed under the terms of the Creative Commons Attribution-Noncommercial 3.0 Unported License (http://creativecommons.org/licenses/by-nc/3.0/), permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
  • Feature-Based Image Comparison and Its Application in Wireless Visual Sensor Networks
    University of Tennessee, Knoxville. TRACE: Tennessee Research and Creative Exchange, Doctoral Dissertations, Graduate School, 5-2011. Feature-based Image Comparison and Its Application in Wireless Visual Sensor Networks. Yang Bai, University of Tennessee Knoxville, Department of EECS, [email protected] Follow this and additional works at: https://trace.tennessee.edu/utk_graddiss Part of the Other Computer Engineering Commons, and the Robotics Commons.
    Recommended Citation: Bai, Yang, "Feature-based Image Comparison and Its Application in Wireless Visual Sensor Networks." PhD diss., University of Tennessee, 2011. https://trace.tennessee.edu/utk_graddiss/946
    This Dissertation is brought to you for free and open access by the Graduate School at TRACE: Tennessee Research and Creative Exchange. It has been accepted for inclusion in Doctoral Dissertations by an authorized administrator of TRACE: Tennessee Research and Creative Exchange. For more information, please contact [email protected].
    To the Graduate Council: I am submitting herewith a dissertation written by Yang Bai entitled "Feature-based Image Comparison and Its Application in Wireless Visual Sensor Networks." I have examined the final electronic copy of this dissertation for form and content and recommend that it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, with a major in Computer Engineering. Hairong Qi, Major Professor. We have read this dissertation and recommend its acceptance: Mongi A. Abidi, Qing Cao, Steven Wise. Accepted for the Council: Carolyn R. Hodges, Vice Provost and Dean of the Graduate School. (Original signatures are on file with official student records.)
    Feature-based Image Comparison and Its Application in Wireless Visual Sensor Networks. A Dissertation Presented for the Doctor of Philosophy Degree, The University of Tennessee, Knoxville. Yang Bai, May 2011. Copyright © 2011 by Yang Bai. All rights reserved.
  • Computer Vision: Edge Detection
    Edge Detection. Edge detection: convert a 2D image into a set of curves; extracts salient features of the scene; more compact than pixels.
    Origin of edges: surface normal discontinuity, depth discontinuity, surface color discontinuity, illumination discontinuity. Edges are caused by a variety of factors.
    Edge detection: how can you tell that a pixel is on an edge? Profiles of image intensity edges.
    Edge detection: 1. Detection of short linear edge segments (edgels). 2. Aggregation of edgels into extended edges (maybe parametric description).
    Edgel detection: difference operators; parametric-model matchers.
    Edge is where change occurs. Change is measured by the derivative in 1D: at the biggest change, the derivative has maximum magnitude, or the 2nd derivative is zero.
    Image gradient: the gradient of an image points in the direction of most rapid change in intensity. The gradient direction is given by its angle (how does this relate to the direction of the edge?). The edge strength is given by the gradient magnitude.
    The discrete gradient: how can we differentiate a digital image f[x,y]? Option 1: reconstruct a continuous image, then take the gradient. Option 2: take a discrete derivative (finite difference). How would you implement this as a cross-correlation?
    The Sobel operator: better approximations of the derivatives exist; the Sobel operators below are very commonly used:

        Gx:             Gy:
        -1  0  1         1  2  1
        -2  0  2         0  0  0
        -1  0  1        -1 -2 -1

    The standard definition of the Sobel operator omits the 1/8 term; this doesn't make a difference for edge detection, but the 1/8 term is needed to get the right gradient value.
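The Sobel discussion above can be sketched in pure Python. The 3x3 kernels follow the slide; the helper names and the 5x5 step-edge test image are illustrative assumptions:

```python
# Minimal sketch of the Sobel operator, implemented as a cross-correlation
# as the slide suggests. SOBEL_X/SOBEL_Y match the kernels shown above.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
SOBEL_Y = [[ 1,  2,  1],
           [ 0,  0,  0],
           [-1, -2, -1]]

def correlate3x3(img, kernel, x, y):
    """Cross-correlate a 3x3 kernel with img, centred on interior pixel (x, y)."""
    return sum(kernel[i][j] * img[y + i - 1][x + j - 1]
               for i in range(3) for j in range(3))

def sobel_gradient(img, x, y, normalize=False):
    """Return (gx, gy); the optional 1/8 factor recovers the true gradient scale."""
    gx = correlate3x3(img, SOBEL_X, x, y)
    gy = correlate3x3(img, SOBEL_Y, x, y)
    if normalize:
        gx, gy = gx / 8.0, gy / 8.0
    return gx, gy

# A vertical step edge: intensity jumps from 0 to 1 between columns 1 and 2.
step = [[0, 0, 1, 1, 1] for _ in range(5)]
print(sobel_gradient(step, 2, 2))                  # strong horizontal response, zero vertical
print(sobel_gradient(step, 2, 2, normalize=True))  # same direction, rescaled by the 1/8 term
```

On the step edge the horizontal response dominates and the vertical one is zero; dividing by 8 only rescales the magnitude, which is why omitting the 1/8 term doesn't change which pixels are flagged as edges.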
  • Features Extraction in Context Based Image Retrieval
    =================================================================== Engineering & Technology in India, www.engineeringandtechnologyinindia.com, Vol. 1:2 March 2016 ===================================================================
    FEATURES EXTRACTION IN CONTEXT BASED IMAGE RETRIEVAL. A project report submitted by ARAVINDH A S and J CHRISTY SAMUEL in partial fulfilment for the award of the degree of BACHELOR OF TECHNOLOGY in ELECTRONICS AND COMMUNICATION ENGINEERING, under the supervision of Mrs. M. A. P. MANIMEKALAI, M.E. SCHOOL OF ELECTRICAL SCIENCES, KARUNYA UNIVERSITY (Karunya Institute of Technology and Sciences) (Declared as Deemed to be University under Sec-3 of the UGC Act, 1956), Karunya Nagar, Coimbatore - 641 114, Tamilnadu, India, APRIL 2015.
    BONA FIDE CERTIFICATE: Certified that this project report "Features Extraction in Context Based Image Retrieval" is the bona fide work of ARAVINDH A S (UR11EC014) and CHRISTY SAMUEL J. (UR11EC026), who carried out the project work under my supervision during the academic year 2014-2015. Mrs. M. A. P. MANIMEKALAI, M.E., Supervisor, Assistant Professor, Department of ECE, School of Electrical Sciences; Dr. SHOBHA REKH, M.E., Ph.D., Head of the Department, Professor, Department of ECE, School of Electrical Sciences.
    ACKNOWLEDGEMENT: First and foremost, we praise and thank ALMIGHTY GOD whose blessings have bestowed in us the will power and confidence to carry out our project. We are grateful to our most respected Founder (late) Dr.
  • Computer Vision Based Human Detection Md Ashikur
    Computer Vision Based Human Detection. Md Ashikur Rahman. To cite this version: Md Ashikur Rahman. Computer Vision Based Human Detection. International Journal of Engineering and Information Systems (IJEAIS), 2017, 1 (5), pp.62 - 85. hal-01571292. HAL Id: hal-01571292, https://hal.archives-ouvertes.fr/hal-01571292, submitted on 2 Aug 2017. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
    International Journal of Engineering and Information Systems (IJEAIS), ISSN: 2000-000X, Vol. 1 Issue 5, July 2017, Pages: 62-85. Computer Vision Based Human Detection. Md. Ashikur Rahman, Dept. of Computer Science and Engineering, Shaikh Burhanuddin Post Graduate College, under National University, Dhaka, Bangladesh. [email protected]
    Abstract: Human detection from still images is a challenging and important task for computer vision researchers. By detecting humans, intelligent vehicles can control themselves or can inform the driver using some alarming techniques. Human detection is one of the most important parts of image processing. A computer system is trained on various images, and after comparing the input image with the previously stored database, a machine can identify the human to be tested. This paper describes an approach to detect different shapes of humans using image processing.
  • Ffmpeg Filters Documentation Table of Contents
    FFmpeg Filters Documentation Table of Contents
    1 Description
    2 Filtering Introduction
    3 graph2dot
    4 Filtergraph description 4.1 Filtergraph syntax 4.2 Notes on filtergraph escaping
    5 Timeline editing
    6 Options for filters with several inputs (framesync)
    7 Audio Filters 7.1 acompressor 7.2 acopy 7.3 acrossfade 7.3.1 Examples 7.4 acrusher 7.5 adelay 7.5.1 Examples 7.6 aecho 7.6.1 Examples 7.7 aemphasis 7.8 aeval 7.8.1 Examples 7.9 afade 7.9.1 Examples 7.10 afftfilt 7.10.1 Examples 7.11 afir 7.11.1 Examples 7.12 aformat 7.13 agate 7.14 alimiter 7.15 allpass 7.16 aloop 7.17 amerge 7.17.1 Examples 7.18 amix 7.19 anequalizer 7.19.1 Examples 7.19.2 Commands 7.20 anull 7.21 apad 7.21.1 Examples 7.22 aphaser 7.23 apulsator 7.24 aresample 7.24.1 Examples 7.25 areverse 7.25.1 Examples 7.26 asetnsamples 7.27 asetrate 7.28 ashowinfo 7.29 astats 7.30 atempo 7.30.1 Examples 7.31 atrim 7.32 bandpass 7.33 bandreject 7.34 bass 7.35 biquad 7.36 bs2b 7.37 channelmap 7.38 channelsplit 7.39 chorus 7.39.1 Examples 7.40 compand 7.40.1 Examples 7.41 compensationdelay 7.42 crossfeed 7.43 crystalizer 7.44 dcshift 7.45 dynaudnorm 7.46 earwax 7.47 equalizer 7.47.1 Examples 7.48 extrastereo 7.49 firequalizer 7.49.1 Examples 7.50 flanger 7.51 haas 7.52 hdcd 7.53 headphone 7.53.1 Examples 7.54 highpass 7.55 join 7.56 ladspa 7.56.1 Examples 7.56.2 Commands 7.57 loudnorm 7.58 lowpass 7.58.1 Examples 7.59 pan 7.59.1 Mixing examples 7.59.2 Remapping examples 7.60 replaygain 7.61 resample 7.62 rubberband 7.63 sidechaincompress 7.63.1 Examples 7.64 sidechaingate 7.65 silencedetect
  • Edge Detection 1-D &
    Edge Detection. CS485/685 Computer Vision, Dr. George Bebis; modified by Guido Gerig for CS/BIOEN 6640, U of Utah. www.cse.unr.edu/~bebis/CS485/Lectures/EdgeDetection.ppt
    Edge detection is part of segmentation: image, human segmentation, gradient magnitude. Berkeley segmentation database: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/
    Goal of edge detection: produce a line "drawing" of a scene from an image of that scene.
    Why is edge detection useful? Important features can be extracted from the edges of an image (e.g., corners, lines, curves). These features are used by higher-level computer vision algorithms (e.g., recognition).
    Modeling intensity changes. Step edge: the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side. Ramp edge: a step edge where the intensity change is not instantaneous but occurs over a finite distance. Ridge edge: the image intensity abruptly changes value but then returns to the starting value within some short distance (usually generated by lines). Roof edge: a ridge edge where the intensity change is not instantaneous but occurs over a finite distance (usually generated by the intersection of two surfaces).
    Edge detection using derivatives. Often, points that lie on an edge are detected by: (1) detecting the local maxima or minima of the first derivative; (2) detecting the zero-crossings of the second derivative.
    Image derivatives. How can we differentiate a digital image? Option 1: reconstruct a continuous image, f(x,y), then compute the derivative.
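The two derivative-based detection rules above can be illustrated in 1-D; the ramp-edge profile and helper names below are assumed toy examples:

```python
# An edge appears as (1) an extremum of the 1st derivative and
# (2) a zero-crossing of the 2nd derivative. Finite differences stand in
# for derivatives on the discrete profile.

def diff(seq):
    """Finite-difference approximation of the derivative."""
    return [b - a for a, b in zip(seq, seq[1:])]

def zero_crossings(seq):
    """Indices where the sequence changes sign (including an exact zero)."""
    hits = []
    for i in range(1, len(seq) - 1):
        if seq[i] == 0 and seq[i - 1] * seq[i + 1] < 0:
            hits.append(i)
        elif seq[i] * seq[i + 1] < 0:
            hits.append(i)
    return hits

profile = [0, 0, 1, 3, 5, 6, 6]   # a ramp edge
d1 = diff(profile)                # 1st derivative: peaks over the ramp
d2 = diff(d1)                     # 2nd derivative: crosses zero at the edge centre
print(max(d1), zero_crossings(d2))
```

The first derivative peaks over the ramp, and the second derivative crosses zero at the ramp's centre: both rules locate the same edge.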
  • Edge Detection (Trucco, Chapt 4 and Jain Et Al., Chapt 5)
    Edge detection (Trucco, Chapt 4 and Jain et al., Chapt 5)
    Definition of edges: edges are significant local changes of intensity in an image. Edges typically occur on the boundary between two different regions in an image.
    Goal of edge detection: produce a line drawing of a scene from an image of that scene. Important features can be extracted from the edges of an image (e.g., corners, lines, curves). These features are used by higher-level computer vision algorithms (e.g., recognition).
    What causes intensity changes? Various physical events cause intensity changes. Geometric events: object boundary (discontinuity in depth and/or surface color and texture); surface boundary (discontinuity in surface orientation and/or surface color and texture). Non-geometric events: specularity (direct reflection of light, such as a mirror); shadows (from other objects or from the same object); inter-reflections.
    Edge descriptors. Edge normal: unit vector in the direction of maximum intensity change. Edge direction: unit vector perpendicular to the edge normal. Edge position or center: the image position at which the edge is located. Edge strength: related to the local image contrast along the normal.
    Modeling intensity changes: edges can be modeled according to their intensity profiles. Step edge: the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side. Ramp edge: a step edge where the intensity change is not instantaneous but occurs over a finite distance. Ridge edge: the image intensity abruptly changes value but then returns to the starting value within some short distance (usually generated by lines).
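The edge descriptors listed above can be computed directly from a gradient vector; a minimal sketch, assuming an illustrative gradient (gx, gy) = (3, 4) and a nonzero gradient everywhere it is called:

```python
import math

def edge_descriptors(gx, gy):
    """Derive the descriptors above from an image gradient (gx, gy) != (0, 0)."""
    strength = math.hypot(gx, gy)            # edge strength: gradient magnitude
    normal = (gx / strength, gy / strength)  # unit vector along maximum intensity change
    direction = (-normal[1], normal[0])      # unit vector perpendicular to the normal
    return strength, normal, direction

strength, normal, direction = edge_descriptors(3.0, 4.0)
dot = normal[0] * direction[0] + normal[1] * direction[1]
print(strength, dot)  # prints 5.0 0.0 -- direction is perpendicular to the normal
```

The zero dot product checks the definition: the edge direction is perpendicular to the edge normal, while the strength is just the gradient magnitude.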
  • Sift Based Image Stitching
    Journal of Computing Technologies (2278 – 3814) / # 14 / Volume 3 Issue 3. SIFT BASED IMAGE STITCHING. Amol Tirlotkar, Swapnil S. Thakur, Gaurav Kamble, Swati Shishupal, Department of Information Technology, Atharva College of Engineering, University of Mumbai, India. {amoltirlotkar, thakur.swapnil09, gauravkamble293, shishupal.swati}@gmail.com
    Abstract - This paper concerns the problem of image stitching, which stitches a set of images together to form one large image. The process of generating one large panoramic image from a set of small overlapping images is called image stitching. A stitching algorithm implements a seamless stitching, or connection, between two images with an overlapping part, to get better resolution or viewing angle in the image. Stitched images are used in various applications such as creating geographic maps or in medical fields. Image stitching is one of the methods that can be used to create a large image by the use of overlapping FOV [8]. The drawback is that the memory requirement and the amount of computation for image stitching are very high. In this project, this problem is resolved by performing the image stitching with a reduced number of required key points. First, the stitching key points are determined by transmitting two reference images which are to be merged together. It uses a method based on invariant features to stitch images, which includes image matching and image merging. Image stitching comprises different stages. Most of the existing methods of image stitching either produce a 'rough' stitch or produce a ghosting or blur effect. A region-based image stitching algorithm detects the image edges in preparation for the extraction of feature points.
  • Edge Detection CS 111
    Edge Detection, CS 111. Slides from Cornelia Fermüller and Marc Pollefeys.
    Edge detection: convert a 2D image into a set of curves; extracts salient features of the scene; more compact than pixels.
    Origin of edges: surface normal discontinuity, depth discontinuity, surface color discontinuity, illumination discontinuity. Edges are caused by a variety of factors.
    Edge detection: 1. Detection of short linear edge segments (edgels). 2. Aggregation of edgels into extended edges. 3. Maybe parametric description.
    Edge is where change occurs. Change is measured by the derivative in 1D: at the biggest change, the derivative has maximum magnitude, or the 2nd derivative is zero.
    Image gradient: the gradient of an image points in the direction of most rapid change in intensity. The gradient direction is perpendicular to the edge. The edge strength is given by the magnitude.
    How to compute a discrete gradient? By finite differences: f(x+1,y) – f(x,y) and f(x,y+1) – f(x,y).
    The Sobel operator: better approximations of the derivatives exist; the Sobel operators below are very commonly used:

        Gx:             Gy:
        -1  0  1         1  2  1
        -2  0  2         0  0  0
        -1  0  1        -1 -2 -1

    The standard definition of the Sobel operator omits the 1/8 term; this doesn't make a difference for edge detection, but the 1/8 term is needed to get the right gradient value.
    Gradient operators: (a) Roberts' cross operator; (b) 3x3 Prewitt operator; (c) Sobel operator; (d) 4x4 Prewitt operator.
    Finite differences respond to noise; increasing noise (zero-mean additive Gaussian noise) degrades them. Solution: smooth first, then look for peaks in the derivative.
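The "smooth first" remedy above can be demonstrated in 1-D. The binomial [1, 2, 1]/4 smoothing filter and the spike-plus-step test signal are assumed for illustration:

```python
# Smoothing before differencing suppresses the derivative response to an
# isolated noise spike while the response to the real edge survives.

def diff(seq):
    """Finite-difference derivative."""
    return [b - a for a, b in zip(seq, seq[1:])]

def smooth(seq):
    """3-tap binomial ([1, 2, 1]/4) smoothing on interior samples."""
    return [(seq[i - 1] + 2 * seq[i] + seq[i + 1]) / 4
            for i in range(1, len(seq) - 1)]

signal = [0, 0, 0, 5, 0, 0, 10, 10, 10, 10]   # noise spike at i=3, real edge at i=6

raw = diff(signal)
smoothed = diff(smooth(signal))

# Compare the spike's derivative response before and after smoothing,
# and confirm the edge still gives the largest response overall.
print(max(abs(d) for d in raw[:5]), max(abs(d) for d in smoothed[:4]))
print(max(raw), max(smoothed))
```

After smoothing, the spike's response drops to a quarter of its raw value, while the step edge still produces the dominant peak, which is exactly why derivative-based detectors smooth first.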
  • Edge Detection
    Edge detection. Digital Image Processing, K. Pratt, Chapter 15.
    Edge detection goal: identify objects in images, but also feature extraction, multiscale analysis, 3D reconstruction, motion recognition, image restoration, registration.
    Classical definition of the edge detection problem: localization of large local changes in the grey-level image → large grey-level gradients. This definition does not apply to apparent edges, which require a more complex definition. Extension to color images.
    Contours are very important perceptual cues! They provide a first saliency map for the interpretation of image semantics.
    What do we detect? Depending on the impulse response of the filter, we can detect different types of grey-level discontinuities: isolated points (pixels), lines with a predefined slope, generic contours. However, edge detection implies the evaluation of the local gradient and corresponds to a (directional) derivative.
    Detection of discontinuities: point detection (detected point); line detection (masks R1, R2, R3, R4); line detection example.
    Edge detection: image locations with abrupt intensity changes → differentiation → high-pass filtering. Intensity profile f[m,n] and its derivative ∂f[m,n]/∂n.
    Types of edges: continuous-domain edge models; 2D discrete-domain single-pixel spot models; discrete-domain edge models; profiles of image intensity edges.
    Types of edge detectors: unsupervised or autonomous, relying only on local image features.
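The point-detection case above can be sketched with a standard textbook Laplacian-style mask (the 8-centre mask and the toy images are assumptions for illustration, not taken from this excerpt):

```python
# An isolated bright pixel in a flat region fires the mask strongly,
# while a perfectly flat neighbourhood gives exactly zero response.

POINT_MASK = [[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]]

def respond(img, x, y):
    """Mask response at an interior pixel (x, y)."""
    return sum(POINT_MASK[i][j] * img[y + i - 1][x + j - 1]
               for i in range(3) for j in range(3))

spot = [[1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 9, 1, 1],   # isolated bright point at (2, 2)
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1]]
flat = [[1] * 5 for _ in range(5)]

print(respond(spot, 2, 2), respond(flat, 2, 2))  # prints 64 0
```

Thresholding the absolute response then isolates the point: the mask's coefficients sum to zero, so any constant region is suppressed and only the impulse-like discontinuity survives.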