Neural Video Coding Using Multiscale Motion Compensation and Spatiotemporal Context Model

Total Pages: 16

File Type: pdf, Size: 1020 KB

Neural Video Coding using Multiscale Motion Compensation and Spatiotemporal Context Model

Haojie Liu (1), Ming Lu (1), Zhan Ma (1), Fan Wang (2), Zhihuang Xie (2), Xun Cao (1), and Yao Wang (3). (1) Nanjing University, (2) OPPO Inc., (3) New York University

Abstract—Over the past two decades, traditional block-based video coding has made remarkable progress and spawned a series of well-known standards such as MPEG-4, H.264/AVC and H.265/HEVC. On the other hand, deep neural networks (DNNs) have shown their powerful capacity for visual content understanding, feature extraction and compact representation. Some previous works have explored learnt video coding algorithms in an end-to-end manner, showing great potential compared with traditional methods. In this paper, we propose an end-to-end deep neural video coding framework (NVC), which uses variational autoencoders (VAEs) with joint spatial and temporal prior aggregation (PA) to exploit the correlations in intra-frame pixels, inter-frame motions and inter-frame compensation residuals, respectively. Novel features of NVC include: 1) to estimate and compensate motion over a large range of magnitudes, we propose an unsupervised multiscale motion compensation network (MS-MCN) together with a pyramid decoder in the VAE for coding motion features that generates multiscale flow fields; 2) we design a novel adaptive spatiotemporal context model for efficient entropy coding of motion information; 3) we adopt nonlocal attention modules (NLAM) at the bottlenecks of the VAEs for implicit adaptive feature extraction and activation, leveraging their high transformation capacity and unequal weighting with joint global and local information; and 4) we introduce multi-module optimization and a multi-frame training strategy to minimize the temporal error propagation among P-frames. NVC is evaluated under the low-delay causal settings and compared with H.265/HEVC, H.264/AVC and other learnt video compression methods following the common test conditions, demonstrating consistent gains across all popular test sequences for both PSNR and MS-SSIM distortion metrics.

Index Terms—Neural video coding, neural network, multiscale motion compensation, pyramid decoder, multiscale compressed flows, nonlocal attention, spatiotemporal priors, temporal error propagation.

arXiv:2007.04574v1 [eess.IV] 9 Jul 2020

I. INTRODUCTION

Compressed video, a dominant media representation across the entire Internet, occupies more than 70% of total traffic volume nowadays for entertainment (e.g., YouTube), productivity (e.g., tele-education), security (e.g., surveillance), etc., and it keeps growing explosively. Thus, in pursuit of efficient storage and network transmission, and pristine quality of experience (QoE) with higher-resolution content (e.g., 2K, 4K and even 8K video at frame rates of 30 Hz, 60 Hz or more), a better compression approach is greatly and continuously desired. In principle, the key problem in video coding is how to efficiently exploit visual signal redundancy using prior information, spatially (e.g., intra prediction, transform), temporally (e.g., inter prediction), and statistically (e.g., entropy context adaptation), to obtain more compact representations with less bit rate consumption at the same reconstruction quality. This is well formulated as the minimization of the Lagrangian cost J in rate-distortion optimization (RDO), which is widely adopted in existing video coders, e.g.,

    min J = R + λ · D,    (1)

where R and D represent the compressed bit rate and reconstruction distortion, respectively.
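The RDO rule in (1) is what a conventional encoder applies per block: for each candidate coding mode it measures the bits spent R and the resulting distortion D, and keeps the mode with the smallest Lagrangian cost. A minimal sketch in Python; the candidate modes and their rate/distortion numbers are made up for illustration, not taken from any particular codec:

```python
# Minimal sketch of Lagrangian rate-distortion optimized (RDO) mode selection.
# Candidate names, rates, and distortions below are illustrative only.

def rdo_select(candidates, lam):
    """Return the (mode, cost) pair minimizing J = R + lambda * D.

    candidates: iterable of (name, rate_bits, distortion) tuples.
    lam: Lagrange multiplier trading rate against distortion.
    """
    best = None
    for name, rate, dist in candidates:
        cost = rate + lam * dist          # Lagrangian cost J
        if best is None or cost < best[1]:
            best = (name, cost)
    return best

if __name__ == "__main__":
    # Hypothetical per-block measurements: (mode, bits, MSE distortion).
    modes = [
        ("intra_dc",        320, 41.0),
        ("intra_angular_7", 350, 33.5),
        ("inter_16x16",     120, 30.2),
        ("inter_8x8",       260, 24.8),
    ]
    for lam in (0.5, 5.0, 50.0):          # larger lambda favors fewer bits
        print(lam, rdo_select(modes, lam))
```

Sweeping λ shows the usual trade-off: a small λ picks low-distortion modes, while a large λ picks low-rate ones.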
Motivation. Over the past three decades, video compression technologies have been evolving and adapting constantly, with coding efficiency improved by several folds, mostly driven by the efforts of experts in the ISO/IEC Moving Picture Experts Group (MPEG), the ITU-T Video Coding Experts Group (VCEG) and their joint task forces. This has led to several popular video coding standards, including H.264/Advanced Video Coding (H.264/AVC) [1], High-Efficiency Video Coding (HEVC) [2] and the emerging Versatile Video Coding (VVC) [3]. These standards share a similar (recursive) block-based hybrid prediction/transform framework in which individual coding tools, such as intra/inter prediction, integer transforms, and context-adaptive entropy coding, are intensively handcrafted to optimize the overall efficiency. Among them, pixel-domain predictive coding is one of the most important factors, contributing the major performance gains [4]. For example, pixel-domain intra prediction was officially adopted into H.264/AVC and later extended with support for recursive block sizes and abundant prediction directions to efficiently exploit spatial structures; recursive and even non-square blocks are extensively used in inter prediction to remove temporal coherency. Basically, conventional video coding methods leverage spatiotemporal pixel neighbors (as well as their linear combinations) for predictive signal construction, resulting in corresponding residuals for subsequent transform, quantization, and entropy coding towards a more compact representation. The optimal coding mode with appropriate block size and orientation (e.g., intra direction, inter motion vectors) is selected via a computational RDO process, utilizing the ℓ1-norm (e.g., mean absolute error, MAE) or ℓ2-norm (e.g., mean squared error, MSE) as the distortion metric.

Though recursive block-based pixel prediction has shown great success, this is mainly due to the hardware advancements of past decades, by which we can exhaustively search for the best prediction. It is, however, more and more challenging to simply trade computational resources for efficiency improvement because Moore's Law no longer holds [5]. This therefore calls for innovative methodologies and architectures of video coding to further improve the coding efficiency, in response to the ever-increasing user requirements for video resolution and quality. Such a pixel-prediction strategy, either intra or inter, mostly relies on the physical coherence of the video signal and applies mathematical tools (e.g., linear weighting, orthogonal transform, Lagrangian optimization) for signal energy compaction.

TABLE I: Abbreviations and Notations
NLAM: Nonlocal Attention Module
LAM: Local Attention Module
MS-MCN: Multiscale Motion Compensation Network
SS-MCN: Single-scale Motion Compensation Network
PA: Prior Aggregation
MCF: Multiscale Compressed Flow
neuro-Intra: Neural intra coding
neuro-Motion: Neural motion coding
neuro-Res: Neural residual coding
MSE: Mean Squared Error
MAE: Mean Absolute Error
PSNR: Peak Signal-to-Noise Ratio
MS-SSIM: Multiscale Structural Similarity
Our Approach. We propose an end-to-end neural video coding framework (NVC), which codes intra-frame pixels (called neuro-Intra), inter-frame motion (called neuro-Motion), and inter-frame residual (called neuro-Res) using separate variational autoencoders (VAEs), as shown in Fig. 1. A multiscale motion compensation network (MS-MCN) works together with neuro-Motion to generate multiscale optical flows and perform multiscale motion-compensated prediction of the current frame from the previous frame. The sparse image differences between the past and present frames, i.e., the residuals, are then encoded to obtain the final reconstruction. All three VAEs, i.e., neuro-Intra, neuro-Motion and neuro-Res for compressing intra pixels, inter motion and inter residuals, are engineered together with MS-MCN in an end-to-end learning manner. Note that neuro-Intra takes a native image frame as input; neuro-Motion uses the current frame and the past reconstructed frame to generate multiscale compressed flows (MCFs); MS-MCN uses these generated MCFs for motion compensation to obtain the inter-predicted frame; and neuro-Res encodes the difference between the current frame and its prediction for the final reconstruction. Additionally, joint spatiotemporal and hyper priors are aggregated for efficient and adaptive context modeling of latent features to improve entropy coding efficiency for the motion field.

Fig. 1: Neural Video Coding (NVC). The modules neuro-Intra, neuro-Res, and neuro-Motion follow the general model architecture in Fig. 2 for efficient representations of intra pixels, displaced inter residuals, and inter motions. neuro-Motion uses a pyramid decoder as its main decoder, as discussed in Sec. III-D2.

We have evaluated the efficiency of the proposed NVC under the low-delay causal settings against the well-known HEVC, H.264/AVC and other learnt video compression methods following the common test conditions. NVC demonstrates leading performance with consistent gains across all popular test sequences for both PSNR (peak signal-to-noise ratio) and MS-SSIM (multiscale structural similarity) [6] distortion metrics. Using H.264/AVC as a common anchor, our NVC presents 35% BD-Rate (Bjontegaard Delta Rate) [7] gains, while HEVC and DVC (Deep Video Coding) [8] offer 30% and 22% gains, respectively, when the distortion

• We propose an end-to-end deep neural video coding framework (NVC), leveraging learnt feature-domain representations for intra-pixel, inter-motion and inter-residual, respectively, for compression;
• neuro-Motion and the multiscale motion compensation network (MS-MCN) are employed together to capture
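To make the data flow described under "Our Approach" concrete, here is a minimal sketch of the low-delay coding loop. The four stages standing in for neuro-Intra, neuro-Motion, MS-MCN and neuro-Res are trivial stubs (identity reconstruction, zero motion), not the paper's trained networks; only the wiring between the modules and the reference-frame buffer update is illustrated.

```python
import numpy as np

# Data-flow sketch of the NVC coding loop: I-frame via the intra stage, then
# per P-frame: code motion, motion-compensate, code the residual, reconstruct,
# and update the reference frame buffer. All stages are placeholder stubs.

def neuro_intra(frame):                      # intra stage: frame -> (bits, reconstruction)
    return b"intra-bits", frame.copy()

def neuro_motion(cur, ref):                  # motion stage: -> (bits, multiscale compressed flows)
    flow = np.zeros(cur.shape + (2,), np.float32)
    return b"motion-bits", [flow]            # single zero-motion placeholder MCF

def ms_mcn(mcfs, ref):                       # multiscale motion compensation -> predicted frame
    return ref.copy()                        # zero-motion placeholder warp

def neuro_res(residual):                     # residual stage: residual -> (bits, decoded residual)
    return b"res-bits", residual.copy()

def code_sequence(frames):
    bitstream, recon_buf = [], None
    for t, frame in enumerate(frames):
        if t == 0:                           # I-frame
            bits, recon = neuro_intra(frame)
            bitstream.append(bits)
        else:                                # P-frame, low-delay causal
            mbits, mcfs = neuro_motion(frame, recon_buf)
            pred = ms_mcn(mcfs, recon_buf)
            rbits, dec_res = neuro_res(frame - pred)
            recon = np.clip(pred + dec_res, 0, 255)
            bitstream += [mbits, rbits]
        recon_buf = recon                    # reference frame buffer (previous reconstruction)
    return bitstream

frames = [np.full((64, 64), v, np.float32) for v in (100, 102, 104)]
print(len(code_sequence(frames)))            # 1 intra payload + 2 x (motion, residual) payloads
```

In the actual system, each stage would additionally emit entropy-coded latents whose probabilities are conditioned on the joint spatiotemporal and hyper priors mentioned above.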
Recommended publications
  • Kulkarni Uta 2502M 11649.Pdf
IMPLEMENTATION OF A FAST INTER-PREDICTION MODE DECISION IN H.264/AVC VIDEO ENCODER, by AMRUTA KIRAN KULKARNI. Presented to the Faculty of the Graduate School of The University of Texas at Arlington in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE IN ELECTRICAL ENGINEERING, THE UNIVERSITY OF TEXAS AT ARLINGTON, May 2012.

ACKNOWLEDGEMENTS
First and foremost, I would like to take this opportunity to offer my gratitude to my supervisor, Dr. K.R. Rao, who invested his precious time in me and has been a constant support throughout my thesis with his patience and profound knowledge. His motivation and enthusiasm helped me at all times during the research and writing of this thesis. His advising and mentoring have helped me complete my thesis. Besides my advisor, I would like to thank the rest of my thesis committee. I am also very grateful to Dr. Dongil Han for his continuous technical advice and financial support. I would like to acknowledge my research group partner, Santosh Kumar Muniyappa, for all the valuable discussions that we had together. They helped me build confidence and motivated me towards completing the thesis. Also, I thank all other lab mates and friends who helped me get through two years of graduate school. Finally, my sincere gratitude and love go to my family. They have been my role models and have always showed me the right way. Last but not least, I would like to thank my husband, Amey Mahajan, for his emotional and moral support. April 20, 2012

ABSTRACT
IMPLEMENTATION OF A FAST INTER-PREDICTION MODE DECISION IN H.264/AVC VIDEO ENCODER. Amruta Kulkarni, M.S., The University of Texas at Arlington, 2011. Supervising Professor: K.R.
  • Versatile Video Coding – the Next-Generation Video Standard of the Joint Video Experts Team
Versatile Video Coding – The Next-Generation Video Standard of the Joint Video Experts Team. Mile High Video Workshop, Denver, July 31, 2018. Gary J. Sullivan, JVET co-chair. Acknowledgement: presentation prepared with Jens-Rainer Ohm and Mathias Wien, Institute of Communication Engineering, RWTH Aachen University.

1. Introduction

Video coding standardization organisations
• ISO/IEC MPEG = "Moving Picture Experts Group" (ISO/IEC JTC 1/SC 29/WG 11 = International Standardization Organization and International Electrotechnical Commission, Joint Technical Committee 1, Subcommittee 29, Working Group 11)
• ITU-T VCEG = "Video Coding Experts Group" (ITU-T SG16/Q6 = International Telecommunications Union – Telecommunications Standardization Sector (ITU-T, a United Nations organization, formerly CCITT), Study Group 16, Working Party 3, Question 6)
• JVT = "Joint Video Team": collaborative team of MPEG & VCEG, responsible for developing AVC (discontinued in 2009)
• JCT-VC = "Joint Collaborative Team on Video Coding": team of MPEG & VCEG, responsible for developing HEVC (established January 2010)
• JVET = "Joint Video Experts Team": responsible for developing VVC (established Oct. 2015) – previously called "Joint Video Exploration Team"

History of international video coding standardization
  • 1. in the New Document Dialog Box, on the General Tab, for Type, Choose Flash Document, Then Click______
1. In the New Document dialog box, on the General tab, for type, choose Flash Document, then click ____________. (1) Tab (2) Ok (3) Delete (4) Save
2. Specify the export ________ for classes in the movie. (1) Frame (2) Class (3) Loading (4) Main
3. To help manage the files in a large application, Flash MX Professional 2004 supports the concept of _________. (1) Files (2) Projects (3) Flash (4) Player
4. In the AppName directory, create a subdirectory named _______. (1) Source (2) Flash documents (3) Source code (4) Classes
5. In Flash, most applications are _________ and include __________ user interfaces. (1) Visual, Graphical (2) Visual, Flash (3) Graphical, Flash (4) Visual, AppName
6. Test locally by opening ________ in your web browser. (1) AppName.fla (2) AppName.html (3) AppName.swf (4) AppName
7. The AppName directory will contain everything in our project, including _________ and _____________. (1) Source code, Final input (2) Input, Output (3) Source code, Final output (4) Source code, everything
8. In the AppName directory, create a subdirectory named _______. (1) Compiled application (2) Deploy (3) Final output (4) Source code
9. Every Flash application must include at least one ______________. (1) Flash document (2) AppName (3) Deploy (4) Source
10. In the AppName/Source directory, create a subdirectory named __________. (1) Source (2) Com (3) Some domain (4) AppName
11. In the AppName/Source/Com directory, create a subdirectory named ______. (1) Some domain (2) Com (3) AppName (4) Source
12. A project is a group of related _________ that can be managed via the project panel in the Flash.
  • Overview of Technologies for Immersive Visual Experiences: Capture, Processing, Compression, Standardization and Display
Overview of technologies for immersive visual experiences: capture, processing, compression, standardization and display. Marek Domański, Poznań University of Technology, Chair of Multimedia Telecommunications and Microelectronics, Poznań, Poland.

Immersive visual experiences
• Arbitrary direction of viewing – 3DoF system
• Arbitrary location of viewer – virtual navigation, free-viewpoint television
• Both – 6DoF system (virtual viewer)

Immersive video content
• Computer-generated
• Natural content: multiple cameras around a scene (also depth cameras, light-field cameras); camera(s) located in the center of a scene (360-degree cameras)
• Mixed, e.g., wearable cameras

Video capture – common technical problems
• Synchronization of cameras: camera hardware needs to enable synchronization; shutter release error < exposure interval
• Frame rate: high frame rate needed for head-mounted devices

Common task for immersive video capture: calibration
Camera parameter estimation:
• Intrinsic – the parameters of individual cameras; remain unchanged by camera motion
• Extrinsic – related to camera locations in the real world; do change with camera motion (even slight!)
Out of scope of standardization; improved methods developed.

Depth estimation
• By video analysis – from at least 2 video sequences; computationally heavy; huge progress recently
• By depth cameras – diverse products; infrared illumination of the scene; mostly limited resolution
Out of scope of standardization.

MPEG test video sequences
• Large collection of video sequences with depth information
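As one common way to carry out the calibration step described above, the snippet below estimates intrinsic parameters (shared camera matrix and distortion) and per-view extrinsics (rotation and translation) from checkerboard views with OpenCV. The image paths, board geometry and square size are placeholders, not taken from the slides.

```python
import numpy as np
import cv2  # OpenCV; hedged sketch of checkerboard-based calibration

PATTERN = (9, 6)          # inner corners per row/column of the checkerboard (assumed)
SQUARE_MM = 25.0          # physical square size (assumed)

# 3-D coordinates of the board corners in the board's own frame (Z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, size = [], [], None
for path in ["view0.png", "view1.png", "view2.png"]:   # hypothetical capture set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue                                        # skip missing images
    ok, corners = cv2.findChessboardCorners(gray, PATTERN)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]                         # (width, height)

# Intrinsics (camera matrix K, distortion) are shared across views; the per-view
# rotations/translations (rvecs, tvecs) are the extrinsics, which change whenever
# the camera moves, as the slides point out.
err, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection error:", err)
```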
  • Video Coding Standards
Video coding standards

Video signals represent sequences of images or frames which can be transmitted at a rate from 15 to 60 frames per second (fps), which provides the illusion of motion in the displayed signal. Unlike still images, they contain so-called temporal redundancy. Temporal redundancy arises from repeated objects in consecutive frames of the video sequence. Such objects can remain static, they can move horizontally, vertically, or in any combination of directions (translational movement), they can fade in and out, and they can disappear from the image as they move out of view.

Motion compensation

A motion compensation technique is used to remove the temporal redundancy of a video sequence. The main idea of this method is to predict the displacement of a group of pixels (usually a block of pixels) from their position in the previous frame. Information about this displacement is represented by motion vectors, which are transmitted together with the DCT-coded difference between the predicted and the original images.

[Diagram: motion-compensated hybrid encoder and decoder – DCT, quantizer, variable-length coder, buffer, dequantizer, IDCT, and a motion-compensated predictor driven by motion estimation, which produces the motion vectors.]

[Figures: macroblocks of the previous frame used to predict the selected macroblock of the current frame.]

Each 16x16-pixel macroblock in the current frame is compared with a set of macroblocks in the previous frame to determine the one that best predicts the current macroblock.
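The macroblock comparison described above can be written directly as an exhaustive (full-search) block-matching routine. The block size, search range and the SAD error measure below are common illustrative choices rather than values prescribed by the text.

```python
import numpy as np

# Full-search block matching: for each 16x16 macroblock of the current frame,
# test every candidate displacement in a search window of the previous frame
# and keep the one minimizing the sum of absolute differences (SAD).

def block_match(cur, prev, block=16, search=8):
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), np.int32)
    pred = np.zeros_like(cur)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best = (np.inf, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block]
                        sad = np.abs(target - cand).sum()   # error measure
                        if sad < best[0]:
                            best = (sad, dy, dx)
            _, dy, dx = best
            vectors[by // block, bx // block] = (dy, dx)
            pred[by:by + block, bx:bx + block] = prev[by + dy:by + dy + block,
                                                      bx + dx:bx + dx + block]
    return vectors, pred          # residual to be DCT-coded: cur - pred

prev = np.random.rand(64, 64).astype(np.float32)
cur = np.roll(prev, (2, -3), axis=(0, 1))     # simulated translational motion
mv, pred = block_match(cur, prev)
print(mv[2, 2], float(np.abs(cur - pred).mean()))   # interior block recovers the shift
```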
  • Versatile Video Coding (VVC)
Versatile Video Coding (VVC) on the final stretch. Benjamin Bross, Fraunhofer Heinrich Hertz Institute, Berlin. ITU Workshop on "The future of media", Geneva, Switzerland, 8 October 2019.

Versatile Video Coding (VVC): joint ITU-T (VCEG) and ISO/IEC (MPEG) project.
• Coding efficiency: 50% over H.265/HEVC
• Versatility: screen content, HD / UHD / 8K resolutions, adaptive resolution change, 10-bit / HDR, independent sub-pictures

VVC – Coding Efficiency
[Charts: history of video coding standards, PSNR (dB) versus bit rate (kbit/s) for JPEG (1990), H.261 (1991), H.262/MPEG-2 (1995), H.264/MPEG-4 AVC (2003), H.265/MPEG-HEVC (2013), each generation achieving roughly a 50% bit-rate reduction; the final VVC standard targets a further 50% reduction over HEVC.]

Do we need more efficient video coding? Jevons Paradox: "The efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource."

VVC – Timeline
2015 Oct. – Exploration Phase
• Joint Video Exploration Team (JVET) of ITU-T VCEG and ISO/IEC MPEG established October '15 in Geneva
• Joint Video Exploration Model (JEM) as software playground to explore new coding tools
• 34% bitrate savings for JEM relative to HEVC provided evidence to start a new joint standardization activity with a…
2017 Oct.
  • CHAPTER 10 Basic Video Compression Techniques Contents
CHAPTER 10 Basic Video Compression Techniques. Prof. Sang-Jo Yoo, Multimedia Network Lab., The Graduate School of Information Technology and Telecommunications, INHA University (http://multinet.inha.ac.kr).

Contents: 10.1 Introduction to Video Compression; 10.2 Video Compression with Motion Compensation; 10.3 Search for Motion Vectors; 10.4 H.261; 10.5 H.263

10.1 Introduction to Video Compression
A video consists of a time-ordered sequence of frames, i.e., images. Why do we need video compression? CIF (352x288): 35 Mbps; HDTV: 1 Gbps. An obvious solution to video compression would be predictive coding based on previous frames. Compression proceeds by subtracting images: subtract in time order and code the residual error. It can be done even better by searching for just the right parts of the image to subtract from the previous frame. Motion estimation: looking for the right part of motion. Motion compensation: shifting pieces of the frame.

10.2 Video Compression with Motion Compensation
Consecutive frames in a video are similar – temporal redundancy. The frame rate of the video is relatively high (30 frames/sec), and camera parameters (position, angle) usually do not change rapidly. Temporal redundancy is exploited so that not every frame of the video needs to be coded independently as a new image. The difference between the current frame and other frame(s) in the sequence will be coded – small values and low entropy, good for compression.
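A quick numerical check of the claim that the temporal difference has small values and low entropy; the frames here are synthetic (a smooth pattern that shifts slightly between frames), standing in for real video.

```python
import numpy as np

# Compare the empirical entropy of a frame with that of the frame difference.
# Synthetic frames are used purely for illustration.

def entropy_bits(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())      # empirical entropy, bits/sample

y, x = np.mgrid[0:288, 0:352]                  # CIF-sized frame (352x288)
prev = (128 + 100 * np.sin(x / 24.0) * np.cos(y / 36.0)).astype(np.int16)

rng = np.random.default_rng(0)
cur = np.roll(prev, 2, axis=1) + rng.integers(-1, 2, prev.shape)   # small motion + noise

print("entropy of current frame   :", round(entropy_bits(cur), 2))
print("entropy of frame difference:", round(entropy_bits(cur - prev), 2))
```

The difference signal clusters around zero, so far fewer symbol values carry most of the probability mass, which is exactly why coding the residual is cheaper than coding the frame itself.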
  • Multiple Reference Motion Compensation: a Tutorial Introduction and Survey Contents
Foundations and Trends® in Signal Processing, Vol. 2, No. 4 (2008) 247–364, © 2009 A. Leontaris, P. C. Cosman and A. M. Tourapis. DOI: 10.1561/2000000019

Multiple Reference Motion Compensation: A Tutorial Introduction and Survey
By Athanasios Leontaris, Pamela C. Cosman and Alexis M. Tourapis

Contents
1 Introduction 248
1.1 Motion-Compensated Prediction 249
1.2 Outline 254
2 Background, Mosaic, and Library Coding 256
2.1 Background Updating and Replenishment 257
2.2 Mosaics Generated Through Global Motion Models 261
2.3 Composite Memories 264
3 Multiple Reference Frame Motion Compensation 268
3.1 A Brief Historical Perspective 268
3.2 Advantages of Multiple Reference Frames 270
3.3 Multiple Reference Frame Prediction 271
3.4 Multiple Reference Frames in Standards 277
3.5 Interpolation for Motion Compensated Prediction 281
3.6 Weighted Prediction and Multiple References 284
3.7 Scalable and Multiple-View Coding 286
4 Multihypothesis Motion-Compensated Prediction 290
4.1 Bi-Directional Prediction and Generalized Bi-Prediction 291
4.2 Overlapped Block Motion Compensation 294
4.3 Hypothesis Selection Optimization 296
4.4 Multihypothesis Prediction in the Frequency Domain 298
4.5 Theoretical Insight 298
5 Fast Multiple-Frame Motion Estimation Algorithms 301
5.1 Multiresolution and Hierarchical Search 302
5.2 Fast Search using Mathematical Inequalities 303
5.3 Motion Information Re-Use and Motion Composition 304
5.4 Simplex and Constrained Minimization 306
5.5 Zonal and Center-biased Algorithms 307
5.6 Fractional-pixel Texture Shifts or Aliasing
  • Overview: Motion-Compensated Coding
Overview: motion-compensated coding (Bernd Girod, EE398B Image Communication II): motion-compensated prediction; motion-compensated hybrid coding; motion estimation by block-matching; motion estimation with sub-pixel accuracy; power spectral density of the motion-compensated prediction error; rate-distortion analysis; loop filter; motion-compensated coding with sub-pixel accuracy; rate-constrained motion estimation.

Motion-compensated prediction
[Figure: a moving object shifted by the displacement vector (dx, dy) between the previous frame (time t − Δt) and the current frame (time t), against a stationary background.]
Prediction for the luminance signal S(x, y, t) within the moving object:
Ŝ(x, y, t) = S(x − dx, y − dy, t − Δt)

Combining transform coding and prediction
[Diagram: transform-domain versus space-domain prediction, each combining transform T, quantizer Q, inverse transform T⁻¹ and predictor P.]

Motion-compensated hybrid coder and decoder
[Diagram: coder control, intra-frame DCT coder/decoder, intra/inter switch, motion-compensated predictor and motion estimator producing the motion data.]

Block-matching algorithm
Subdivide the current frame into blocks and find one displacement vector for each block: within a search range in the reference frame S_{k−1}, find the best "match" that minimizes an error measure.
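The prediction rule Ŝ(x, y, t) = S(x − dx, y − dy, t − Δt) can be coded up directly; the sketch below also allows the fractional (e.g., half-pel) displacements mentioned in the overview by bilinearly interpolating the reference frame. The toy reference frame and the chosen displacement are illustrative only.

```python
import numpy as np

# Motion-compensated prediction with a single (possibly fractional) displacement
# vector (dx, dy): each predicted sample is taken from the reference frame at
# (x - dx, y - dy), with bilinear interpolation between integer-pel positions.

def mc_predict(ref, dx, dy):
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    sx, sy = xs - dx, ys - dy                     # sampling position in the reference
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0                     # fractional parts
    x0, y0 = np.clip(x0, 0, w - 2), np.clip(y0, 0, h - 2)   # crude edge handling
    # Bilinear weighting of the four surrounding integer-pel samples.
    return ((1 - fx) * (1 - fy) * ref[y0, x0] +
            fx * (1 - fy) * ref[y0, x0 + 1] +
            (1 - fx) * fy * ref[y0 + 1, x0] +
            fx * fy * ref[y0 + 1, x0 + 1])

ref = np.add.outer(np.arange(16.0), np.arange(16.0))   # smooth toy reference frame
pred = mc_predict(ref, dx=1.5, dy=-0.5)                 # half-pel displacement
print(pred.shape, float(pred[8, 8]), float(ref[8, 8]))
```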
  • Efficient Video Coding with Motion-Compensated Orthogonal
Efficient Video Coding with Motion-Compensated Orthogonal Transforms. Du Liu. Master's Degree Project, Stockholm, Sweden, 2011. XR-EE-SIP 2011:011. July, 2011.

Abstract
Well-known standard hybrid coding techniques utilize the concept of motion-compensated predictive coding in a closed loop. The resulting coding dependencies are a major challenge for packet-based networks like the Internet. On the other hand, subband coding techniques avoid the dependencies of predictive coding and are able to generate video streams that better match packet-based networks. An interesting class for subband coding is the so-called motion-compensated orthogonal transform. It generates orthogonal subband coefficients for arbitrary underlying motion fields. In this project, a theoretical lossless signal model based on the Gaussian distribution is proposed. It is possible to obtain the optimal rate allocation from this model. Additionally, a rate-distortion efficient video coding scheme is developed that takes advantage of motion-compensated orthogonal transforms. The scheme combines multiple types of motion-compensated orthogonal transforms, variable block size, and half-pel accurate motion compensation. The experimental results show that this scheme outperforms individual motion-compensated orthogonal transforms.

Acknowledgements
This thesis was carried out at the Sound and Image Processing Lab, School of Electrical Engineering, KTH. I would like to express my appreciation to my supervisor Markus Flierl for the opportunity of doing this thesis. I am grateful for his patience and valuable suggestions and discussions. Many thanks to Haopeng Li, Mingyue Li, and Zhanyu Ma, who helped me a lot during my research.
  • Respiratory Motion Compensation Using Diaphragm Tracking for Cone-Beam C-Arm CT: a Simulation and a Phantom Study
Hindawi Publishing Corporation, International Journal of Biomedical Imaging, Volume 2013, Article ID 520540, 10 pages. http://dx.doi.org/10.1155/2013/520540

Research Article: Respiratory Motion Compensation Using Diaphragm Tracking for Cone-Beam C-Arm CT: A Simulation and a Phantom Study
Marco Bögel,1 Hannes G. Hofmann,1 Joachim Hornegger,1 Rebecca Fahrig,2 Stefan Britzen,3 and Andreas Maier3
1 Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058 Erlangen, Germany
2 Department of Radiology, Lucas MRS Center, Stanford University, Palo Alto, CA 94304, USA
3 Siemens AG, Healthcare Sector, 91301 Forchheim, Germany
Correspondence should be addressed to Andreas Maier; [email protected]
Received 21 February 2013; Revised 13 May 2013; Accepted 15 May 2013. Academic Editor: Michael W. Vannier
Copyright © 2013 Marco Bögel et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Long acquisition times lead to image artifacts in thoracic C-arm CT. Motion blur caused by respiratory motion leads to decreased image quality in many clinical applications. We introduce an image-based method to estimate and compensate respiratory motion in C-arm CT based on diaphragm motion. In order to estimate respiratory motion, we track the contour of the diaphragm in the projection image sequence. Using a motion-corrected triangulation approach on the diaphragm vertex, we are able to estimate a motion signal. The estimated motion signal is used to compensate for respiratory motion in the target region, for example, heart or lungs.
  • CALIFORNIA STATE UNIVERSITY, NORTHRIDGE Optimized AV1 Inter
CALIFORNIA STATE UNIVERSITY, NORTHRIDGE. Optimized AV1 Inter Prediction using Binary Classification Techniques. A graduate project submitted in partial fulfillment of the requirements for the degree of Master of Science in Software Engineering, by Alex Kit Romero, May 2020.

The graduate project of Alex Kit Romero is approved: Dr. Katya Mkrtchyan; Dr. Kyle Dewey; Dr. John J. Noga, Chair. California State University, Northridge.

Dedication
This project is dedicated to all of the Computer Science professors that I have come in contact with over the years who have inspired and encouraged me to pursue a career in computer science. The words and wisdom of these professors are what pushed me to try harder and accomplish more than I ever thought possible. I would like to give a big thanks to the open source community and my fellow cohort of computer science co-workers for always being there with answers to my numerous questions and inquiries. Without their guidance and expertise, I could not have been successful. Lastly, I would like to thank my friends and family who have supported and uplifted me throughout the years. Thank you for believing in me and always telling me to never give up.

Table of Contents
Signature Page ii
Dedication