Video Super-Resolution Based on Spatial-Temporal Recurrent Residual Networks


Computer Vision and Image Understanding, journal homepage: www.elsevier.com

Wenhan Yang (a), Jiashi Feng (b), Guosen Xie (c), Jiaying Liu (a,**), Zongming Guo (a), Shuicheng Yan (d)

(a) Institute of Computer Science and Technology, Peking University, Beijing, P.R. China, 100871
(b) Department of Electrical and Computer Engineering, National University of Singapore, Singapore, 117583
(c) NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, P.R. China, 100190
(d) Artificial Intelligence Institute, Qihoo 360 Technology Company, Ltd., Beijing, P.R. China, 100015

** Corresponding author: Jiaying Liu (e-mail: [email protected]). This work was supported by the National Natural Science Foundation of China under contract No. 61772043, with additional support by the State Scholarship Fund from the China Scholarship Council.

ABSTRACT

In this paper, we propose a new video Super-Resolution (SR) method by jointly modeling intra-frame redundancy and inter-frame motion context in a unified deep network. Different from conventional methods, the proposed Spatial-Temporal Recurrent Residual Network (STR-ResNet) investigates both spatial and temporal residues, which are represented by the difference between a high-resolution (HR) frame and its corresponding low-resolution (LR) frame, and by the difference between adjacent HR frames, respectively. This spatial-temporal residual learning model is then utilized to connect the intra-frame and inter-frame redundancies within video sequences in a recurrent convolutional network, and to predict HR temporal residues in the penultimate layer as guidance for estimating the spatial residue for video SR. Extensive experiments have demonstrated that the proposed STR-ResNet is able to efficiently reconstruct videos with diversified contents and complex motions, outperforming existing video SR approaches and offering new state-of-the-art performance on benchmark datasets.

Keywords: Spatial residue, temporal residue, video super-resolution, inter-frame motion context, intra-frame redundancy.

© 2017 Elsevier Ltd. All rights reserved.

1. Introduction

Video super-resolution (SR) aims to produce high-resolution (HR) video frames from a sequence of low-resolution (LR) inputs. In recent years, video super-resolution has been drawing increasing interest from both academia and industry. Although various HR video devices have been developed constantly, it is still highly expensive to produce, store, and transmit HR videos. Thus, there is a great demand for modern SR techniques to generate HR videos from LR ones.

The video SR problem, like other signal super-resolution problems, can be summarized as restoring the original scene x_t from its quality-degraded observations {y_t}. Typically, the observation can be modeled as

    y_t = D_t x_t + v_t,    t = 1, ..., T.    (1)

Here D_t encapsulates various signal-quality degradation factors at time instance t, e.g., motion blur, defocus blur, and down-sampling. The additive noise during observation at that time is denoted v_t. Generally, the SR problem, i.e., solving for x_t in Eq. (1), is an ill-posed linear inverse problem that is rather challenging. Accurately estimating x_t therefore demands either sufficient observations y_t or proper priors on x_t.
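For intuition, the following minimal NumPy sketch simulates the observation model of Eq. (1) for a single frame, instantiating D_t as Gaussian blur followed by decimation and v_t as white Gaussian noise. The kernel width, scale factor, and noise level are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x_t, scale=4, blur_sigma=1.5, noise_sigma=0.01, rng=None):
    """Simulate y_t = D_t x_t + v_t for one frame (Eq. 1).

    D_t is modeled here as Gaussian blur followed by down-sampling,
    and v_t as additive white Gaussian noise. All parameter values
    are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    blurred = gaussian_filter(x_t, sigma=blur_sigma)   # blur component of D_t
    downsampled = blurred[::scale, ::scale]            # decimation component of D_t
    return downsampled + rng.normal(0.0, noise_sigma, downsampled.shape)  # + v_t

# Example: degrade one 256x256 grayscale frame to 64x64.
x_t = np.random.rand(256, 256)   # stand-in for an HR frame in [0, 1]
y_t = degrade(x_t)
print(y_t.shape)                 # (64, 64)
```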
All video SR methods can be divided into two classes: reconstruction-based and learning-based. Reconstruction-based methods Liu and Sun (2014); Baker and Kanade (1999); He and Kondi (2006); Kanaev and Miller (2013); Omer and Tanaka (2009); Farsiu et al. (2004); Rudin et al. (1992) craft a video SR process to solve the inverse estimation problem of (1). They usually perform motion compensation first, then perform deblurring by estimating the blur functions in D_t of (1), and finally recover details from local correspondences. Such a hand-crafted video SR process is not applicable to every practical scenario, since scenarios differ in their properties, and it performs poorly on unexpected cases.

In contrast, learning-based methods handle the ill-posed inverse estimation by learning useful priors for video SR from a large collection of videos. Typical examples include the recently developed deep learning-based video SR methods of Liao et al. (2015a) and Huang et al. (2015, 2017). In Liao et al. (2015a), a funnel-shaped convolutional neural network (CNN) was developed to predict HR frames from LR frames that are aligned by optical flow in advance. It shows superior performance in recovering HR video frames captured in still scenes. However, this CNN model suffers from high computational cost (as it relies on time-consuming regularized optical flow methods) as well as from visual artifacts caused by complex motions in the video frames. In Huang et al. (2015, 2017), a bidirectional recurrent convolutional network (BRCN) was employed to model the temporal correlation among multiple frames and further boost video SR performance over previous methods.

However, previous learning-based video SR methods, which learn to predict HR frames directly from LR frames, suffer from the following limitations. First, these methods concentrate on exploiting inter-frame correlations and do not jointly consider the intra- and inter-frame correlations that are both critical for the quality of video SR. This unfavorably limits the capacity of the network for recovering HR frames with complex contents. Second, the successive input LR frames are usually highly correlated with the whole signal of the HR frames, but not with the high-frequency details of these HR frames. When the dominant training frames present slow motion, the learned priors hardly capture hard cases, such as large movements and shot changes, where operations that weight the contributions of neighboring frames differently are needed. Third, it is desirable for the joint estimation of video SR to impose priors on the missing high-frequency signals; in previous methods, however, the potential constraints are enforced directly on the estimated HR frames.

To solve the above-mentioned issues, in this work we propose a unified deep neural network architecture that jointly models the intra-frame and inter-frame correlations in an end-to-end trainable manner. Compared with previous (deep) video SR methods Liao et al. (2015a); Huang et al. (2015, 2017), our proposed model does not require explicit computation of optical flow or motion compensation. In addition, it unifies convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are known to be powerful in modeling sequential data. Combining the spatial convolutional and temporal recurrent architectures enables our model to capture spatial and temporal correlations among multiple video frames jointly. Specifically, the temporal residues of HR frames are predicted based on the input LR frames along with their temporal residues, to further regularize the estimation of the spatial residues. This architectural choice enables the network to handle videos containing complex motions in a moving scene, offering pleasant video SR results with few artifacts in a time-efficient way.

Fig. 1. The architecture of the proposed spatial-temporal recurrent residual network (STR-ResNet) for video SR. It takes not only the LR frames but also the differences between adjacent LR frames as input. Some reconstructed features are constrained to predict the differences between adjacent HR frames in the penultimate layer.

More concretely, we propose a Spatial-Temporal Recurrent Residual Network (STR-ResNet) for video SR, as shown in Fig. 1. As aforementioned, STR-ResNet models spatial and temporal correlations among multiple video frames jointly. In STR-ResNet, one basic component is the spatial residual CNN (SRes-CNN) for single-frame SR, which has a bypass connection for learning the residue between LR and HR feature maps. SRes-CNN captures the correlation information among pixels within a single frame and tries to recover an HR frame from its corresponding LR frame by utilizing such correlations. STR-ResNet then stacks multiple SRes-CNNs together with recurrent connections between them. This global recurrent architecture captures the temporal contextual correlation and recovers each HR frame using both its corresponding LR frame and its adjacent frames. To better model inter-frame motions, STR-ResNet takes not only multiple LR frames but also the residues of these adjacent LR frames as inputs, and tries to predict the temporal residues of the HR frames in the penultimate layer. An HR frame is thus recovered by summing its corresponding LR frame and the predicted spatial residue via the SRes-CNN component, under the guidance of the temporal residue predicted from adjacent frames via recurrent residual learning.
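As a rough illustration of the SRes-CNN component described above, the PyTorch sketch below implements a small residual CNN whose bypass connection adds the (bicubically upscaled) LR frame back to the predicted spatial residue, so the convolutional layers only have to model high-frequency detail. The layer count, channel width, kernel sizes, and upscaling choice are our own assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SResCNN(nn.Module):
    """Sketch of a spatial residual CNN (SRes-CNN) for single-frame SR.

    The network predicts the spatial residue (HR minus upscaled LR);
    a bypass connection adds the upscaled LR frame back, so only
    high-frequency detail has to be learned. Depth/width are assumptions.
    """
    def __init__(self, channels=1, features=64, num_layers=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, lr, scale=4):
        # Upscale the LR frame to the HR grid first.
        up = F.interpolate(lr, scale_factor=scale, mode='bicubic', align_corners=False)
        residue = self.body(up)   # predicted spatial residue
        return up + residue       # bypass connection: upscaled LR + residue -> HR estimate

# Example: super-resolve a batch of one 64x64 grayscale frame by 4x.
sr = SResCNN()(torch.rand(1, 1, 64, 64))
print(sr.shape)  # torch.Size([1, 1, 256, 256])
```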
By separating the video frames into LR observations and the spatial residues within single frames, the low-frequency parts of the HR frames and the LR frames are untangled, so that the model can focus solely on describing the high-frequency details. By considering the temporal residues, both along their prediction path from LR temporal residues to HR temporal residues and through their connection to the spatial residues, the proposed STR-ResNet models the spatial and temporal correlations jointly and achieves outstanding video SR performance with relatively low computational complexity; the sketch following the contribution list below makes these two residue definitions concrete.

In summary, we make the following contributions in this work to solving the challenging video SR problem:

• We propose a novel deep convolutional neural network architecture specifically for video SR. It follows a joint spatial-temporal residual learning scheme and aims to predict the HR temporal residues, which further facilitate the prediction of the spatial residues and the HR frames. By embedding the temporal residue prediction, the proposed architecture is capable of implicitly modeling the motion context among multiple video frames for video SR.
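As a worked example of the two residue definitions, the following sketch constructs them as training targets from an HR/LR frame sequence: the spatial residue of frame t is the difference between the HR frame and its upscaled LR counterpart, and the temporal residue is the difference between adjacent HR frames. The bicubic upscaling, the tensor shapes, and the helper name residue_targets are our own illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def residue_targets(hr_frames, lr_frames, scale=4):
    """Build spatial and temporal residue targets from an HR/LR sequence.

    hr_frames, lr_frames: tensors of shape (T, C, H, W) and (T, C, H/scale, W/scale).
    Returns:
      spatial:  hr_t - upscale(lr_t) for each frame t      (shape (T, C, H, W))
      temporal: hr_{t+1} - hr_t for adjacent HR frames     (shape (T-1, C, H, W))
    Bicubic upscaling here is an illustrative assumption.
    """
    up = F.interpolate(lr_frames, scale_factor=scale, mode='bicubic', align_corners=False)
    spatial = hr_frames - up                    # spatial residues (per frame)
    temporal = hr_frames[1:] - hr_frames[:-1]   # temporal residues (adjacent frames)
    return spatial, temporal

# Example with a 5-frame grayscale sequence.
hr = torch.rand(5, 1, 256, 256)
lr = F.interpolate(hr, scale_factor=0.25, mode='bicubic', align_corners=False)
spatial, temporal = residue_targets(hr, lr)
print(spatial.shape, temporal.shape)  # (5, 1, 256, 256) (4, 1, 256, 256)
```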
