TY - GEN
T1 - Compressed Video Quality Enhancement with Motion Approximation and Blended Attention
AU - Han, Xiaohao
AU - Zhang, Wei
AU - Pu, Jian
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In recent years, various methods have been proposed to tackle the compressed video quality enhancement problem, which aims at restoring the distorted information in low-quality target frames using high-quality reference frames in the compressed video. Most methods for video quality enhancement contain two key stages, i.e., the synchronization stage and the fusion stage. The synchronization stage aligns the input frames by applying the estimated motion vectors to the reference frames. The fusion stage reconstructs each frame from the compensated frames. However, the synchronization stage in previous works merely estimates the motion vector between the reference frame and the target frame. Due to the quality fluctuation across frames and the occlusion of object regions, the missing detail information cannot be adequately replenished. To make full use of the temporal motion between input frames, we propose a motion approximation scheme that exploits the motion vectors between the reference frames. It generates additional compensated frames to further refine the missing details in the target frame. In the fusion stage, we propose a deep neural network that extracts frame features with blended attention to the texture details and the quality discrepancy at different times. The experimental results show the effectiveness and robustness of our method.
AB - In recent years, various methods have been proposed to tackle the compressed video quality enhancement problem, which aims at restoring the distorted information in low-quality target frames using high-quality reference frames in the compressed video. Most methods for video quality enhancement contain two key stages, i.e., the synchronization stage and the fusion stage. The synchronization stage aligns the input frames by applying the estimated motion vectors to the reference frames. The fusion stage reconstructs each frame from the compensated frames. However, the synchronization stage in previous works merely estimates the motion vector between the reference frame and the target frame. Due to the quality fluctuation across frames and the occlusion of object regions, the missing detail information cannot be adequately replenished. To make full use of the temporal motion between input frames, we propose a motion approximation scheme that exploits the motion vectors between the reference frames. It generates additional compensated frames to further refine the missing details in the target frame. In the fusion stage, we propose a deep neural network that extracts frame features with blended attention to the texture details and the quality discrepancy at different times. The experimental results show the effectiveness and robustness of our method.
UR - https://www.scopus.com/pages/publications/85143635726
U2 - 10.1109/ICPR56361.2022.9956074
DO - 10.1109/ICPR56361.2022.9956074
M3 - Conference contribution
AN - SCOPUS:85143635726
T3 - Proceedings - International Conference on Pattern Recognition
SP - 338
EP - 344
BT - 2022 26th International Conference on Pattern Recognition, ICPR 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th International Conference on Pattern Recognition, ICPR 2022
Y2 - 21 August 2022 through 25 August 2022
ER -