A video structural similarity quality metric based on a joint spatial-temporal visual attention model |
| |
Authors: | Hua Zhang, Xiang Tian, Yao-wu Chen |
| |
Institution: | (1) Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA |
| |
Abstract: | Objective video quality assessment plays an important role in multimedia signal processing. Several extensions of the structural similarity (SSIM) index fail to predict the quality of video sequences effectively. In this paper we propose a structural similarity quality metric for video based on a joint spatial-temporal visual attention model. The model acquires a motion attended region and a distortion attended region by computing motion features and distortion contrast. It mimics the shifting of visual attention between the two attended regions and accounts for bursts of error by introducing non-linear weighting functions that assign much higher weights to extremely damaged frames. The proposed metric renders a final objective quality rating for the whole video sequence and is validated on the 50 Hz video sequences of the Video Quality Experts Group (VQEG) Phase I test database. |
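The abstract describes pooling per-frame structural similarity scores with a non-linear weighting that emphasizes extremely damaged frames. The sketch below illustrates that general idea only; the `global_ssim` helper, the weighting form `(1 - score)^alpha`, and the exponent `alpha` are all illustrative assumptions, not the paper's actual model (which also uses motion and distortion attended regions).

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Global (single-window) SSIM between two grayscale frames,
    using the standard stabilizing constants C1 and C2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def pooled_video_score(ref_frames, dist_frames, alpha=4.0):
    """Pool per-frame SSIM scores with a non-linear weighting so that
    heavily damaged frames (low SSIM) dominate the final rating,
    mimicking the paper's handling of error bursts. `alpha` is a
    hypothetical emphasis exponent, not a value from the paper."""
    scores = np.array([global_ssim(r, d)
                       for r, d in zip(ref_frames, dist_frames)])
    # Worse frames receive larger weights; epsilon avoids a zero sum
    # when every frame is undistorted.
    weights = (1.0 - scores) ** alpha + 1e-6
    return float(np.sum(weights * scores) / np.sum(weights))
```

With this pooling, a sequence containing a few badly corrupted frames scores noticeably lower than a simple frame average would suggest, which is the behavior the abstract attributes to the non-linear weighting functions.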
| |
Keywords: | Quality assessment; Structural similarity (SSIM) index; Attended region; Visual attention shift |