3D No-Reference Image Quality Assessment via Transfer Learning and Saliency-Guided Feature Consolidation (article)

Article published in a high-ranking journal

Citation information for this article was obtained from Scopus and Web of Science
The article was published in a journal indexed in Web of Science and/or Scopus
Date of the last search for this article in external sources: August 15, 2019


[1] 3D No-Reference Image Quality Assessment via Transfer Learning and Saliency-Guided Feature Consolidation / X. Xiaogang, B. Shi, Z. Gu et al. // IEEE Access. — 2019. — Vol. 7. — P. 85286–85297. Motivated by the success of convolutional neural networks (CNNs) in image-related applications, this paper presents an effective method for no-reference 3D image quality assessment (3D IQA) based on CNN feature extraction and a feature-consolidation strategy. In the first and most important stage, quality-aware features, which reflect the inherent quality of the images, are extracted by a CNN model fine-tuned using transfer learning; this fine-tuning strategy alleviates the dependence on large-scale training data that limits current deep-learning-based IQA algorithms. In the second stage, features from the left and right views are consolidated by linear weighted fusion, where the weight for each view is derived from its saliency map. In addition, statistical characteristics of the disparity map, computed at multiple scales, serve as additional features. In the final quality-mapping stage, the objective score for each stereoscopic pair is obtained by support vector regression. Experimental results on public databases show that the approach outperforms many existing no-reference and even full-reference methods.
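The consolidation and regression stages summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimension, the use of mean saliency as the per-view weight, the synthetic data, and the SVR hyperparameters are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from sklearn.svm import SVR


def consolidate_features(f_left, f_right, sal_left, sal_right):
    """Saliency-guided linear weighted fusion of left/right view features.

    Each view's weight is taken here as the mean of its saliency map,
    normalized over both views (an assumed, simple weighting scheme).
    """
    w_l = sal_left.mean()
    w_r = sal_right.mean()
    total = w_l + w_r
    return (w_l / total) * f_left + (w_r / total) * f_right


# Toy data: random "CNN features" and saliency maps (assumed shapes).
rng = np.random.default_rng(0)
n_pairs, feat_dim = 20, 128
X = np.stack([
    consolidate_features(rng.normal(size=feat_dim),   # left-view features
                         rng.normal(size=feat_dim),   # right-view features
                         rng.random((32, 32)),        # left saliency map
                         rng.random((32, 32)))        # right saliency map
    for _ in range(n_pairs)
])
y = rng.random(n_pairs) * 100  # mock subjective quality scores (MOS)

# Final stage: map consolidated features to an objective score with SVR.
reg = SVR(kernel="rbf", C=10.0, epsilon=0.1)  # hyperparameters are assumptions
reg.fit(X, y)
pred = reg.predict(X[:1])  # objective quality score for one stereo pair
```

In the paper the feature vectors would come from the fine-tuned CNN and the saliency maps from a saliency-detection model; the multi-scale disparity statistics mentioned in the abstract would simply be concatenated to `X` before regression.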
