Visual Saliency Based Blind Image Quality Assessment via Deep Convolutional Neural Network
This is the implementation of Visual Saliency Based Blind Image Quality Assessment via Convolutional Neural Network (VSBIQA). VSBIQA is partly based on deepIQA. This work aims to evaluate a given image's quality with a deep learning method. The main difference between this work and others is that we propose to use salient image patches to train the designed DL model, making feature extraction more accurate and efficient.
- requirements
chainer, opencv, sklearn
- data preparation
In this work, we use the HC method to compute saliency maps; its source code can be found on Ming-Ming Cheng's homepage. We use the LIVE2 and CSIQ databases for training and testing. The file directory should be:
VSBIQA/
VSBIQA/data/Imageset/live2/ # live2 dataset
VSBIQA/data/Imageset/prewitt_images/ # gradient images of live2
VSBIQA/data/Imageset/saliency_images/ # salient images
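As a rough sketch of how the `prewitt_images/` folder might be populated (the actual preprocessing script is not shown here, so the function names and the edge-padding choice below are assumptions), the Prewitt gradient magnitude of each grayscale image can be computed with plain NumPy:

```python
import numpy as np

# Prewitt kernels: horizontal (KX) and vertical (KY) edge responses.
KX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=np.float64)
KY = KX.T

def _corr2(img, kernel):
    """3x3 cross-correlation with edge padding (output same size as input)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    padded = np.pad(img, 1, mode="edge")
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

def prewitt_gradient(gray):
    """Gradient magnitude of a 2D grayscale array via Prewitt operators."""
    gray = gray.astype(np.float64)
    gx = _corr2(gray, KX)
    gy = _corr2(gray, KY)
    return np.hypot(gx, gy)
```

In practice the same result can be obtained with `cv2.filter2D` on each kernel; the NumPy version is shown only to make the operation explicit.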
- train
python train.py --gpu 0
- test
python demo.py --model ./models/nr_jay_live2.model --gpu 0
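The demo pools a quality score over image patches. Below is a minimal sketch of the saliency-guided patch selection this repo is built around; the patch size, patch count, grid stride, and the `score_patch` callable are illustrative assumptions, not the trained model:

```python
import numpy as np

def select_salient_patches(saliency, patch=32, k=8):
    """Return the k patch coordinates with the highest mean saliency.

    saliency: 2D saliency map. Patches are taken on a non-overlapping
    grid here; a real implementation may use a denser stride.
    """
    h, w = saliency.shape
    scored = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            s = saliency[y:y + patch, x:x + patch].mean()
            scored.append((s, y, x))
    scored.sort(reverse=True)
    return [(y, x) for _, y, x in scored[:k]]

def predict_quality(image, saliency, score_patch, patch=32, k=8):
    """Average a per-patch score over the most salient patches.

    score_patch stands in for the trained CNN's patch predictor.
    """
    coords = select_salient_patches(saliency, patch, k)
    scores = [score_patch(image[y:y + patch, x:x + patch])
              for y, x in coords]
    return float(np.mean(scores))
```

The idea is that restricting scoring to salient patches concentrates the network's capacity on regions that dominate perceived quality.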
If you find VSBIQA helpful in your research, please consider citing:
@inproceedings{li2017visual,
  title={Visual Saliency Based Blind Image Quality Assessment via Convolutional Neural Network},
  author={Li, Jie and Zhou, Yue},
  booktitle={International Conference on Neural Information Processing},
  pages={550--557},
  year={2017},
  organization={Springer}
}