
Replicating Paper results using pretrained weights #7

Open · adityac8 opened this issue Jul 9, 2020 · 8 comments

adityac8 commented Jul 9, 2020

Hi,
To replicate the results of the paper, here are the steps I followed:

  1. Downloaded the pretrained weights using this link. I downloaded epoch44 as that was the latest model available at the link.

  2. Placed 512x512 crops of the TEST1200 images in the test_data directory.

  3. Placed the files checkpoint, epoch44.data-00000-of-00001 and epoch44.index in MSPFN/model/MSPFN.

  4. Replaced the following lines in MSPFN/model/test/test_MSPFN.py (see the snippet after this list):
     img_path = '.\\test_data\\TEST100\\inputcrop\\' with img_path = './test_data/TEST1200/inputcrop/'
     save_path = '.\\test_data\\MSPFN\\' with save_path = './test_data/MSPFN/'
     saver.restore(sess, '../MSPFN/epoch50') with saver.restore(sess, '../MSPFN/epoch44')

  5. Calculated the PSNR.
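
For reference, after step 4 the relevant lines of test_MSPFN.py would look roughly like the sketch below. Only the paths and the checkpoint name are taken from the steps above; the comments and anything around these lines are assumptions about the script, not copied from the repository.

# Edited lines in MSPFN/model/test/test_MSPFN.py (sketch; surrounding code omitted)
img_path = './test_data/TEST1200/inputcrop/'   # rainy TEST1200 crops to be derained
save_path = './test_data/MSPFN/'               # directory for the derained outputs
saver.restore(sess, '../MSPFN/epoch44')        # load the downloaded epoch44 checkpoint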

However, I get a PSNR of 30.34, compared to the 32.39 reported in the paper.
Could you please tell me what other modifications are required to replicate the results of the paper?

Thanks

kuijiang94 (Owner) commented Jul 10, 2020

Thanks for your attention!
We have retested the baseline (TEST_MSPFN_M17N1.py) on Test1200 and obtained a PSNR of 32 with the pretrained model (epoch44). I guess the result of 30.34 was computed directly on the RGB image. You can test it again on the "Y" channel.

adityac8 (Author) commented

Hi
Thank you for such a quick response. I calculated the PSNR on the RGB channels using the following script:

import os
import numpy as np
from glob import glob
from skimage import io
from skimage.measure import compare_psnr  # renamed to skimage.metrics.peak_signal_noise_ratio in newer scikit-image releases

tar_dir = 'TEST1200/target'
prd_dir = 'MSPFN/model/test/test_data/MSPFN'

# Pair ground-truth and derained images by sorted filename.
img_files_tar = sorted(glob(os.path.join(tar_dir, '*.png')))
img_files_prd = sorted(glob(os.path.join(prd_dir, '*.png')))

psnr = []
for tar, prd in zip(img_files_tar, img_files_prd):
    # Load both images and scale them to [0, 1].
    tar_img = io.imread(tar).astype(np.float32) / 255.
    prd_img = io.imread(prd).astype(np.float32) / 255.
    # PSNR over the full RGB image.
    PSNR = compare_psnr(tar_img, prd_img, data_range=1)
    psnr.append(PSNR)

average_PSNR = sum(psnr) / len(psnr)
print("PSNR for test set is ", average_PSNR)

Could you please provide the script that you use to calculate the PSNR?

Thanks

kuijiang94 (Owner) commented

I am sorry for the previous misleading comment; I have corrected it.
For the test, we first convert the RGB image to YCbCr and then calculate the RMSE and PSNR on the "Y" channel with the following script, which returns a PSNR of 32.
######################################
function psnr = compute_psnr(img1, img2)
% Convert to YCbCr and keep only the luminance (Y) channel.
if size(img1, 3) == 3
    img1 = rgb2ycbcr(img1);
    img1 = img1(:, :, 1);
end

if size(img2, 3) == 3
    img2 = rgb2ycbcr(img2);
    img2 = img2(:, :, 1);
end

% RMSE and PSNR on the Y channel, with an 8-bit peak value of 255.
imdff = double(img1) - double(img2);
imdff = imdff(:);

rmse = sqrt(mean(imdff.^2));
psnr = 20*log10(255/rmse);
######################################

In addition, we also adopt the script used in NTIRE 2017, which uses the full RGB channels and ignores (6 + scale) pixels from the border for comparison. The PSNR score is 30.338. The detailed script is shown below:
######################################
function res = NTIRE_PeakSNR_imgs(F, G, scale)
% NTIRE 2017 image super-resolution challenge scoring function
%
% F - original image
% G - distorted image
% scale factor - determines the number of boundary pixels to ignore (6+scale)
%
% returns res, the PSNR over all pixel values

if ischar(F)
    F = imread(F);
end
if ischar(G)
    G = imread(G);
end

% Ignore (6+scale) pixels from each border before comparison.
boundarypixels = 0;
if exist('scale','var')
    boundarypixels = 6+scale;
    F = F(boundarypixels+1:end-boundarypixels, boundarypixels+1:end-boundarypixels, :);
    G = G(boundarypixels+1:end-boundarypixels, boundarypixels+1:end-boundarypixels, :);
end

% Rescale 8-bit images to [0, 1].
if max(F(:)) > 1
    F = im2double(F);
end
if max(G(:)) > 1
    G = im2double(G);
end
E = F - G; % error signal
N = numel(E); % Assume the original signal is at peak (|F|=1)
res = 10*log10( N / sum(E(:).^2) );
######################################
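
For reference, a minimal Python sketch of the same NTIRE-style evaluation, assuming 8-bit PNG inputs and that NumPy and scikit-image are available (the function name ntire_psnr is illustrative and not part of the released scripts):

import numpy as np
from skimage import io

def ntire_psnr(f_path, g_path, scale=None):
    # PSNR over the full RGB channels, ignoring (6 + scale) border pixels,
    # mirroring the MATLAB script above. Assumes 8-bit inputs scaled to [0, 1].
    F = io.imread(f_path).astype(np.float64) / 255.0
    G = io.imread(g_path).astype(np.float64) / 255.0
    if scale is not None:
        b = 6 + scale                      # boundary pixels to ignore
        F = F[b:-b, b:-b, :]
        G = G[b:-b, b:-b, :]
    E = F - G                              # error signal
    return 10 * np.log10(E.size / np.sum(E ** 2))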
You can try it again!
Best wishes!

adityac8 (Author) commented

Hi,
Thanks for clarifying my doubts. I am now able to get 32.04 using an equivalent Python script.

import os
import numpy as np
from glob import glob
from skimage import io
from skimage.color import rgb2ycbcr

tar_dir = 'TEST1200/target'
prd_dir = 'MSPFN/model/test/test_data/MSPFN'

img_files_tar = sorted(glob(os.path.join(tar_dir, '*.png')))
img_files_prd = sorted(glob(os.path.join(prd_dir, '*.png')))

def myPSNR(tar_img, prd_img):
    # RMSE and PSNR with an 8-bit peak value of 255, matching the MATLAB script above.
    imdff = np.float32(prd_img) - np.float32(tar_img)
    rmse = np.sqrt(np.mean(imdff**2))
    ps = 20*np.log10(255/rmse)
    return ps

psnr = []
for tar, prd in zip(img_files_tar, img_files_prd):
    tar_img = io.imread(tar)
    prd_img = io.imread(prd)
    # Keep only the luminance (Y) channel of the YCbCr representation.
    tar_img = rgb2ycbcr(tar_img)[:, :, 0]
    prd_img = rgb2ycbcr(prd_img)[:, :, 0]
    PSNR = myPSNR(tar_img, prd_img)
    psnr.append(PSNR)

average_PSNR = sum(psnr)/len(psnr)
print("PSNR for test set is ", average_PSNR)

Thank you so much for such fast responses. I have a few doubts regarding training the model. I think opening a separate issue for it would be much better.

adityac8 (Author) commented

Hi @kuihua
Thanks for providing the test set (#10 (comment)).
I downloaded the test set and ran the testing on Rain100H, Rain100L, and Test1200. There are differences between the replicated results and those reported in the paper. For evaluation, I use the MATLAB script above (Y channel, #7 (comment)).

PSNR (Y channel)        Test1200   Rain100H   Rain100L
Replicated (epoch30)    32         28.19      32.19
Replicated (epoch44)    32.03      28.23      32.13
Paper                   32.39      28.66      32.4

I use Python 3 and TensorFlow 1.12.0.
Could you please tell me how I can get the numbers reported in the paper?
Thanks

adityac8 reopened this Jul 22, 2020
adityac8 (Author) commented

Hi Kui,
Similar to PSNR, did you also compute the SSIM on the Y channel using MATLAB?

Thanks

kuijiang94 (Owner) commented

Hi Aditya,
We calculate the SSIM using the publicly released code from [1] without changing the settings.
[1] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.
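
For reference, an approximate Python counterpart using scikit-image's structural_similarity; the parameter choices below are an assumption intended to mimic the defaults of the MATLAB implementation in [1], not the released ssim_index.m itself:

from skimage import io
from skimage.color import rgb2ycbcr
from skimage.metrics import structural_similarity

def ssim_y(tar_path, prd_path):
    # SSIM on the luminance (Y) channel only, with a Gaussian window (sigma 1.5)
    # and population covariance, approximating the original MATLAB defaults.
    tar_y = rgb2ycbcr(io.imread(tar_path))[:, :, 0]
    prd_y = rgb2ycbcr(io.imread(prd_path))[:, :, 0]
    return structural_similarity(tar_y, prd_y, data_range=255,
                                 gaussian_weights=True, sigma=1.5,
                                 use_sample_covariance=False)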

In addition, I am sorry to keep you waiting for the source code of NIQE. The email was intercepted and returned twice.

Regards,
Kui Jiang

adityac8 (Author) commented

Hi Kui
Thank you for your help. I am able to download ssim_index.m.
Further, I use this script:

function ssim_mean = compute_ssim(img1, img2)
    % Convert to YCbCr and compare SSIM on the luminance (Y) channel only.
    if size(img1, 3) == 3
        img1 = rgb2ycbcr(img1);
        img1 = img1(:, :, 1);
    end

    if size(img2, 3) == 3
        img2 = rgb2ycbcr(img2);
        img2 = img2(:, :, 1);
    end
    ssim_mean = ssim_index(img1, img2);
end

On running the script, I get 0.913. Is that correct?
Thanks
