Developed a convolutional neural network (CNN) architecture to generate color images from grayscale input images.
Specifically, it generates color images in the YUV color space when given a grayscale input image as the Y channel.
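For illustration, here is a minimal sketch, assuming TensorFlow and a float RGB image in [0, 1], of how a training image can be split into the Y-channel input and UV targets and how a prediction can be recombined into RGB; the function names are my own and not from the project's code.

```python
import tensorflow as tf

def rgb_to_training_pair(rgb):
    """Split an RGB image (H, W, 3) into the Y input and the UV targets."""
    yuv = tf.image.rgb_to_yuv(rgb)   # convert to the YUV color space
    y = yuv[..., :1]                 # grayscale network input (Y channel)
    uv = yuv[..., 1:]                # chrominance regression targets (U, V)
    return y, uv

def reconstruct_rgb(y, uv_pred):
    """Recombine the input Y channel with predicted UV into an RGB image."""
    return tf.image.yuv_to_rgb(tf.concat([y, uv_pred], axis=-1))
```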
Implemented a modified version of VGG-16 and used its hypercolumns (intermediate feature maps), concatenating them with upsampled feature maps (inspired by the residual skip connections in ResNet) to generate the UV-channel output.
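A rough sketch of that hypercolumn + upsampling idea follows; the tapped VGG-16 layers, channel counts, and tanh output activation are illustrative assumptions, not the exact architecture used in this project.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_colorizer(input_size=224):
    y_in = layers.Input(shape=(input_size, input_size, 1), name="y_channel")
    # VGG-16 expects 3 channels, so replicate the grayscale Y input.
    rgbish = layers.Concatenate()([y_in, y_in, y_in])
    backbone = VGG16(include_top=False, weights="imagenet", input_tensor=rgbish)
    # Tap a few intermediate blocks (assumed choice of layers).
    taps = ["block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3"]
    feats = [backbone.get_layer(name).output for name in taps]
    # Upsample each tapped feature map back to the input resolution and
    # concatenate them into a per-pixel hypercolumn.
    upsampled = [layers.UpSampling2D(size=input_size // int(f.shape[1]),
                                     interpolation="bilinear")(f)
                 for f in feats]
    hypercolumn = layers.Concatenate()(upsampled)
    h = layers.Conv2D(64, 1, activation="relu")(hypercolumn)
    uv_out = layers.Conv2D(2, 1, activation="tanh", name="uv")(h)  # U, V maps
    return Model(y_in, uv_out)
```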
Trained and tested in the cloud on the Places dataset, using the Huber loss function and the Adam optimizer.
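A minimal training-setup sketch under the same assumptions, using the Huber loss and Adam optimizer mentioned above; the data pipeline and hyperparameters are assumed, not taken from the project.

```python
import tensorflow as tf

model = build_colorizer()  # from the sketch above
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=tf.keras.losses.Huber(delta=1.0))

# y_train: (N, 224, 224, 1) Y-channel inputs; uv_train: (N, 224, 224, 2) UV
# targets, e.g. produced by rgb_to_training_pair() over the Places images.
# model.fit(y_train, uv_train, batch_size=32, epochs=10, validation_split=0.1)
```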
One issue is that the model sometimes produces sepia-toned images; this is a consequence of the simple loss function.
Training data: the Places dataset (listed in the references below).
Here are some of the best, average, and worst results. Most of the outputs are sepia-toned, but obvious regions such as sky and farmland are colored almost correctly. A further literature search suggests that a more complex cost function, one that takes more information into account, can substantially improve the results and produce far fewer sepia-toned images.
[Three example pairs: original image and generated (colorized) image]
- Automatic Colorization
- Upsampling and Image Segmentation with TensorFlow and TF-Slim: http://warmspringwinds.github.io/tensorflow/tf-slim/2016/11/22/upsampling-and-image-segmentation-with-tensorflow-and-tf-slim/
- Places dataset.
- Future work: implement a better, more complex loss function