Image Colorization using Autoencoders

In this project, our goal is to leverage the power of autoencoders to transform grayscale images into colorful representations.

We start by fetching the Labeled Faces in the Wild (LFW) dataset, which contains RGB images of human faces. After normalizing the pixel values, we design a convolutional autoencoder with an encoder and a decoder.
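The preprocessing step can be sketched as follows. The batch here is random stand-in data so the snippet is self-contained; in the real pipeline the images would come from LFW (e.g. via `sklearn.datasets.fetch_lfw_people(color=True)`), and the image size is an illustrative assumption.

```python
import numpy as np

def normalize(images):
    """Scale 8-bit pixel values from [0, 255] into [0, 1]."""
    return images.astype(np.float32) / 255.0

def to_grayscale(rgb):
    """Convert an (N, H, W, 3) RGB batch to (N, H, W, 1) grayscale
    using the standard ITU-R BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = rgb @ weights          # weighted sum over the channel axis -> (N, H, W)
    return gray[..., np.newaxis]  # keep an explicit channel axis

# Stand-in for an LFW batch (4 images of 64x64 RGB).
rgb_batch = np.random.randint(0, 256, size=(4, 64, 64, 3))
x = normalize(rgb_batch)      # training targets (color)
gray = to_grayscale(x)        # training inputs (grayscale)
print(gray.shape)             # (4, 64, 64, 1)
```

The grayscale tensors serve as the model inputs, while the normalized RGB channels provide the training targets.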

The encoder compresses the input into a compact feature representation, and the decoder reconstructs an image from it. We train a separate autoencoder for each color channel (red, green, and blue): each one takes the grayscale image as input and predicts a single channel. After training, we feed grayscale inputs through all three autoencoders and combine their outputs to produce the final colorized image.
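A minimal sketch of the per-channel setup, using Keras. The layer widths, image size, and single training epoch on random stand-in data are illustrative assumptions, not the project's tuned values; the point is the structure: one grayscale-to-single-channel autoencoder per color channel, with the three predictions concatenated at the end.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_channel_autoencoder(size=64):
    """Convolutional autoencoder mapping a grayscale image to one color channel."""
    inp = layers.Input(shape=(size, size, 1))
    # Encoder: downsample to a compressed feature map.
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: upsample back to full resolution.
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Stand-in data; in practice these come from the preprocessed LFW images.
gray = np.random.rand(8, 64, 64, 1).astype("float32")
rgb = np.random.rand(8, 64, 64, 3).astype("float32")

# Train one autoencoder per channel (0=R, 1=G, 2=B), then combine.
channels = []
for c in range(3):
    model = build_channel_autoencoder()
    model.fit(gray, rgb[..., c:c + 1], epochs=1, verbose=0)
    channels.append(model.predict(gray, verbose=0))
colorized = np.concatenate(channels, axis=-1)  # (8, 64, 64, 3)
print(colorized.shape)
```

The sigmoid output keeps each predicted channel in [0, 1], so the concatenated result can be displayed directly as a normalized RGB image.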

This project provides exciting insights into the potential of autoencoders for image colorization and opens doors for further applications in computer vision.

Results of converting grayscale images to color images using autoencoders