by Emil Wallner

How to colorize black & white photos with just 100 lines of neural network code

Earlier this year, Amir Avni used neural networks to troll the subreddit /r/Colorization, a community where people manually colorize historical black and white images in Photoshop.

They were astonished by Amir's deep learning bot: what could take up to a month of manual labour could now be done in just a few seconds.

I was fascinated by Amir’s neural network, so I reproduced it and documented the process. First off, let’s look at some of the results/failures from my experiments (scroll to the bottom for the final result).

The original b&w images are from Unsplash

Today, colorization is usually done by hand in Photoshop. To appreciate all the hard work behind this process, take a peek at this gorgeous colorization memory lane video:

In short, a picture can take up to one month to colorize. It requires extensive research. A face alone needs up to 20 layers of pink, green and blue shades to get it just right.

This article is written for beginners. Still, if you're new to deep learning terminology, you can read my previous two posts here and here, and watch Andrej Karpathy's lecture for more background.

I’ll show you how to build your own colorization neural net in three steps.

The first section breaks down the core logic. We'll build a bare-bones 40-line neural network as an "alpha" colorization bot. There's not a lot of magic in this code snippet. This will help us become familiar with the syntax.

The next step is to create a neural network that can generalize — our “beta” version. We’ll be able to color images the bot has not seen before.

For our "final" version, we'll combine our neural network with a classifier. We'll use an Inception ResNet v2 that has been trained on 1.2 million images. To make the coloring pop, we'll train our neural network on portraits from Unsplash.

If you want to look ahead, here’s a Jupyter Notebook with the Alpha version of our bot. You can also check out the three versions on FloydHub and GitHub, along with code for all the experiments I ran on FloydHub’s cloud GPUs.

Core logic

In this section, I’ll outline how to render an image, the basics of digital colors, and the main logic for our neural network.

Black and white images can be represented in grids of pixels. Each pixel has a value that corresponds to its brightness. The values span from 0–255, from black to white.


Color images consist of three layers: a red layer, a green layer, and a blue layer. This might be counter-intuitive to you. Imagine splitting a green leaf on a white background into the three channels. Intuitively, you might think that the plant is only present in the green layer.

But, as you see below, the leaf is present in all three channels. The layers not only determine color, but also brightness.


To achieve the color white, for example, you need an equal distribution of all colors. Adding an equal amount of red and blue to green makes it brighter. Thus, a color image encodes both the color and the contrast using three layers:


Just like black and white images, each layer in a color image has a value from 0–255. The value 0 means that it has no color in this layer. If the value is 0 for all color channels, then the image pixel is black.
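
To make these values concrete, here's a minimal numeric sketch; the values and variable names are just for illustration:

import numpy as np

# A 2x2 grayscale image: 0 is black, 255 is white
grayscale = np.array([[0, 255],
                      [128, 64]], dtype=np.uint8)

# The same idea for single RGB pixels: one value per color layer
black = np.array([0, 0, 0])        # no intensity in any layer
white = np.array([255, 255, 255])  # full, equal intensity in all layers
gray  = np.array([128, 128, 128])  # equal but dimmer intensity in all layers
red   = np.array([255, 0, 0])      # only the red layer is filled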

As you may know, a neural network creates a relationship between an input value and output value. To be more precise with our colorization task, the network needs to find the traits that link grayscale images with colored ones.

In sum, we are searching for the features that link a grid of grayscale values to the three color grids.

f() is the neural network, [B&W] is our input, and [R],[G],[B] is our output.

Alpha version

We’ll start by making a simple version of our neural network to color an image of a woman’s face. This way, you can get familiar with the core syntax of our model as we add features to it.

With just 40 lines of code, we can make the following transition. The middle picture was produced by our neural network, and the picture on the right is the original color photo. The network is trained and tested on the same image; we'll get back to this in the beta version.

Photo by Camila Cordeiro

Color space

First, we’ll use an algorithm to change the color channels, from RGB to Lab. L stands for lightness, and a and b for the color spectra green–red and blue–yellow.

As you can see below, a Lab-encoded image has one layer for grayscale and packs the three color layers into two. This means we can use the original grayscale image in our final prediction. It also leaves us only two channels to predict.
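
Here's a quick sketch of that conversion, using the same skimage function the code later in this article relies on (woman.png is the example image from the alpha version):

from keras.preprocessing.image import img_to_array, load_img
from skimage.color import rgb2lab

image = img_to_array(load_img('woman.png'))   # RGB image as a 400x400x3 array
lab = rgb2lab(1.0/255*image)                  # convert 0-1 RGB values to Lab

L = lab[:,:,0]    # the lightness layer, our grayscale input
ab = lab[:,:,1:]  # the two color layers we want to predict
print(L.shape, ab.shape)   # (400, 400) and (400, 400, 2)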


Science fact — 94% of the cells in our eyes determine brightness. That leaves only 6% of our receptors to act as sensors for colors. As you can see in the above image, the grayscale image is a lot sharper than the color layers. This is another reason to keep the grayscale image in our final prediction.

From B&W to color

Our final prediction looks like this. We have a grayscale layer for input, and we want to predict two color layers, the ab in Lab. To create the final color image, we include the L/grayscale image we used as input, giving us a Lab image.


To turn one layer into two layers, we use convolutional filters. Think of them as the blue/red filters in 3D glasses. Each filter determines what we see in a picture. They can highlight or remove something to extract information out of the picture. The network can either create a new image from a filter or combine several filters into one image.

For a convolutional neural network, each filter is automatically adjusted to help with the intended outcome. We’ll start by stacking hundreds of filters and narrow them down into two layers, the a and b layers.

Before we get into the details of how it works, let's run the code.

Run the code on FloydHub

Click the button below to open a Workspace on FloydHub, where you will find the same environment and dataset used for the full version. You can also find the trained models for Serving.


You can also make a local FloydHub installation with their 2-min installation, watch my 5-min video tutorial, or check out my step-by-step guide. It's the best (and easiest) way to train deep learning models on cloud GPUs.

Alpha version

Once FloydHub is installed, use the following commands:

git clone https://github.com/emilwallner/Coloring-greyscale-images-in-Keras

Open the folder and initiate FloydHub.

cd Coloring-greyscale-images-in-Keras/floydhub
floyd init colornet

The FloydHub web dashboard will open in your browser. You will be prompted to create a new FloydHub project called colornet. Once that's done, go back to your terminal and run the same init command.

floyd init colornet

Okay, let’s run our job:

floyd run --data emilwallner/datasets/colornet/2:data --mode jupyter --tensorboard

Some quick notes about our job:

  • We mounted a public dataset on FloydHub (which I've already uploaded) at the data directory with the line below:
--data emilwallner/datasets/colornet/2:data

You can explore and use this dataset (and many other public datasets) by viewing it on FloydHub.

  • We enabled Tensorboard with --tensorboard
  • We ran the job in Jupyter Notebook mode with --mode jupyter
  • If you have GPU credit, you can also add the GPU flag --gpu to your command. This will make it approximately 50x faster

Go to the Jupyter notebook. Under the Jobs tab on the FloydHub website, click on the Jupyter Notebook link and navigate to this file:

floydhub/Alpha version/working_floyd_pink_light_full.ipynb

Open it and press Shift+Enter in all the cells.

Gradually increase the epoch value to get a feel for how the neural network learns.

model.fit(x=X, y=Y, batch_size=1, epochs=1)

Start with an epoch value of 1, then increase it to 10, 100, 500, 1000, and 3000. The epoch value indicates how many times the neural network learns from the image. You will find the image img_result.png in the main folder once you've trained your neural network.

# Imports needed for this snippet
import numpy as np
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, UpSampling2D
from keras.preprocessing.image import img_to_array, load_img
from skimage.color import rgb2lab, lab2rgb, rgb2gray
from skimage.io import imsave

# Get images
image = img_to_array(load_img('woman.png'))
image = np.array(image, dtype=float)

# Convert the image to the Lab color space
X = rgb2lab(1.0/255*image)[:,:,0]
Y = rgb2lab(1.0/255*image)[:,:,1:]
Y = Y / 128
X = X.reshape(1, 400, 400, 1)
Y = Y.reshape(1, 400, 400, 2)

# Building the neural network
model = Sequential()
model.add(InputLayer(input_shape=(None, None, 1)))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(16, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', strides=2))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(2, (3, 3), activation='tanh', padding='same'))

# Finish model
model.compile(optimizer='rmsprop', loss='mse')

# Train the neural network
model.fit(x=X, y=Y, batch_size=1, epochs=3000)
print(model.evaluate(X, Y, batch_size=1))

# Output colorizations
output = model.predict(X)
output = output * 128
canvas = np.zeros((400, 400, 3))
canvas[:,:,0] = X[0][:,:,0]
canvas[:,:,1:] = output[0]
imsave("img_result.png", lab2rgb(canvas))
imsave("img_gray_scale.png", rgb2gray(lab2rgb(canvas)))

FloydHub command to run this network:

floyd run --data emilwallner/datasets/colornet/2:data --mode jupyter --tensorboard

Technical explanation

To recap, the input is a grid representing a black and white image. It outputs two grids with color values. Between the input and output values, we create filters to link them together. This is a convolutional neural network.

When we train the network, we use colored images. We convert RGB colors to the Lab color space. The black and white layer is our input and the two colored layers are the output.


On the left side, we have the B&W input, our filters, and the prediction from our neural network.

We map the predicted values and the real values within the same interval. This way, we can compare the values. The interval ranges from -1 to 1. To map the predicted values, we use a tanh activation function. For any value you give the tanh function, it will return a value between -1 and 1.

The true color values range between -128 and 128. This is the default interval in the Lab color space. By dividing them by 128, they too fall within the -1 to 1 interval. This “normalization” enables us to compare the error from our prediction.
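
Here's a tiny numeric sketch of these two mappings:

import numpy as np

# tanh squashes any prediction into the -1 to 1 interval
print(np.tanh(5.0), np.tanh(-5.0))      # ~0.9999 and ~-0.9999

# the true a/b values lie between -128 and 128,
# so dividing by 128 puts them on the same -1 to 1 scale
Y_true = np.array([-128.0, 0.0, 64.0, 128.0])
print(Y_true / 128)                     # [-1.  0.  0.5  1.]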

After calculating the final error, the network updates the filters to reduce the total error. The network continues in this loop until the error is as low as possible.

Let’s clarify some syntax in the code snippet.

X = rgb2lab(1.0/255*image)[:,:,0]
Y = rgb2lab(1.0/255*image)[:,:,1:]

1.0/255 indicates that we are using a 24-bit RGB color space. It means that we are using numbers between 0–255 for each color channel. This results in 16.7 million color combinations.

Since humans can only perceive 2–10 million colors, it does not make much sense to use a larger color space.

Y = Y / 128

The Lab color space has a different range in comparison to RGB. The color spectrum ab in Lab ranges from -128 to 128. By dividing all values in the output layer by 128, we bound the range between -1 and 1.

We match it with our neural network, which also returns values between -1 and 1.

After converting the color space using the function rgb2lab() we select the grayscale layer with: [ : , : , 0]. This is our input for the neural network. [ : , : , 1: ] selects the two color layers, green–red and blue–yellow.

After training the neural network, we make a final prediction which we convert into a picture.

output = model.predict(X)
output = output * 128

Here, we use a grayscale image as input and run it through our trained neural network. We take all the output values between -1 and 1 and multiply them by 128. This gives us the correct colors in the Lab color spectrum.

canvas = np.zeros((400, 400, 3))
canvas[:,:,0] = X[0][:,:,0]
canvas[:,:,1:] = output[0]

Lastly, we create a black canvas by filling three layers with zeros. We copy the grayscale layer from our test image into the first layer and add our two predicted color layers. This array of Lab pixel values is then converted into an RGB picture.

Takeaways from the Alpha version

  • Reading research papers is challenging. Once I summarized the core characteristics of each paper, it became easier to skim papers. It also allowed me to put the details into a context.
  • Starting simple is key. Most of the implementations I could find online were 2–10K lines long. That made it hard to get an overview of the core logic of the problem. Once I had a barebones version, it became easier to read both the code implementation and the research papers.
  • Explore public projects. To get a rough idea for what to code, I skimmed 50–100 projects on colorization on Github.
  • Things won't always work as expected. In the beginning, it could only create red and yellow colors. At first, I had a ReLU activation function as the final activation. Since it only maps numbers to positive values, it could not create the negative values needed for the blue and green spectrums. Switching to a tanh activation function and mapping the Y values fixed this.
  • Understanding > Speed. Many of the implementations I saw were fast but hard to work with. I chose to optimize for innovation speed instead of code speed.

Beta version

To understand the weakness of the alpha version, try coloring an image it has not been trained on. If you try it, you’ll see that it makes a poor attempt. It’s because the network has memorized the information. It has not learned how to color an image it hasn’t seen before. But this is what we’ll do in the beta version. We’ll teach our network to generalize.

Below is the result of coloring the validation images with our beta version.

Instead of using Imagenet, I created a public dataset on FloydHub with higher quality images. The images are from Unsplash — creative commons pictures by professional photographers. It includes 9,500 training images and 500 validation images.


The feature extractor

Our neural network finds characteristics that link grayscale images with their colored versions.

Imagine you had to color black and white images — but with the restriction that you can only see nine pixels at a time. You could scan each image from the top left to the bottom right and try to predict which color each pixel should be.


For example, these nine pixels are the edge of the nostril from the woman just above. As you can imagine, it’d be next to impossible to make a good colorization, so you break it down into steps.

First, you look for simple patterns: a diagonal line, all black pixels, and so on. You look for the same exact pattern in each square and remove the pixels that don’t match. You generate 64 new images from your 64 mini filters.

The number of filtered images for each step

If you scan the images again, you'd see the same small patterns you've already detected. To gain a higher level understanding of the image, you decrease the image size by half.

We decrease the size in three steps

You still only have a 3x3 filter to scan each image. But by combining your new nine pixels with your lower level filters, you can detect more complex patterns. One pixel combination might form a half circle, a small dot, or a line. Again, you repeatedly extract the same pattern from the image. This time, you generate 128 new filtered images.
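
Here's a minimal Keras sketch of this idea, mirroring the first layers of the beta model further down (the exact layer sizes are just for illustration):

from keras.models import Sequential
from keras.layers import InputLayer, Conv2D

model = Sequential()
model.add(InputLayer(input_shape=(256, 256, 1)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))             # 64 filtered images at 256x256
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=2))  # a stride of 2 halves the size to 128x128
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))            # 128 filtered images at the smaller size
model.summary()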

After a couple of steps the filtered images you produce might look something like these:

From Keras layer tutorial

As mentioned, you start with low-level features, such as an edge. Layers closer to the output combine these into patterns, then into details, and eventually into a face. This video tutorial provides a further explanation.

The process is similar to that of most neural networks that deal with vision. The type of network here is known as a convolutional neural network. In these networks, you combine several filtered images to understand the context in the image.

From feature extraction to color

The neural network operates in a trial and error manner. It first makes a random prediction for each pixel. Based on the error for each pixel, it works backward through the network to improve the feature extraction.

It starts adjusting for the situations that generate the largest errors. In this case, those are deciding whether to color or not, and how to locate different objects.

The network starts by coloring all the objects brown. It’s the color that is most similar to all other colors, thus producing the smallest error.

Because most of the training data is quite similar, the network struggles to differentiate between different objects. It will fail to generate more nuanced colors. That’s what we’ll explore in the full version.

Below is the code for the beta version, followed by a technical explanation of the code.

# Imports needed for this snippet
import os
import numpy as np
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, UpSampling2D
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from keras.callbacks import TensorBoard
from skimage.color import rgb2lab, lab2rgb
from skimage.io import imsave

# Get images
X = []
for filename in os.listdir('../Train/'):
    X.append(img_to_array(load_img('../Train/'+filename)))
X = np.array(X, dtype=float)

# Set up training and test data
split = int(0.95*len(X))
Xtrain = X[:split]
Xtrain = 1.0/255*Xtrain

# Design the neural network
model = Sequential()
model.add(InputLayer(input_shape=(256, 256, 1)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(2, (3, 3), activation='tanh', padding='same'))
model.add(UpSampling2D((2, 2)))

# Finish model
model.compile(optimizer='rmsprop', loss='mse')

# Image transformer
datagen = ImageDataGenerator(
        shear_range=0.2,
        zoom_range=0.2,
        rotation_range=20,
        horizontal_flip=True)

# Generate training data
batch_size = 50
def image_a_b_gen(batch_size):
    for batch in datagen.flow(Xtrain, batch_size=batch_size):
        lab_batch = rgb2lab(batch)
        X_batch = lab_batch[:,:,:,0]
        Y_batch = lab_batch[:,:,:,1:] / 128
        yield (X_batch.reshape(X_batch.shape+(1,)), Y_batch)

# Train model
TensorBoard(log_dir='/output')
model.fit_generator(image_a_b_gen(batch_size), steps_per_epoch=10000, epochs=1)

# Test images
Xtest = rgb2lab(1.0/255*X[split:])[:,:,:,0]
Xtest = Xtest.reshape(Xtest.shape+(1,))
Ytest = rgb2lab(1.0/255*X[split:])[:,:,:,1:]
Ytest = Ytest / 128
print(model.evaluate(Xtest, Ytest, batch_size=batch_size))

# Load black and white images
color_me = []
for filename in os.listdir('../Test/'):
    color_me.append(img_to_array(load_img('../Test/'+filename)))
color_me = np.array(color_me, dtype=float)
color_me = rgb2lab(1.0/255*color_me)[:,:,:,0]
color_me = color_me.reshape(color_me.shape+(1,))

# Test model
output = model.predict(color_me)
output = output * 128

# Output colorizations
for i in range(len(output)):
    cur = np.zeros((256, 256, 3))
    cur[:,:,0] = color_me[i][:,:,0]
    cur[:,:,1:] = output[i]
    imsave("result/img_"+str(i)+".png", lab2rgb(cur))

Here’s the FloydHub command to run the Beta neural network:

floyd run --data emilwallner/datasets/colornet/2:data --mode jupyter --tensorboard

Technical explanation

The main difference from other visual neural networks is the importance of pixel location. In coloring networks, the image size or ratio stays the same throughout the network. In other types of networks, the image gets distorted the closer it gets to the final layer.

The max-pooling layers in classification networks increase the information density, but also distort the image. They value only the information, not the layout of the image. In coloring networks, we instead use a stride of 2 to decrease the width and height by half. This also increases information density, but does not distort the image.


Two further differences are upsampling layers and maintaining the image ratio. Classification networks only care about the final classification. Therefore, they keep decreasing the image size and quality as the image moves through the network.

Coloring networks keep the image ratio constant by adding white padding, as in the visualization above. Otherwise, each convolutional layer would crop the images. This is what the padding='same' parameter does.

To double the size of the image, the coloring network uses an upsampling layer.
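
Here's a minimal sketch of both ideas (with padding='valid' instead, a 3x3 convolution would trim a 256x256 image to 254x254):

from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, UpSampling2D

model = Sequential()
model.add(InputLayer(input_shape=(256, 256, 1)))
model.add(Conv2D(8, (3, 3), padding='same', strides=2))  # 256x256 -> 128x128, nothing is cropped
model.add(UpSampling2D((2, 2)))                          # 128x128 -> 256x256 again
model.summary()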

for filename in os.listdir('/Color_300/Train/'):
    X.append(img_to_array(load_img('/Color_300/Train/'+filename)))

This for-loop lists all the file names in the directory, iterates through them, and converts each image into an array of pixels. Finally, the arrays are combined into one giant tensor.

datagen = ImageDataGenerator(
        shear_range=0.2,
        zoom_range=0.2,
        rotation_range=20,
        horizontal_flip=True)

With ImageDataGenerator, we adjust the settings for our image generator. This way, an image is never exactly the same twice, which improves the network's ability to generalize. shear_range tilts the image to the left or right, and the other settings are zoom, rotation, and horizontal flip.

batch_size = 50
def image_a_b_gen(batch_size):
    for batch in datagen.flow(Xtrain, batch_size=batch_size):
        lab_batch = rgb2lab(batch)
        X_batch = lab_batch[:,:,:,0]
        Y_batch = lab_batch[:,:,:,1:] / 128
        yield (X_batch.reshape(X_batch.shape+(1,)), Y_batch)

We use the images from our folder, Xtrain, to generate images based on the settings above. Then, we extract the black and white layer for the X_batch and the two colors for the two color layers.

model.fit_generator(image_a_b_gen(batch_size), steps_per_epoch=1, epochs=1000)

The stronger your GPU is, the more images you can fit into each batch. With this setup, you can use 50–100 images. steps_per_epoch is calculated by dividing the number of training images by your batch size.

For example: 100 images with a batch size of 50 gives 2 steps per epoch. The number of epochs determines how many times you want to train all images. 10K images with 21 epochs will take about 11 hours on a Tesla K80 GPU.
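
Here's the same arithmetic written out, using the 10K-image example above:

num_train_images = 10000                            # size of the training set
batch_size = 50
steps_per_epoch = num_train_images // batch_size    # 200 steps, so every image is used once per epoch
epochs = 21                                         # the network sees each image 21 times in total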

Takeaways

  • Run a lot of experiments in smaller batches before you make larger runs. Even after 20–30 experiments, I still found mistakes. Just because it’s running doesn’t mean it’s working. Bugs in a neural network are often more nuanced than traditional programming errors. One of the more bizarre ones was my Adam hiccup.
  • A more diverse dataset makes the pictures brownish. If you have very similar images, you can get a decent result without needing a more complex architecture. The trade-off is the network becomes worse at generalizing.
  • Shapes, shapes, and shapes. The size of each image has to be exact and remain proportional throughout the network. In the beginning, I used an image size of 300. Halving this three times gives sizes of 150, 75, and 37.5. The result is losing half a pixel! This led to many "hacks" until I realized it's better to use a power of two: 2, 8, 16, 32, 64, 256 and so on (see the sketch after this list).
  • Creating datasets: a) Disable the .DS_Store file, it drove me crazy. b) Be creative. I ended up with a Chrome console script and an extension to download the files. c) Make a copy of the raw files you scrape and structure your cleaning scripts.
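
Here's the shape arithmetic from the third takeaway as a small sketch:

size = 300
for _ in range(3):
    size = size / 2
print(size)     # 37.5, half a pixel is lost

size = 256
for _ in range(3):
    size = size // 2
print(size)     # 32, a power of two divides cleanly every time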

Full version

Our final version of the colorization neural network has four components. We split the network we had before into an encoder and a decoder. Between them, we’ll use a fusion layer. If you are new to classification networks, I’d recommend having a look at this tutorial.

In parallel to the encoder, the input images also run through one of today's most powerful classifiers — the Inception ResNet v2. This is a neural network trained on 1.2M images. We extract the classification layer and merge it with the output from the encoder.


Here is a more detailed visual from the original paper.

By transferring the learning from the classifier to the coloring network, the network can get a sense of what's in the picture. This enables it to match an object representation with a coloring scheme.

Here are some of the validation images, from a network trained on only 20 images.


Most of the images turned out poorly. But I was able to find a few decent ones thanks to a large validation set (2,500 images). Training on more images gave a more consistent result, but most of them turned out brownish. Here is a full list of the experiments I ran, including the validation images.

Here are the most common architectures from previous research, with links:

  • Manually adding small dots of color in a picture to guide the neural network (link)
  • Finding a matching image and transferring the coloring (learn more here and here)
  • Residual encoder and merging classification layers (link)
  • Merging hypercolumns from a classifying network (more detail here and here)
  • Merging the final classification between the encoder and decoder (details here and here)

Colorspaces: Lab, YUV, HSV, and LUV (more detail here and here)

Loss: Mean square error, classification, weighted classification (link)

I chose the ‘fusion layer’ architecture (the fifth one in the list above).

This was because it produces some of the best results. It is also easier to understand and reproduce in Keras. Although it’s not the strongest colorization network design, it is a good place to start. It’s a great architecture to understand the dynamics of the coloring problem.

I used the neural network design from this paper by Federico Baldassarre and collaborators. I proceeded with my own interpretation in Keras.

Note: in the below code I switch from Keras’ sequential model to their functional API. [Documentation]

# Imports needed for this snippet
import os
import numpy as np
import tensorflow as tf
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
from keras.models import Model
from keras.layers import Conv2D, UpSampling2D, Input, Reshape, RepeatVector, concatenate
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from keras.callbacks import TensorBoard
from skimage.color import rgb2lab, lab2rgb, rgb2gray, gray2rgb
from skimage.transform import resize
from skimage.io import imsave

# Get images
X = []
for filename in os.listdir('/data/images/Train/'):
    X.append(img_to_array(load_img('/data/images/Train/'+filename)))
X = np.array(X, dtype=float)
Xtrain = 1.0/255*X

# Load weights
inception = InceptionResNetV2(weights=None, include_top=True)
inception.load_weights('/data/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5')
inception.graph = tf.get_default_graph()

embed_input = Input(shape=(1000,))

# Encoder
encoder_input = Input(shape=(256, 256, 1,))
encoder_output = Conv2D(64, (3,3), activation='relu', padding='same', strides=2)(encoder_input)
encoder_output = Conv2D(128, (3,3), activation='relu', padding='same')(encoder_output)
encoder_output = Conv2D(128, (3,3), activation='relu', padding='same', strides=2)(encoder_output)
encoder_output = Conv2D(256, (3,3), activation='relu', padding='same')(encoder_output)
encoder_output = Conv2D(256, (3,3), activation='relu', padding='same', strides=2)(encoder_output)
encoder_output = Conv2D(512, (3,3), activation='relu', padding='same')(encoder_output)
encoder_output = Conv2D(512, (3,3), activation='relu', padding='same')(encoder_output)
encoder_output = Conv2D(256, (3,3), activation='relu', padding='same')(encoder_output)

# Fusion
fusion_output = RepeatVector(32 * 32)(embed_input)
fusion_output = Reshape(([32, 32, 1000]))(fusion_output)
fusion_output = concatenate([encoder_output, fusion_output], axis=3)
fusion_output = Conv2D(256, (1, 1), activation='relu', padding='same')(fusion_output)

# Decoder
decoder_output = Conv2D(128, (3,3), activation='relu', padding='same')(fusion_output)
decoder_output = UpSampling2D((2, 2))(decoder_output)
decoder_output = Conv2D(64, (3,3), activation='relu', padding='same')(decoder_output)
decoder_output = UpSampling2D((2, 2))(decoder_output)
decoder_output = Conv2D(32, (3,3), activation='relu', padding='same')(decoder_output)
decoder_output = Conv2D(16, (3,3), activation='relu', padding='same')(decoder_output)
decoder_output = Conv2D(2, (3, 3), activation='tanh', padding='same')(decoder_output)
decoder_output = UpSampling2D((2, 2))(decoder_output)

model = Model(inputs=[encoder_input, embed_input], outputs=decoder_output)

# Create embedding
def create_inception_embedding(grayscaled_rgb):
    grayscaled_rgb_resized = []
    for i in grayscaled_rgb:
        i = resize(i, (299, 299, 3), mode='constant')
        grayscaled_rgb_resized.append(i)
    grayscaled_rgb_resized = np.array(grayscaled_rgb_resized)
    grayscaled_rgb_resized = preprocess_input(grayscaled_rgb_resized)
    with inception.graph.as_default():
        embed = inception.predict(grayscaled_rgb_resized)
    return embed

# Image transformer
datagen = ImageDataGenerator(
        shear_range=0.4,
        zoom_range=0.4,
        rotation_range=40,
        horizontal_flip=True)

# Generate training data
batch_size = 20

def image_a_b_gen(batch_size):
    for batch in datagen.flow(Xtrain, batch_size=batch_size):
        grayscaled_rgb = gray2rgb(rgb2gray(batch))
        embed = create_inception_embedding(grayscaled_rgb)
        lab_batch = rgb2lab(batch)
        X_batch = lab_batch[:,:,:,0]
        X_batch = X_batch.reshape(X_batch.shape+(1,))
        Y_batch = lab_batch[:,:,:,1:] / 128
        yield ([X_batch, create_inception_embedding(grayscaled_rgb)], Y_batch)

# Train model
tensorboard = TensorBoard(log_dir="/output")
model.compile(optimizer='adam', loss='mse')
model.fit_generator(image_a_b_gen(batch_size), callbacks=[tensorboard], epochs=1000, steps_per_epoch=20)

# Make a prediction on the unseen images
color_me = []
for filename in os.listdir('../Test/'):
    color_me.append(img_to_array(load_img('../Test/'+filename)))
color_me = np.array(color_me, dtype=float)
color_me = 1.0/255*color_me
color_me = gray2rgb(rgb2gray(color_me))
color_me_embed = create_inception_embedding(color_me)
color_me = rgb2lab(color_me)[:,:,:,0]
color_me = color_me.reshape(color_me.shape+(1,))

# Test model
output = model.predict([color_me, color_me_embed])
output = output * 128

# Output colorizations
for i in range(len(output)):
    cur = np.zeros((256, 256, 3))
    cur[:,:,0] = color_me[i][:,:,0]
    cur[:,:,1:] = output[i]
    imsave("result/img_"+str(i)+".png", lab2rgb(cur))

Here’s the FloydHub command to run the full neural network:

floyd run --data emilwallner/datasets/colornet/2:data --mode jupyter --tensorboard

Technical Explanation

Keras’ functional API is ideal when we are concatenating or merging several models.


First, we download the Inception ResNet v2 neural network and load the weights. Since we will be using two models in parallel, we need to specify which model we are using. This is done in TensorFlow, the backend for Keras.

inception = InceptionResNetV2(weights=None, include_top=True)
inception.load_weights('/data/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5')
inception.graph = tf.get_default_graph()

To create our batch, we use the tweaked images. We convert them to black and white and run them through the Inception ResNet model.

grayscaled_rgb = gray2rgb(rgb2gray(batch))
embed = create_inception_embedding(grayscaled_rgb)

First, we have to resize the image to fit into the Inception model. Then we use the preprocessor to format the pixel and color values according to the model. In the final step, we run it through the Inception network and extract the final layer of the model.

def create_inception_embedding(grayscaled_rgb):
    grayscaled_rgb_resized = []
    for i in grayscaled_rgb:
        i = resize(i, (299, 299, 3), mode='constant')
        grayscaled_rgb_resized.append(i)
    grayscaled_rgb_resized = np.array(grayscaled_rgb_resized)
    grayscaled_rgb_resized = preprocess_input(grayscaled_rgb_resized)
    with inception.graph.as_default():
        embed = inception.predict(grayscaled_rgb_resized)
    return embed

Let's go back to the generator. For each batch, we generate 20 images in the format below. It takes about an hour on a Tesla K80 GPU. The model can handle up to 50 images at a time without memory problems.

yield ([X_batch, create_inception_embedding(grayscaled_rgb)], Y_batch)

This matches with our colornet model format.

model = Model(inputs=[encoder_input, embed_input], outputs=decoder_output)

encoder_input is fed into our encoder model. The output of the encoder is then fused with the embed_input in the fusion layer. The output of the fusion layer is used as input to our decoder model, which returns the final output, decoder_output.

fusion_output = RepeatVector(32 * 32)(embed_input)
fusion_output = Reshape(([32, 32, 1000]))(fusion_output)
fusion_output = concatenate([fusion_output, encoder_output], axis=3)
fusion_output = Conv2D(256, (1, 1), activation='relu')(fusion_output)

In the fusion layer, we first repeat the 1000-category layer 1024 times (32 * 32). This way, we get 1024 rows containing the final layer from the Inception model.

This is then reshaped from 2D to 3D: a 32 x 32 grid with 1000-category pillars. These are then concatenated with the output from the encoder model. Finally, we apply 256 convolutional filters with a 1x1 kernel, producing the final output of the fusion layer.
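
To make the shapes explicit, here's a small sketch of the fusion step on its own; the two Input placeholders stand in for the real Inception embedding and encoder output:

from keras.layers import Input, RepeatVector, Reshape, Conv2D, concatenate

embed_input = Input(shape=(1000,))            # Inception's 1000-category output
encoder_output = Input(shape=(32, 32, 256))   # stand-in for the encoder's output

fusion = RepeatVector(32 * 32)(embed_input)                               # (1024, 1000): one copy per grid cell
fusion = Reshape((32, 32, 1000))(fusion)                                  # a 32x32 grid of 1000-value pillars
fusion = concatenate([encoder_output, fusion], axis=3)                    # (32, 32, 1256): encoder features + embedding
fusion = Conv2D(256, (1, 1), activation='relu', padding='same')(fusion)   # back to (32, 32, 256)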

Takeaways

  • The research terminology was daunting. I spent three days googling for ways to implement the "fusion model" in Keras. Because it sounded complex, I didn't want to face the problem. Instead, I tricked myself into searching for shortcuts.
  • I asked questions online. I didn't get a single comment in the Keras Slack channel, and Stack Overflow deleted my questions. But publicly breaking down the problem to make it simple to answer forced me to isolate the error, taking me closer to a solution.
  • Email people. Although forums can be cold, people care if you connect with them directly. Discussing color spaces over Skype with a researcher is inspiring!
  • After delaying on the fusion problem, I decided to build all the components before I stitched them together. Here are a few experiments I used to break down the fusion layer.
  • Once I had something I thought would work, I was hesitant to run it. Although I knew the core logic was okay, I didn’t believe it would work. After a cup of lemon tea and a long walk — I ran it. It produced an error after the first line in my model. But after four days, several hundred bugs and several thousand Google searches, “Epoch 1/22” appeared under my model.

Next steps

Colorizing images is a deeply fascinating problem. It is as much a scientific problem as an artistic one. I wrote this article so you can get up to speed in coloring and continue where I left off. Here are some suggestions to get started:

  • Implement it with another pre-trained model
  • Try a different dataset
  • Increase the network’s accuracy by using more pictures
  • Build an amplifier within the RGB color space. Create a model similar to the coloring network that takes a saturated colored image as input and the correctly colored image as output.
  • Implement a weighted classification
  • Apply it to video. Don’t worry too much about the colorization, but make the switch between images consistent. You could also do something similar for larger images, by tiling smaller ones.

You can also easily colorize your own black and white images with my three versions of the colorization neural network using FloydHub.

  • For the alpha version, simply replace the woman.jpg file with your own file of the same name (image size 400x400 pixels).
  • For the beta and the full version, add your images to the Test folder before you run the FloydHub command. You can also upload them directly to the Test folder in the Notebook while it is running. Note that these images need to be exactly 256x256 pixels. Also, you can upload test images in color, because they will automatically be converted to B&W (see the sketch below for one way to resize them).
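
If your own photos aren't the right size, a small sketch along these lines can resize them (my_photo.jpg and the output path are placeholders):

import numpy as np
from skimage.io import imread, imsave
from skimage.transform import resize

image = imread('my_photo.jpg')                      # your own picture
image = resize(image, (256, 256), mode='constant')  # scale to 256x256; values become 0-1 floats
imsave('Test/my_photo.png', (image * 255).astype(np.uint8))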

If you build something or get stuck, ping me on Twitter: emilwallner. I'd love to see what you are building.

Huge thanks to Federico Baldassarre for answering my questions and for their previous work on colorization. Also thanks to Muthu Chidambaram, who influenced the core implementation in Keras, and the Unsplash community for providing the pictures. Thanks also to Marine Haziza, Valdemaras Repsys, Qingping Hou, Charlie Harrington, Sai Soundararaj, Jannes Klaas, Claudio Cabral, Alain Demenet, and Ignacio Tonoli for reading drafts of this.

About Emil Wallner

This is the third part in a multi-part blog series from Emil as he learns deep learning. Emil has spent a decade exploring human learning. He's worked for Oxford's business school, invested in education startups, and built an education technology business. Last year, he enrolled at Ecole 42 to apply his knowledge of human learning to machine learning.

You can follow along with Emil on Twitter and Medium.

This was first published as a community post on FloydHub's blog.