Tutorial 4: Visualizing and Modifying DL Networks

Laura E. Boucheron, Electrical & Computer Engineering, NMSU

October 2020

Copyright (C) 2020 Laura E. Boucheron

This information is free; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.

This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this work; if not, If not, see https://www.gnu.org/licenses/.

Overview

In this tutorial, we pick up with the trained MNIST Network from Tutorial 2 and explore some ways of probing the characteristics of the trained network to help us debug common pitfalls in adapting network architectures. 

This tutorial contains 5 sections:

  • Section 0: Preliminaries: some notes on using this notebook, how to download the image dataset that we will use for this tutorial, and import commands for the libraries necessary for this tutorial
  • Section 1: Printing Characteristics of the CNN how to print textual summaries of the CNN architecture
  • Section 2: Visualizing Activations how to filter an example image through thhe MNIST network and visualize the activations
  • Section 3: Inputting New and Different Data to the Network how to process new data to be compatible with the MNIST network and the effects of showing a non-digit image to the network
  • Section 4: The VGG Network an exploration of the VGG16 network.

There are a few subsections with the heading " Your turn: " throughout this tutorial in which you will be asked to apply what you have learned.

Portions of this tutorial have been taken or adapted from https://machinelearningmastery.com/how-to-visualize-filters-and-feature-maps-in-convolutional-neural-networks/ and the documentation at https://keras.io.

Section 0: Preliminaries

A Note on Jupyter Notebooks

There are two main types of cells in this notebook: code and markdown (text). You can add a new cell with the plus sign in the menu bar above and you can change the type of cell with the dropdown menu in the menu bar above. As you complete this tutorial, you may wish to add additional code cells to try out your own code and markdown cells to add your own comments or notes.

Markdown cells can be augmented with a number of text formatting features, including

  • bulleted
  • lists

embedded $\LaTeX$, monotype specification of code syntax, bold font, and italic font. There are many other features of markdown cells--see the jupyter documentation for more information.

You can edit a cell by double clicking on it. If you double click on this cell, you can see how to implement the various formatting referenced above. Code cells can be run and markdown cells can be formatted using Shift+Enter or by selecting the Run button in the toolbar above.

Once you have completed (all or part) of this notebook, you can share your results with colleagues by sending them the .ipynb file. Your colleagues can then open the file and will see your markdown and code cells as well as any results that were printed or displayed at the time you saved the notebook. If you prefer to send a notebook without results displayed (like this notebook appeared when you downloaded it), you can select ("Restart & Clear Output") from the Kernel menu above. You can also export this notebook in a non-executable form, e.g., .pdf through the File, Save As menu.

Section 0.1 Downloading Images

Download the my_digits1_compressed.jpg and latest_256_0193.jpg files available on the workshop webpage. We will use those images in Sections 3 and 4 of this tutorial.

We will also use the cameraman.png and peppers.png files that we used in Tutorial 1 and the CalTech101 dataset that we used in Tutorial 2.

Section 0.2a Import Necessary Libraries (For users using a local machine)

Here, at the top of the code, we import all the libraries necessary for this tutorial. We will introduce the functionality of any new libraries throughout the tutorial, but include all import statements here as standard coding practice. We include a brief comment after each library here to indicate its main purpose within this tutorial.

It would be best to run this next cell before the workshop starts to make sure you have all the necessary packages installed on your machine.

A few other notes:

  • After the first import of keras packages, you may get a printout in a pink box that states
    Using Theano backend
    or
    Using TensorFlow backend
  • You may get one or more warnings complaining about various configs. As long as you don't get any errors, you should be good to go. You can, if you wish, fix whatever is causing a warning at a later point in time. I find it best to copy and paste the error warning itself into a Google search and tack on the OS in which you encountered the error. Seldom have I encountered an error that someone else hasn't encountered in my same OS.
  • The third to the last line in the following code cell imports the MNIST dataset.
  • The last two lines load the VGG16 network and the weights for that network trained on the ImageNet dataset. The code below will load the VGG16 network, trained on ImageNet. The first time this code is run, the trained network will be downloaded. Subsequent times, the trained network will be loaded from the local disk. This network is very large (528 MB) as we will see shortly, so it may take some time to download. Generally, we would include the last line below as part of our code rather than imports, but we include it here to allow that download to complete before the workshop.
In [1]:
import numpy as np # mathematical and scientific functions
import imageio # image reading capabilities
import skimage.color # functions for manipulating color images
import skimage.transform # functions for transforms on images
import matplotlib.pyplot as plt # visualization

# format matplotlib options
%matplotlib inline
plt.rcParams.update({'font.size': 16})

import keras.backend # information on the backend that keras is using
from keras.models import Model # a generic keras model class used to modify architectures
from keras.utils import np_utils # functions to wrangle label vectors
from keras.models import Sequential # the basic deep learning model
from keras.layers import Dense, Flatten, Convolution2D, MaxPooling2D # important CNN layers
from keras.models import load_model # to load a pre-saved model (may require hdf libraries installed)
from keras.preprocessing.image import load_img # keras method to read in images 
from keras.preprocessing.image import img_to_array # keras method to convert images to numpy array
from keras.applications.vgg16 import preprocess_input # keras method to transform images to VGG16 expected characteristics
from keras.applications.vgg16 import decode_predictions # keras method to present highest ranked categories
from keras.preprocessing.image import ImageDataGenerator # framework to input batches of images into keras

from keras.datasets import mnist # the MNIST dataset
from keras.applications import vgg16 # the VGG network
model_vgg16 = vgg16.VGG16(include_top=True,weights='imagenet') # download the ImageNet weights for VGG16
Using TensorFlow backend.

Section 0.2b Build the Conda Environment (For users using the ARS HPC Ceres with JupyterLab)

Open a terminal from inside JupyterLab (File > New > Terminal) and type the following commands

source activate
conda create --name NMSU-AI-Workshop_image-processing python=3.7 numpy matplotlib imageio scikit-image ipykernel -y

It may take 5 minutes to build the Conda environment.

When the environment finishes building, select this environment as your kernel in your Jupyter Notebook (click top right corner where you see Python 3, select your new kernel from the dropdown menu, click select)

You will want to do this BEFORE the workshop starts.

Section 0.4 Load Your Trained MNIST Model

At the end of Tutorial 3 we saved the trained MNIST model model1 in model1.h5. Here will load that model and we can pick up right where we left off.

If you were not able to save the model at the end of Tutorial 3, you can re-run the training of the MNIST model here before we start the rest of the tutorial. For your convenience, below is the complete code that will load and preprocess the MNIST data and define and train the model. You can cut and paste the code here into a code cell in this notebook and run it.

from keras.datasets import mnist
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
from keras.models import Sequential
from keras.layers import Dense, Flatten, Convolution2D, MaxPooling2D
model1 = Sequential()
model1.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(28,28,1)))
model1.add(Convolution2D(32, (3, 3), activation='relu'))
model1.add(MaxPooling2D(pool_size=(2,2)))
model1.add(Flatten())
model1.add(Dense(128, activation='relu'))
model1.add(Dense(10, activation='softmax'))
model1.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
model1.fit(X_train, Y_train, batch_size=64, epochs=1, verbose=1)
In [2]:
model1 = load_model('model1.h5')

We have now loaded the trained MNIST model from Tutorial 3. Since this is a new notebook, however, we do not have the actual MNIST data loaded. We copy the code for loading and preprocessing the MNIST data from the Tutorial 3 notebook.

In [3]:
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)

Section 1: Printing Characteristics of the CNN

1.1 The summary method

The summary method of a keras model will display a basic text summary of the CNN architecture with layer name, layer type, output shape, and number of parameters.

In [4]:
model1.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 32)        9248      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               589952    
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 600,810
Trainable params: 600,810
Non-trainable params: 0
_________________________________________________________________

A note about the None value in shape

The None value in the output shapes is used as a placehoder before the network knows how many samples it will be processing.

Using tab-compete to explore attributes and methods

The tab-complete feature of ipython can be very helpful to explore the available attributes and methods for a variable. There are useful attributes and methods for the model model1 and for the layers in the model, accessed with the layers attribute of the model.

Your turn:

Explore the attributes and methods of the model variable model1 by placing your cursor after the . and pressing tab.

In [5]:
help(model1.summary)
Help on method summary in module keras.engine.network:

summary(line_length=None, positions=None, print_fn=None) method of keras.engine.sequential.Sequential instance
    Prints a string summary of the network.
    
    # Arguments
        line_length: Total length of printed lines
            (e.g. set this to adapt the display to different
            terminal window sizes).
        positions: Relative or absolute positions of log elements
            in each line. If not provided,
            defaults to `[.33, .55, .67, 1.]`.
        print_fn: Print function to use.
            It will be called on each line of the summary.
            You can set it to a custom function
            in order to capture the string summary.
            It defaults to `print` (prints to stdout).

Your turn:

Explore the attributes and methods of the layers in model1. Change the index into model1.layers in the first code cell and run that cell to access the different CNN layers. Then, place your cursor after the . and press tab.

In [6]:
layer = model1.layers[0]
In [7]:
help(layer.activation)
Help on function relu in module keras.activations:

relu(x, alpha=0.0, max_value=None, threshold=0.0)
    Rectified Linear Unit.
    
    With default values, it returns element-wise `max(x, 0)`.
    
    Otherwise, it follows:
    `f(x) = max_value` for `x >= max_value`,
    `f(x) = x` for `threshold <= x < max_value`,
    `f(x) = alpha * (x - threshold)` otherwise.
    
    # Arguments
        x: Input tensor.
        alpha: float. Slope of the negative part. Defaults to zero.
        max_value: float. Saturation threshold.
        threshold: float. Threshold value for thresholded activation.
    
    # Returns
        A tensor.

1.2 A layer-wise summary of input and output shapes

While the summary method of the model prints some useful information, there are additional pieces of information that can be very useful. Below is a function definition which will print a layer-wise summary of the input and output shapes. This information can be very helpful in helping to understand and debug the workings (or non-workings) of the model. This code loops over each layer in the model using the layers attribute of the model.

In [8]:
def print_shapes(model):
    print('Layer Name\t\tType\t\tInput Shape\t\tOutput Shape\tTrainable')# print column headings
    for layer in model.layers:  # loop over layers
        lname = layer.name # grab layer name
        ltype = type(layer).__name__ # grab layer type
        ltype[ltype.find('/'):] # parse for only the last part of the string
        if ltype=='Conv2D': # print for convolutional layers
            print(lname+'\t\t'+ltype+'\t\t'+str(layer.input_shape)+'\t'+\
                  str(layer.output_shape)+'\t'+str(layer.trainable))
        elif ltype=='MaxPooling2D': # print for maxpool layers
            print(lname+'\t\t'+ltype+'\t'+str(layer.input_shape)+'\t'+\
                  str(layer.output_shape))
        elif ltype=='Flatten': # print for flatten layers
            print(lname+'\t\t'+ltype+'\t\t'+str(layer.input_shape)+'\t'+\
                  str(layer.output_shape))
        elif ltype=='Dense': # print for dense layers
            print(lname+'\t\t\t'+ltype+'\t\t'+str(layer.input_shape)+'\t\t'+\
                  str(layer.output_shape)+'\t'+str(layer.trainable))

We can print a summary of the input and output shapes by passing model1 to the print_shapes function.

In [9]:
print_shapes(model1)
Layer Name		Type		Input Shape		Output Shape	Trainable
conv2d_1		Conv2D		(None, 28, 28, 1)	(None, 26, 26, 32)	True
conv2d_2		Conv2D		(None, 26, 26, 32)	(None, 24, 24, 32)	True
max_pooling2d_1		MaxPooling2D	(None, 24, 24, 32)	(None, 12, 12, 32)
flatten_1		Flatten		(None, 12, 12, 32)	(None, 4608)
dense_1			Dense		(None, 4608)		(None, 128)	True
dense_2			Dense		(None, 128)		(None, 10)	True

Your turn:

Does this summary reconcile with the discussion in Tutorial 3 about the architecture of the MNIST model? You might find it helpful to refer to the Tutorial 3 slides with the visualization of the MNIST network.

These shapes are consistent with the discussion in the Tutorial 2 slides.

1.3 A layer-wise summary of filter shape and parameters

Below is a function definition which will print a layer-wise summary of the filters and parameters. This information can also be very helpful in helping to understand and debug the workings (or non-workings) of the model.

In [10]:
def print_params(model):
    total_params = 0 # initialize counter for total params
    trainable_params = 0 # initialize counter for trainable params
    print('Layer Name\t\tType\t\tFilter shape\t\t# Parameters\tTrainable') # print column headings
    for layer in model.layers: # loop over layers
        lname = layer.name # grab layer name
        ltype = type(layer).__name__ # grab layer type
        ltype[ltype.find('/'):] # parse for only the last part of the string
        if ltype=='Conv2D': # print for convolutional layers
            weights = layer.get_weights()
            print(lname+'\t\t'+ltype+'\t\t'+str(weights[0].shape)+'\t\t'+\
                  str(layer.count_params())+'\t'+str(layer.trainable))
            if layer.trainable:
                trainable_params += layer.count_params()
            total_params += layer.count_params() # update number of params
        elif ltype=='MaxPooling2D': # print for max pool layers
            weights = layer.get_weights()
            print(lname+'\t\t'+ltype+'\t---------------\t\t---')
        elif ltype=='Flatten': # print for flatten layers
            print(lname+'\t\t'+ltype+'\t\t---------------\t\t---')
        elif ltype=='Dense': # print for dense layers
            weights = layer.get_weights()
            print(lname+'\t\t\t'+ltype+'\t\t'+str(weights[0].shape)+'\t\t'+\
                  str(layer.count_params())+'\t'+str(layer.trainable))
            if layer.trainable:
                trainable_params += layer.count_params()
            total_params += layer.count_params() # update number of params
    print('---------------')
    print('Total trainable parameters: '+str(trainable_params)) # print total params
    print('Total untrainable parameters: '+str(total_params-trainable_params))
    print('Total parameters: '+str(total_params))

We can print a summary of the input and output shapes by passing model1 to the print_shapes function.

In [11]:
print_params(model1)
Layer Name		Type		Filter shape		# Parameters	Trainable
conv2d_1		Conv2D		(3, 3, 1, 32)		320	True
conv2d_2		Conv2D		(3, 3, 32, 32)		9248	True
max_pooling2d_1		MaxPooling2D	---------------		---
flatten_1		Flatten		---------------		---
dense_1			Dense		(4608, 128)		589952	True
dense_2			Dense		(128, 10)		1290	True
---------------
Total trainable parameters: 600810
Total untrainable parameters: 0
Total parameters: 600810

Check out the total number of parameters!!!

The MNIST model that we trained yesterday has more than 600,000 parameters that it learned during training! We will see later in this tutorial that is is actually a very small network.

We note a few things about the number of parameters per layer:

  • The second conv layer has a lot more parameters than the first. That is due to the fact that the second conv layer filters across all 32 channels of the activations from the first conv layer.
  • The max pool and flatten layers don't have any parameters.
  • The fully connected (dense) layers are the source of a large proportion of the total parameters in this network.

Your turn:

What implications does the number of trainable parameters per layer have on transfer learning and decisions about which layers to freeze? There is a code cell below prepopluated with the code to clone model1 and to freeze all but the last layer for you to modify and explore using the print_params and print_shapes functions defined above.

In [12]:
model1_clone = keras.models.clone_model(model1)
model1_clone.set_weights(model1.get_weights())

for layer in model1_clone.layers[:-1]:
    layer.trainable=False
    
print_params(model1_clone)
print('')
print_shapes(model1_clone)
Layer Name		Type		Filter shape		# Parameters	Trainable
conv2d_1		Conv2D		(3, 3, 1, 32)		320	False
conv2d_2		Conv2D		(3, 3, 32, 32)		9248	False
max_pooling2d_1		MaxPooling2D	---------------		---
flatten_1		Flatten		---------------		---
dense_1			Dense		(4608, 128)		589952	False
dense_2			Dense		(128, 10)		1290	True
---------------
Total trainable parameters: 1290
Total untrainable parameters: 599520
Total parameters: 600810

Layer Name		Type		Input Shape		Output Shape	Trainable
conv2d_1		Conv2D		(None, 28, 28, 1)	(None, 26, 26, 32)	False
conv2d_2		Conv2D		(None, 26, 26, 32)	(None, 24, 24, 32)	False
max_pooling2d_1		MaxPooling2D	(None, 24, 24, 32)	(None, 12, 12, 32)
flatten_1		Flatten		(None, 12, 12, 32)	(None, 4608)
dense_1			Dense		(None, 4608)		(None, 128)	False
dense_2			Dense		(None, 128)		(None, 10)	True

A note on number of parameters per layer

This cell describes a few more details of how you can reconcile the filter shape and the total number of parameters. The uninterested reader can skip this section without impeding their ability to complete the rest of the tutorial.

Recall that a basic neuron has a set of weights and a bias. The parameters that must be learned in deep learning layers include both the weights and biases. We break down the computation for convolutional layers and for fully connected (dense) layers. We will use the same notation as from the Tutorial 3 slides.

Convolutional layers:

Each filter in a convolutional layer has $K\cdot K\cdot C$ weights and one bias, where $K$ is the kernel size and $C$ is the number of channels. Thus we have a total of $M_{conv}\cdot K\cdot K\cdot C$ weights and $M_{conv}$ biases, where $M_{conv}$ is the number of filters in the layer. This means a total number of trainable parameters of $M_{conv}(K^2C+1)$.

Fully connected layers:

Each node in a fully connected layer is connected to every node in the previous layer and we thus have $M_{FC}^{(i-1)}$ weights and one bias per node, where $M_{FC}^{(i-1)}$ is the number of weights in the previous fully connected (or flattened) layer. Thus we have a total of $M_{FC}^{(i)}\cdot M_{FC}^{(i-1)}$ weights and $M_{FC}^{(i)}$ biases, where $M_{FC}^{(i)}$ is the number of nodes in the current fully connected layer. This means we have a total number of trainable parameters of $M_{FC}^{(i)}(M_{FC}^{(i-1)}+1)$.

Section 2 Visualizing Activations

In this section we will explore means to visualize the activations in different layers throughout the network.

Section 2.1 Wrangling the example input image dimensions

The responses (activations) for each filter in a layer can be computed by sending an example image through the network and requesting that the network report the output at the layer of interest (rather than at the output layer).

First, we need to choose an image to filter through the network. It is this image for which the activations will be computed. We begin here with the first test image. Recall that the network expects a tensor in the form samples$\times28\times28\times1$. In this case, we'll be providing only one sample, so we need our input to be $1\times28\times28\times1$.

The following code reshapes the zeroth test image with is shape $28\times28\times1$ into a tensor of shape $1\times28\times28\times1$ where the leading dimension of 1 is just wrangling the dimensionality to the samples$\times28\times28\times1$ format expected of an input tensor.

In [13]:
X_example = X_test[0].reshape(1,28,28,1)

We can plot the image to give us an idea of the appearance of the original image. This will allow us to better analyze the filtered images that we will see when we plot the activations. Since we have added some extra dimensions to this image, we use the np.squeeze function to remove those dimensions with only one entry before sending it to plt.imshow. In this case, the np.squeeze function returns a $28\times28$ numpy array.

In [14]:
plt.figure()
plt.imshow(np.squeeze(X_example),cmap='gray')
plt.axis('off')
plt.show()

Your turn:

Explore the dimensionalities of X_example relative to X_test[0] and np.squeeze(X_example).

In [15]:
print('The dimensions of X_example are '+str(X_example.shape))
print('The dimensions of X_test[0] are '+str(X_test[0].shape))
print('The dimensions of np.squeeze(X_example) are '+str(np.squeeze(X_example).shape))
The dimensions of X_example are (1, 28, 28, 1)
The dimensions of X_test[0] are (28, 28, 1)
The dimensions of np.squeeze(X_example) are (28, 28)

Section 2.2 Modify model to output activations after the first conv layer

Now, we modify our model1 to output the activations after the first convolutional layer. We use the generic Model class from keras and specify the same inputs as model1, but specify the output to be after the zeroth layer. This is where the print_shapes and print_params functions can be very helpful to determine which layer you actually want to specify as output. We call this new model model1_layer0 to designate that it is the same as model, but outputing information after layer 0, i.e., the first convolutional layer.

In [16]:
model1_layer0 = Model(inputs=model1.inputs, outputs=model1.layers[0].output)

Now if we ask for the prediction of the model for X_example, the model will output the activations at the first convolutional layer instead of the activations at the final softmax layer.

In [17]:
layer0_activations = model1_layer0.predict(X_example)

Your turn:

Based on our textual summaries of this network, we expect that the output should be of shape $1\times26\times26\times32$. Check the dimensionality and variable type of layer0_activations, as well as the intensity range.

In [18]:
print('The shape is: '+str(layer0_activations.shape))
print('The variable type is: '+str(layer0_activations.dtype))
print('The intensity range is ['+str(layer0_activations.min())+','+\
      str(layer0_activations.max())+']')
The shape is: (1, 26, 26, 32)
The variable type is: float32
The intensity range is [0.0,0.6418904]

Section 2.3 Visualizing the 32 activations of the first conv layer

We can loop over the 32 activations and plot each. plt.imshow will, by default, choose an intensity range to match that of the input image. This can make it difficult to compare between activations: a bright pixel in one image might actually be more activated than in another. We can force the plots to be on the same scale of intensities by passing the minimum and maximum intensities to plt.imshow using the vmin and vmax options.

In [19]:
plt.figure(figsize=(7,7))
min_int = layer0_activations.min() # find min intensity for all activations
max_int = layer0_activations.max() # find max intensity for all activations
subplot_rows = np.ceil(np.sqrt(layer0_activations.shape[-1])).astype(int) # determine subplots rows
for f in range(0,layer0_activations.shape[-1]): # loop over filters
    plt.subplot(subplot_rows,subplot_rows,f+1) # choose current subplot
    plt.imshow(np.squeeze(layer0_activations[:,:,:,f]),cmap='gray',\
               vmin=min_int,vmax=max_int) # plot activations
    plt.axis('off')
In [20]:
plt.figure(figsize=(7,7))
min_int = layer0_activations.min() # find min intensity for all activations
max_int = layer0_activations.max() # find max intensity for all activations
subplot_rows = np.ceil(np.sqrt(layer0_activations.shape[-1])).astype(int) # determine subplots rows
for f in range(0,layer0_activations.shape[-1]): # loop over filters
    plt.subplot(subplot_rows,subplot_rows,f+1) # choose current subplot
    plt.imshow(np.squeeze(layer0_activations[:,:,:,f]),cmap='gray',\
               vmin=min_int,vmax=max_int) # plot activations
    plt.axis('off')

We see that there are some filters that respond to the entire digit, some that respond only the the horizontal stroke of the digit, some that respond only to the vertical stroke, and we may even see some that don't respond at all. These filters that don't respond may be tuned for shapes (e.g., curves) that don't appear in the digit 7.

Here is how you can get at the actual filter weights in any given layer. This example is for the first convolutional layer. The zeroth element in the weights attribute are the filter weights an the first element are the biases. Note that the weights come out as a tensorflow tensor tf.Tensor. This variable type could be formatted into a differen variable type if desired.

In [21]:
layer0_weights = model1.layers[0].weights
filters = layer0_weights[0]
for i in range(32):
    print(filters[:,:,:,i])
tf.Tensor(
[[[-0.11928951]
  [-0.11802406]
  [-0.08555958]]

 [[ 0.03568331]
  [-0.10645144]
  [ 0.08209385]]

 [[ 0.089081  ]
  [ 0.16471936]
  [ 0.10030998]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.13069217]
  [ 0.13711627]
  [-0.02428622]]

 [[ 0.17016223]
  [ 0.01591511]
  [-0.1747878 ]]

 [[ 0.03293896]
  [-0.10402777]
  [ 0.00938711]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[-0.07189089]
  [ 0.02260103]
  [ 0.17485012]]

 [[-0.01281643]
  [ 0.00364695]
  [-0.01955995]]

 [[ 0.08354465]
  [ 0.14626887]
  [-0.06975073]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[-0.01558521]
  [ 0.09532611]
  [ 0.11297574]]

 [[ 0.12708496]
  [ 0.16442516]
  [ 0.11896724]]

 [[ 0.01993517]
  [-0.0076207 ]
  [ 0.04706249]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.0201411 ]
  [ 0.14947397]
  [ 0.1127718 ]]

 [[-0.11429866]
  [-0.13874073]
  [ 0.09261933]]

 [[ 0.02624158]
  [-0.14107576]
  [-0.07730232]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.02571716]
  [ 0.0283037 ]
  [-0.1060114 ]]

 [[ 0.19133383]
  [-0.02391103]
  [-0.12858815]]

 [[ 0.20242667]
  [-0.07441299]
  [-0.14733131]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[-0.00099031]
  [-0.04039461]
  [-0.05412214]]

 [[ 0.03504249]
  [-0.0355735 ]
  [ 0.0840058 ]]

 [[ 0.03955384]
  [ 0.18248433]
  [ 0.17918624]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[-0.0710971 ]
  [-0.08397201]
  [ 0.15953107]]

 [[ 0.10340404]
  [ 0.16246542]
  [ 0.08614749]]

 [[-0.03784866]
  [ 0.10818982]
  [ 0.06632348]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[-0.17951363]
  [-0.21614301]
  [-0.04647693]]

 [[ 0.0538866 ]
  [ 0.11447871]
  [ 0.11052182]]

 [[ 0.05876677]
  [ 0.16266273]
  [ 0.13459352]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.21858856]
  [ 0.02572625]
  [ 0.01998409]]

 [[-0.05627529]
  [ 0.00217793]
  [ 0.17738932]]

 [[-0.06833854]
  [-0.12076717]
  [-0.14646263]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.06945894]
  [ 0.09935874]
  [-0.09441872]]

 [[ 0.07620844]
  [ 0.00798459]
  [-0.14422807]]

 [[ 0.12438419]
  [-0.00169664]
  [-0.16155717]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.08279163]
  [ 0.17983888]
  [ 0.04031287]]

 [[ 0.06238161]
  [-0.10075922]
  [ 0.00945941]]

 [[ 0.02365316]
  [ 0.01150253]
  [-0.05261592]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.08195013]
  [-0.0558096 ]
  [ 0.0018133 ]]

 [[ 0.14962088]
  [-0.02291837]
  [-0.02490019]]

 [[ 0.15442675]
  [-0.05539534]
  [ 0.07519206]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.04997068]
  [ 0.06712386]
  [-0.18678527]]

 [[ 0.14113931]
  [ 0.14077549]
  [-0.1154337 ]]

 [[ 0.07792388]
  [ 0.05804091]
  [-0.08122236]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.15674159]
  [-0.00533815]
  [-0.1332218 ]]

 [[ 0.04452355]
  [ 0.03193785]
  [-0.13914524]]

 [[ 0.0210743 ]
  [-0.20080926]
  [-0.19441718]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.0931642 ]
  [ 0.14004625]
  [-0.00375825]]

 [[ 0.06369194]
  [ 0.09199505]
  [ 0.01160289]]

 [[-0.02002174]
  [ 0.04656243]
  [ 0.03530259]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[-0.14811073]
  [-0.09461611]
  [-0.08681727]]

 [[-0.10723176]
  [-0.02693634]
  [ 0.18912886]]

 [[ 0.0749003 ]
  [ 0.21311341]
  [ 0.00987586]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.14768513]
  [ 0.0893127 ]
  [ 0.06691834]]

 [[-0.02841202]
  [ 0.03625124]
  [ 0.09278807]]

 [[-0.12048357]
  [-0.14490303]
  [-0.06575694]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.18940216]
  [ 0.0613969 ]
  [-0.02644119]]

 [[-0.03045674]
  [ 0.07456902]
  [-0.1503751 ]]

 [[ 0.00809097]
  [-0.18287216]
  [-0.17646405]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.04735795]
  [ 0.08879718]
  [-0.27213305]]

 [[-0.01993781]
  [ 0.06715427]
  [-0.08143204]]

 [[ 0.08665308]
  [ 0.14838606]
  [ 0.19056866]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.0889265 ]
  [-0.126452  ]
  [-0.03017733]]

 [[ 0.0344901 ]
  [ 0.03510329]
  [ 0.09697086]]

 [[-0.14392672]
  [ 0.08477463]
  [ 0.05434055]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.05635219]
  [ 0.05565791]
  [ 0.04102019]]

 [[ 0.16129413]
  [ 0.15658869]
  [ 0.14488545]]

 [[ 0.10280994]
  [-0.1136065 ]
  [-0.14019181]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.12227925]
  [-0.15830491]
  [-0.11229364]]

 [[ 0.14177567]
  [-0.13639277]
  [ 0.07016367]]

 [[ 0.19158837]
  [ 0.11746981]
  [-0.0853593 ]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[-0.13725968]
  [ 0.01408892]
  [ 0.13294005]]

 [[-0.15244223]
  [-0.01614482]
  [ 0.07097559]]

 [[-0.19415273]
  [ 0.02358678]
  [ 0.18933882]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.13221808]
  [-0.13895324]
  [-0.11520083]]

 [[ 0.09102837]
  [-0.04139049]
  [-0.1071979 ]]

 [[ 0.15875573]
  [ 0.07612811]
  [-0.0024398 ]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.02338754]
  [ 0.07153843]
  [ 0.17767   ]]

 [[-0.10913194]
  [ 0.04612751]
  [-0.00323688]]

 [[-0.24784087]
  [ 0.01738954]
  [ 0.07893877]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.05672454]
  [ 0.04985772]
  [ 0.13715975]]

 [[ 0.11659484]
  [ 0.05991041]
  [-0.05635499]]

 [[-0.13822524]
  [-0.11396929]
  [ 0.03847733]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.12763546]
  [ 0.0472966 ]
  [-0.14628851]]

 [[ 0.02299911]
  [ 0.07491566]
  [-0.10978886]]

 [[ 0.17733066]
  [-0.05574615]
  [-0.1825821 ]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[-0.17182022]
  [-0.11344615]
  [-0.16466103]]

 [[ 0.05232147]
  [ 0.00067957]
  [-0.00694126]]

 [[ 0.13682301]
  [ 0.12626795]
  [ 0.00245567]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.11989044]
  [-0.02272124]
  [ 0.10521115]]

 [[-0.00727249]
  [ 0.04496015]
  [ 0.04048687]]

 [[ 0.13809517]
  [ 0.07028908]
  [ 0.17253952]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.06868175]
  [ 0.06445747]
  [ 0.08038477]]

 [[-0.09493048]
  [-0.03530069]
  [-0.04194488]]

 [[-0.13698152]
  [ 0.08415607]
  [ 0.10913029]]], shape=(3, 3, 1), dtype=float32)
tf.Tensor(
[[[ 0.12565301]
  [ 0.09576475]
  [ 0.17032366]]

 [[ 0.04568217]
  [-0.05835482]
  [-0.142061  ]]

 [[-0.20830888]
  [-0.06609122]
  [-0.06356008]]], shape=(3, 3, 1), dtype=float32)

Your turn:

Explore the change in visualization if you do not use the vmin and vmax options above. For your convenience, the code from the cell above is copied below for you to modify.

In [23]:
plt.figure(figsize=(7,7))
subplot_rows = np.ceil(np.sqrt(layer0_activations.shape[-1])) # determine subplots rows
for f in range(0,layer0_activations.shape[-1]): # loop over filters
    plt.subplot(subplot_rows,subplot_rows,f+1) # choose current subplot
    plt.imshow(np.squeeze(layer0_activations[:,:,:,f]),cmap='gray')
    plt.axis('off')

Section 2.4 Modify model to visualize activations after the second conv layer

We can look at the activations of the second convolutional layer with a simple modification of the code above.

In [24]:
model1_layer1 = Model(inputs=model1.inputs, outputs=model1.layers[1].output)
layer1_activations = model1_layer1.predict(X_example)
plt.figure(figsize=(7,7))
min_int = layer1_activations.min() # find min intensity for all activations
max_int = layer1_activations.max() # find max intensity for all activations
subplot_rows = np.ceil(np.sqrt(layer1_activations.shape[-1])) # determine subplots rows
for f in range(0,layer1_activations.shape[-1]): # loop over filters
    plt.subplot(subplot_rows,subplot_rows,f+1) # choose current subplot
    plt.imshow(np.squeeze(layer1_activations[:,:,:,f]),cmap='gray',\
               vmin=min_int,vmax=max_int) # plot activations
    plt.axis('off')

We see that the second layer filters have gotten more specific in the structures to which they are responding. This is consistent with what we know about the hierarchical nature of feature learning in CNNs.

Section 2.5 Modify model to visualize activations after the max pool layer

Your turn:

Modify the code above to visualize the output after the max pool layer. For your convenience, the code from the cell above is copied below for you to modify. Note--you probably want to define a new variable for this model to avoid overwriting the other models from above.

In [25]:
model1_layer2 = Model(inputs=model1.inputs, outputs=model1.layers[2].output)
layer2_activations = model1_layer2.predict(X_example)
plt.figure(figsize=(7,7))
min_int = layer2_activations.min() # find min intensity for all activations
max_int = layer2_activations.max() # find max intensity for all activations
subplot_rows = np.ceil(np.sqrt(layer2_activations.shape[-1])) # determine subplots rows
for f in range(0,layer2_activations.shape[-1]): # loop over filters
    plt.subplot(subplot_rows,subplot_rows,f+1) # choose current subplot
    plt.imshow(np.squeeze(layer2_activations[:,:,:,f]),cmap='gray',\
               vmin=min_int,vmax=max_int) # plot filter coeffs
    plt.axis('off')

We note that the activations of the max pool layer are simply lower resolution representations of the second convolutional layer activations.

Section 2.6 Modify model to visualize activations of the fully connected layers

While the output of the flattened and fully connected layers are not images, we can visualize the activations by treating them like a one-row image. This can give us some insight into which neurons are responding the most to the digit 7.

The flattened layer

Since the dimensions of the flattened layer are $1\times4608$, we need to "stretch" out the pixels to actually be able to see them. We use the aspect parameter in plt.imshow to do this.

In [26]:
model1_layer3 = Model(inputs=model1.inputs, outputs=model1.layers[3].output)
layer3_activations = model1_layer3.predict(X_example)
plt.figure(figsize=(20,20))
plt.imshow(layer3_activations,cmap='gray',aspect=50) # plot filter coeffs
plt.axis('off')
plt.show()

This visualization may not be particularly elucidating, but we include it here for the sake of completeness. Note that this $1\times4608$ vector of activations is just a reshaping of the $32\times12\times12=4608$ pixels in the max pool activations.

The first fully connected layer

Since the dimensions of the first fully connected layer is only $1\times128$, we don't need to mess with the aspect ratio of the plt.imshow visualization.

In [27]:
model1_layer4 = Model(inputs=model1.inputs, outputs=model1.layers[4].output)
layer4_activations = model1_layer4.predict(X_example)
plt.figure(figsize=(20,20))
plt.imshow(layer4_activations,cmap='gray') # plot filter coeffs
plt.axis('off')
plt.show()

This visualization can be interpreted in some sense as some aggregate of features that this layer is cueing on from the max pool layer. We expect that different of these neurons will activate for different digits.

The second fully connected layer

Note that the second fully connected layer is also the softmax output layer. We leave the axis labels on here for more easy determination of which digit(s) the network is claiming probability for. We also put grid lines on the image to even better delineate the different digits.

In [28]:
model1_layer5 = Model(inputs=model1.inputs, outputs=model1.layers[5].output)
layer5_activations = model1_layer5.predict(X_example)
plt.figure(figsize=(20,20))
plt.imshow(layer5_activations,cmap='gray') # plot filter coeffs
#plt.axis('off')
plt.grid('on')
plt.show()

We notice that the output here is a very high confidence in the digit 7 and very little in other digits. This is consistent with the interpretation of the softmax output layer if we were to look at the actual probability values.

In [29]:
print(layer5_activations)
[[3.0017281e-06 5.4504127e-08 2.4596244e-04 5.6476460e-04 1.8830582e-09
  6.0545705e-07 1.1623571e-10 9.9909592e-01 4.1047962e-05 4.8641545e-05]]

Your turn:

Explore the activations of the network for other input images. For your convenience, code cells from above have been copied here for you to modify.

Defining a specific input image

In [30]:
X_example = X_test[3].reshape(1,28,28,1)
print('Original image')
plt.figure()
plt.imshow(np.squeeze(X_example),cmap='gray')
plt.axis('off')
plt.show()
Original image

Output of the first convolutional layer

In [31]:
print('First convolutional layer')
model1_layer0 = Model(inputs=model1.inputs, outputs=model1.layers[0].output)
layer0_activations = model1_layer0.predict(X_example)
plt.figure(figsize=(7,7))
min_int = layer0_activations.min() # find min intensity for all activations
max_int = layer0_activations.max() # find max intensity for all activations
subplot_rows = np.ceil(np.sqrt(layer0_activations.shape[-1])) # determine subplots rows
for f in range(0,layer0_activations.shape[-1]): # loop over filters
    plt.subplot(subplot_rows,subplot_rows,f+1) # choose current subplot
    plt.imshow(np.squeeze(layer0_activations[:,:,:,f]),cmap='gray',\
               vmin=min_int,vmax=max_int) # plot filter coeffs
    plt.axis('off')
First convolutional layer

Second convolutional layer

In [32]:
print('Second convolutional layer')
model1_layer1 = Model(inputs=model1.inputs, outputs=model1.layers[1].output)
layer1_activations = model1_layer1.predict(X_example)
plt.figure(figsize=(7,7))
min_int = layer1_activations.min() # find min intensity for all activations
max_int = layer1_activations.max() # find max intensity for all activations
subplot_rows = np.ceil(np.sqrt(layer1_activations.shape[-1])) # determine subplots rows
for f in range(0,layer1_activations.shape[-1]): # loop over filters
    plt.subplot(subplot_rows,subplot_rows,f+1) # choose current subplot
    plt.imshow(np.squeeze(layer1_activations[:,:,:,f]),cmap='gray',\
               vmin=min_int,vmax=max_int) # plot filter coeffs
    plt.axis('off')
Second convolutional layer

The flattened layer

In [33]:
print('The flattened layer')
model1_layer3 = Model(inputs=model1.inputs, outputs=model1.layers[3].output)
layer3_activations = model1_layer3.predict(X_example)
plt.figure(figsize=(20,20))
plt.imshow(layer3_activations,cmap='gray',aspect=50) # plot filter coeffs
plt.axis('off')
plt.show()
The flattened layer

The first fully connected layer

In [34]:
print('First fully connected layer')
model1_layer4 = Model(inputs=model1.inputs, outputs=model1.layers[4].output)
layer4_activations = model1_layer4.predict(X_example)
plt.figure(figsize=(20,20))
plt.imshow(layer4_activations,cmap='gray') # plot filter coeffs
plt.axis('off')
plt.show()
First fully connected layer

The second fully connected layer (the output layer)

In [36]:
print('Second fully connected layer (output layer)')
model1_layer5 = Model(inputs=model1.inputs, outputs=model1.layers[5].output)
layer5_activations = model1_layer5.predict(X_example)
plt.figure(figsize=(20,20))
plt.imshow(layer5_activations,cmap='gray') # plot filter coeffs
#plt.axis('off')
plt.grid('on')
plt.show()
Second fully connected layer (output layer)

Section 3 Inputting New and Different Data to the Trained Network

In this section, we'll explore the use of this trained network to operate on new data. We will use the my_digits1_compressed.jpg image provided as part of this tutorial.

In [37]:
I = imageio.imread('my_digits1_compressed.jpg')

plt.figure(figsize=(7,7))
plt.imshow(I,cmap='gray')
plt.show()

Note that image is an RGB image of 10 handwritten digits. You will wrangle this image into a format suitable for input to the MNIST network in this section.

Your turn:

You should have noticed that image is an RGB image of 10 handwritten digits. Use what you have learned in Tutorials 1, 2 and 3 to extract each of those 10 digits from the image and get it in the correct form to input to the MNIST network.

As a reminder you probably want to pay attention to:

  • RGB versus gray
  • Variable type
  • Intensity range (Hint--you can invert the intensities and have light digits on a dark background by subtracting the image from the maximum intensity.)
  • Cropping indices (Hint--rows 295 through 445 and columns 1160 through 1310 will crop the digit 0)
  • Resizing
  • Correct tensor dimensions: recall that the network expects a tensor in the form samples$\times28\times28\times1$. In this case, you'll be providing only one sample, so you will need your input to be $1\times28\times28\times1$.

Use your extracted digits as input to the MNIST network model1. Does the network predict the correct label for the digit? What does the predicted softmax label tell you about the confidence in the prediction for this new image?

In [38]:
I_gray = skimage.color.rgb2gray(I) # convert to grayscale
I_gray = 1-I_gray # invert colors
I0 = I_gray[295:445,1160:1310] # crop out the digit 0
I0 = skimage.transform.resize(I0,(28,28)) # resize to 28x28

I1 = I_gray[355:505,2035:2190] 
I1 = skimage.transform.resize(I1,(28,28))

I2 = I_gray[425:625,2900:3100]
I2 = skimage.transform.resize(I2,(28,28))

I3 = I_gray[465:665,3775:3975]
I3 = skimage.transform.resize(I3,(28,28))

I4 = I_gray[1250:1400,1140:1290]
I4 = skimage.transform.resize(I4,(28,28))

I5 = I_gray[1270:1460,1950:2140]
I5 = skimage.transform.resize(I5,(28,28))

I6 = I_gray[1365:1515,2855:2995]
I6 = skimage.transform.resize(I6,(28,28))

I7 = I_gray[1375:1565,3705:3895]
I7 = skimage.transform.resize(I7,(28,28))

I8 = I_gray[1890:2090,1100:1300]
I8 = skimage.transform.resize(I8,(28,28))

I9 = I_gray[1915:2100,1890:2075]
I9 = skimage.transform.resize(I9,(28,28))

plt.figure()
plt.imshow(I9,cmap='gray')
plt.show()
In [39]:
print('Actual 0')
digit = I0.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 1')
digit = I1.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 2')
digit = I2.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 3')
digit = I3.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 4')
digit = I4.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 5')
digit = I5.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 6')
digit = I6.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)

print('Actual 7')
digit = I7.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 8')
digit = I8.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 9')
digit = I9.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
Actual 0
1/1 [==============================] - 0s 29ms/step
[[6.8588352e-01 8.9294204e-05 2.8592588e-02 1.1056284e-02 1.6317039e-04
  1.8102162e-02 4.2147860e-02 9.5154130e-05 2.0557204e-01 8.2979593e-03]]
0

Actual 1
1/1 [==============================] - 0s 1ms/step
[[0.02225103 0.47520384 0.03956151 0.07000417 0.01033422 0.04155826
  0.06538767 0.0103284  0.2567881  0.00858283]]
1

Actual 2
1/1 [==============================] - 0s 982us/step
[[1.31949689e-02 2.00228090e-03 5.25341094e-01 1.60992015e-02
  1.92626700e-04 8.34627077e-04 1.01408865e-02 2.72795733e-04
  4.30687636e-01 1.23385724e-03]]
2

Actual 3
1/1 [==============================] - 0s 2ms/step
[[5.5088840e-02 1.0979493e-03 6.2766701e-02 7.1404439e-01 3.0127668e-04
  2.0295974e-02 3.9182319e-03 1.7636752e-03 1.3292606e-01 7.7969101e-03]]
3

Actual 4
1/1 [==============================] - 0s 2ms/step
[[0.00449317 0.00260425 0.00868481 0.04395765 0.3183233  0.03924195
  0.00431868 0.00904973 0.2877898  0.28153667]]
4

Actual 5
1/1 [==============================] - 0s 2ms/step
[[5.7665119e-03 1.7849916e-04 9.4105694e-03 3.9598392e-03 4.1900723e-05
  8.2532495e-01 1.5064508e-02 1.2539045e-04 1.3682646e-01 3.3014147e-03]]
5

Actual 6
1/1 [==============================] - 0s 1ms/step
[[8.3931452e-03 4.7429978e-05 2.0042355e-03 5.4233207e-04 2.6819468e-04
  8.6273877e-03 5.2567339e-01 1.4837368e-05 4.5419329e-01 2.3568630e-04]]
6
Actual 7
1/1 [==============================] - 0s 1ms/step
[[1.4535265e-02 1.1916492e-02 1.5503377e-01 2.7446765e-01 3.5255362e-04
  5.7224068e-03 2.5718266e-03 9.0109944e-02 4.2829594e-01 1.6994102e-02]]
8

Actual 8
1/1 [==============================] - 0s 2ms/step
[[0.00280937 0.00272878 0.05847805 0.03561175 0.001444   0.01035202
  0.01397143 0.0010156  0.87139267 0.00219628]]
8

Actual 9
1/1 [==============================] - 0s 1ms/step
[[0.02064448 0.09211981 0.16881867 0.18561675 0.00585732 0.01686379
  0.00185522 0.286122   0.15169708 0.07040492]]
7

We see that the network is a bit less certain about this digit than the digits we looked at in Tutorial 2. This is not surprising considering that this data came from a completely different source.

The following code is just demonstration that you can combine the resize and reshape into one command if you so desire.

In [40]:
test = I_gray[295:445,1160:1310] # crop out the digit 0
test = skimage.transform.resize(test,(1,28,28,1)) # resize to 28x28
print(test.shape)
(1, 28, 28, 1)

This is a small demonstration of the ability for this network to correctly classify data from an entirely new source. We note, however, that the preparation of the data is critical for this success. If we were to pre-process the data in a manner not designed for the MNIST network, we might get very different results.

Your turn:

Repeat the above analysis, but keep the digit as dark on a light background.

In [41]:
I0 = 1 - I0
I1 = 1 - I1
I2 = 1 - I2
I3 = 1 - I3
I4 = 1 - I4
I5 = 1 - I5
I6 = 1 - I6
I7 = 1 - I7
I8 = 1 - I8
I9 = 1 - I9

plt.figure()
plt.imshow(I9,cmap='gray')
plt.show()
In [42]:
print('Actual 0')
digit = I0.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 1')
digit = I1.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 2')
digit = I2.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 3')
digit = I3.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 4')
digit = I4.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 5')
digit = I5.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 6')
digit = I6.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)

print('Actual 7')
digit = I7.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 8')
digit = I8.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
print('')

print('Actual 9')
digit = I9.reshape(1, 28, 28, 1) # reshape to 1x28x28x1
Y = model1.predict(digit,verbose=1) # predict label
print(Y)
y = np.argmax(Y)
print(y)
Actual 0
1/1 [==============================] - 0s 17ms/step
[[1.7879015e-01 1.0775392e-03 4.1533150e-02 3.8683835e-01 4.7188296e-04
  2.7173072e-02 1.0522263e-01 3.1542531e-04 2.5347787e-01 5.0999611e-03]]
3

Actual 1
1/1 [==============================] - 0s 1ms/step
[[0.15352415 0.00519428 0.0460986  0.01866703 0.01560142 0.08057252
  0.13575493 0.0184795  0.51991373 0.00619381]]
8

Actual 2
1/1 [==============================] - 0s 1ms/step
[[0.06937138 0.0016258  0.39591628 0.08640119 0.00275283 0.01784105
  0.0181016  0.00171659 0.40358233 0.00269094]]
8

Actual 3
1/1 [==============================] - 0s 1ms/step
[[0.01207224 0.00104034 0.00974111 0.31430888 0.0006123  0.29656494
  0.01953802 0.00113817 0.34256437 0.00241961]]
8

Actual 4
1/1 [==============================] - 0s 1ms/step
[[0.03492652 0.02859496 0.08790826 0.24790956 0.00627503 0.16382834
  0.01825364 0.02637605 0.36372364 0.02220399]]
8

Actual 5
1/1 [==============================] - 0s 919us/step
[[3.1650923e-02 1.3236036e-04 5.6083077e-03 6.2008310e-02 7.5048767e-04
  4.0030685e-01 6.1103166e-03 3.3410531e-03 4.7141460e-01 1.8676784e-02]]
8

Actual 6
1/1 [==============================] - 0s 948us/step
[[8.7858617e-02 2.5425587e-04 2.7476188e-03 2.7944353e-01 5.3617731e-04
  1.7298260e-01 7.5791337e-02 1.6354371e-04 3.7863430e-01 1.5880676e-03]]
8
Actual 7
1/1 [==============================] - 0s 1ms/step
[[0.02107237 0.0008527  0.13839269 0.35405746 0.00270692 0.01733918
  0.00718368 0.06233457 0.37462932 0.02143106]]
8

Actual 8
1/1 [==============================] - 0s 1ms/step
[[0.07130016 0.00160166 0.02468756 0.04227049 0.00253682 0.0526606
  0.01373134 0.00716663 0.77491325 0.00913141]]
8

Actual 9
1/1 [==============================] - 0s 1ms/step
[[0.02789359 0.00249591 0.11879742 0.09317992 0.00181518 0.02467562
  0.03635905 0.00696428 0.68441904 0.00340001]]
8

We see that the network has now incorrectly classified each digit as a 3 or 8 (your mileage may vary depending on exactly how your network converged).

We note that the network is approximately equally certain that this digit is a 0, 3, 6, or 8. This uncertainty is common in situations where the network is exposed to data of a form it has never seen before.

By keeping the background light and the digit dark, the network interprets the background as the digit. It is not actually processing the information in the same way we interpret this image as the digit 0.

Section 4: The VGG16 Network

In this section, we will use what we have learned about deep learning and image processing to explore a common CNN network called VGG16. The VGG network is described in this paper: https://arxiv.org/abs/1409.1556 and is a common architecture for image classification. This network was trained to classify 1000 categories (https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).

Note--VGG16 is some 528 MB to download. This is a typical size for state-of-the-art CNNs. We actually downloaded these weights when we imported libraries (local machines) or activated the conda environment (HPC).

Section 4.1 Loading the VGG Network Trained on ImageNet

keras includes functions to load a VGG16 network with additional options to download a "pretrained" network--i.e., one that has been trained on ImageNet. ImageNet is a database of millions of images (see http://www.image-net.org/) spanning thousands of categories.

The code below will load the VGG16 network, trained on ImageNet. The first time this code is run, the trained network will be downloaded. Subsequent times, the trained network will be loaded from the local disk. This network is very large as we will see shortly, so it may take some time to download.

Similar to how we saved our MNIST model at the end of Tutorial 3 and loaded it at the beginning of this tutorial, we are loading a VGG16 model that has already been trained on the millions of images and 1000 categories of ImageNet. It is no trivial task to train a network the size of VGG16 (weeks on a multiple GPUs), so we want to leverage the work that has already been done.

In [43]:
model_vgg = vgg16.VGG16(include_top=True,weights='imagenet')

We can use print_shapes and print_params to explore the structure of the VGG16 model.

In [44]:
print_shapes(model_vgg)
Layer Name		Type		Input Shape		Output Shape	Trainable
block1_conv1		Conv2D		(None, 224, 224, 3)	(None, 224, 224, 64)	True
block1_conv2		Conv2D		(None, 224, 224, 64)	(None, 224, 224, 64)	True
block1_pool		MaxPooling2D	(None, 224, 224, 64)	(None, 112, 112, 64)
block2_conv1		Conv2D		(None, 112, 112, 64)	(None, 112, 112, 128)	True
block2_conv2		Conv2D		(None, 112, 112, 128)	(None, 112, 112, 128)	True
block2_pool		MaxPooling2D	(None, 112, 112, 128)	(None, 56, 56, 128)
block3_conv1		Conv2D		(None, 56, 56, 128)	(None, 56, 56, 256)	True
block3_conv2		Conv2D		(None, 56, 56, 256)	(None, 56, 56, 256)	True
block3_conv3		Conv2D		(None, 56, 56, 256)	(None, 56, 56, 256)	True
block3_pool		MaxPooling2D	(None, 56, 56, 256)	(None, 28, 28, 256)
block4_conv1		Conv2D		(None, 28, 28, 256)	(None, 28, 28, 512)	True
block4_conv2		Conv2D		(None, 28, 28, 512)	(None, 28, 28, 512)	True
block4_conv3		Conv2D		(None, 28, 28, 512)	(None, 28, 28, 512)	True
block4_pool		MaxPooling2D	(None, 28, 28, 512)	(None, 14, 14, 512)
block5_conv1		Conv2D		(None, 14, 14, 512)	(None, 14, 14, 512)	True
block5_conv2		Conv2D		(None, 14, 14, 512)	(None, 14, 14, 512)	True
block5_conv3		Conv2D		(None, 14, 14, 512)	(None, 14, 14, 512)	True
block5_pool		MaxPooling2D	(None, 14, 14, 512)	(None, 7, 7, 512)
flatten		Flatten		(None, 7, 7, 512)	(None, 25088)
fc1			Dense		(None, 25088)		(None, 4096)	True
fc2			Dense		(None, 4096)		(None, 4096)	True
predictions			Dense		(None, 4096)		(None, 1000)	True
In [45]:
print_params(model_vgg)
Layer Name		Type		Filter shape		# Parameters	Trainable
block1_conv1		Conv2D		(3, 3, 3, 64)		1792	True
block1_conv2		Conv2D		(3, 3, 64, 64)		36928	True
block1_pool		MaxPooling2D	---------------		---
block2_conv1		Conv2D		(3, 3, 64, 128)		73856	True
block2_conv2		Conv2D		(3, 3, 128, 128)		147584	True
block2_pool		MaxPooling2D	---------------		---
block3_conv1		Conv2D		(3, 3, 128, 256)		295168	True
block3_conv2		Conv2D		(3, 3, 256, 256)		590080	True
block3_conv3		Conv2D		(3, 3, 256, 256)		590080	True
block3_pool		MaxPooling2D	---------------		---
block4_conv1		Conv2D		(3, 3, 256, 512)		1180160	True
block4_conv2		Conv2D		(3, 3, 512, 512)		2359808	True
block4_conv3		Conv2D		(3, 3, 512, 512)		2359808	True
block4_pool		MaxPooling2D	---------------		---
block5_conv1		Conv2D		(3, 3, 512, 512)		2359808	True
block5_conv2		Conv2D		(3, 3, 512, 512)		2359808	True
block5_conv3		Conv2D		(3, 3, 512, 512)		2359808	True
block5_pool		MaxPooling2D	---------------		---
flatten		Flatten		---------------		---
fc1			Dense		(25088, 4096)		102764544	True
fc2			Dense		(4096, 4096)		16781312	True
predictions			Dense		(4096, 1000)		4097000	True
---------------
Total trainable parameters: 138357544
Total untrainable parameters: 0
Total parameters: 138357544

There are over 138 million parameters in this network!!! It has many more layers than the simple MNIST network that we have been working with.

Section 4.2: Classification Capabilities of the VGG16 Network

So, what happens if we show a new image to this network? Let's see what happens if we show it the cameraman.png image.
This example is adapted from https://towardsdatascience.com/keras-transfer-learning-for-beginners-6c9b8b7143e.

In the code below, we are leveraging many built-in keras functions, including

  • load_img from keras to read in an image while resizing to the expected input size of $224\times224$
  • preprocess_input specific to the VGG16 model in keras to scale the intensities to the expected range. I'm unsure of the details of this, but I do know that it's a process more complicated than a simple intensity scaling (since it results in negative intensities). Further details are likely in the VGG paper (https://arxiv.org/abs/1409.1556), but can be difficult to find sometimes.
  • decode_predictions specific to the VGG16 model in keras to find the top three confidences and map those to the class labels
In [46]:
# Example adapted from https://towardsdatascience.com/keras-transfer-learning-for-beginners-6c9b8b7143e

# load an image from file
image = load_img('cameraman.png', target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# predict the probability across all output classes
yhat = model_vgg.predict(image)
# convert the probabilities to class labels
label = decode_predictions(yhat)
# retrieve the most likely result, e.g. highest probability
for k in range(0,3):
    labelk = label[0][k]
    # print the classification
    print('%s (%.2f%%)' % (labelk[1], labelk[2]*100))
tripod (99.65%)
crutch (0.12%)
harmonica (0.09%)

We notice that the network's performance on this image is pretty decent, specifying that it believes the image is one of a "tripod." There is not a "cameraman" category in ImageNet (https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), so the network chose the most likely category from the available ones.

Here, we repeat the above for the peppers.png image.

In [47]:
# Example adapted from https://towardsdatascience.com/keras-transfer-learning-for-beginners-6c9b8b7143e

# load an image from file
image = load_img('peppers.png', target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# predict the probability across all output classes
yhat = model_vgg.predict(image)
# convert the probabilities to class labels
label = decode_predictions(yhat)
# retrieve the most likely result, e.g. highest probability
for k in range(0,3):
    labelk = label[0][k]
    # print the classification
    print('%s (%.2f%%)' % (labelk[1], labelk[2]*100))
bell_pepper (89.12%)
cucumber (5.36%)
grocery_store (1.27%)

We find that the network's performance on this model is also very good, specifying that the image is of a "bell pepper".

Both the cameraman and peppers image contain objects similar to those that the VGG16 network encountered in the ImageNet database. As such, it does a remarkably good job of classifying those images.

What happens if the network encounters something really different than what it's seen before? What if you show it an image of something not included in the 1000 classes of objects? The image latest_256_0193.jpg image is an image of the Sun at a wavelength of 193 angsgtroms from NASA's Solar Dynamics Observatory satellite (https://sdo.gsfc.nasa.gov/data/)

In [48]:
# Example adapted from https://towardsdatascience.com/keras-transfer-learning-for-beginners-6c9b8b7143e

# load an image from file
image = load_img('latest_256_0193.jpg', target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# predict the probability across all output classes
yhat = model_vgg.predict(image)
# convert the probabilities to class labels
label = decode_predictions(yhat)
# retrieve the most likely result, e.g. highest probability
for k in range(0,3):
    labelk = label[0][k]
    # print the classification
    print('%s (%.2f%%)' % (labelk[1], labelk[2]*100))

I = imageio.imread('latest_256_0193.jpg')
plt.figure(figsize=(5,5))
plt.imshow(I)
plt.title('latest_256_0193.jpg')
plt.show()
tick (15.11%)
French_loaf (14.08%)
nail (9.86%)

The network is torn between classifying this image of the sun as a "tick", "French loaf", or a "nail". Hmmm.... Things aren't looking so good anymore. But we must remember that we never showed the network ground truth of the sun in 193 angstroms during training. We can't really expect that it can jump to that conclusion.

Your turn:

What class does the VGG16 network think your data belong to? If you don't have data with you, you can peruse the internet for images of something that you want to try classifying with the network. Just save the image to the same directory as this notebook and use the code above to classify it.

Your turn:

Using an image of your choice, use the methods we learned above to explore the workings of the VGG16 model.

4.3 Transfer Learning on the VGG16 Architecture

Here we show an example of how to perform transfer learning on the VGG16 architecture using the CalTech101 dataset as the new input data.

Similar to how we modified the model to output activations at certain layers bove, we can truncate the VGG16 model to any desired layer and then add on additional layers at our discretion. In this case, since we noted relatively good performance of the basic VGG16 architecture on images of similar appearance to the object categories in ImageNet, we expect that we won't need to change too much of the VGG16 architecture. In this example, we choose to modify only the final prediction layer.

It is also common practice to retrain on all the fully connected layers. Generally speaking, the further in appearance your data are from the ImageNet images, the further back in the architecture you probably want to retrain.

In the following code, we keep the entire VGG16 architecture up until the last fully connected layer (which also happens to be the output layer) and define a new model model2_vgg which consists only of those layers we want to keep.

In [49]:
model2_vgg = Model(inputs=model_vgg.input,outputs=model_vgg.layers[-2].output)

Now we need to add in at least a new prediction layer as the final layer. We could also add additional layers within the network if we thought they were needed. Since the CalTech101 dataset has 101 classes, we need the final fully connected layer to have 101 nodes and a softmax activation.

In [50]:
new_output = model2_vgg.output # take the output as currently defined
new_output = Dense(101,activation='softmax')(new_output) # operate on that output with another dense layer
model2_vgg = Model(inputs=model2_vgg.input,outputs=new_output) # define a new model with the new output

Up to this point we have defined a new architecture, where we amputated the final fully connected layer and stitched back on a new one. If we look at the layers of this new model using the modified print_params function,

In [51]:
print_params(model2_vgg)
Layer Name		Type		Filter shape		# Parameters	Trainable
block1_conv1		Conv2D		(3, 3, 3, 64)		1792	True
block1_conv2		Conv2D		(3, 3, 64, 64)		36928	True
block1_pool		MaxPooling2D	---------------		---
block2_conv1		Conv2D		(3, 3, 64, 128)		73856	True
block2_conv2		Conv2D		(3, 3, 128, 128)		147584	True
block2_pool		MaxPooling2D	---------------		---
block3_conv1		Conv2D		(3, 3, 128, 256)		295168	True
block3_conv2		Conv2D		(3, 3, 256, 256)		590080	True
block3_conv3		Conv2D		(3, 3, 256, 256)		590080	True
block3_pool		MaxPooling2D	---------------		---
block4_conv1		Conv2D		(3, 3, 256, 512)		1180160	True
block4_conv2		Conv2D		(3, 3, 512, 512)		2359808	True
block4_conv3		Conv2D		(3, 3, 512, 512)		2359808	True
block4_pool		MaxPooling2D	---------------		---
block5_conv1		Conv2D		(3, 3, 512, 512)		2359808	True
block5_conv2		Conv2D		(3, 3, 512, 512)		2359808	True
block5_conv3		Conv2D		(3, 3, 512, 512)		2359808	True
block5_pool		MaxPooling2D	---------------		---
flatten		Flatten		---------------		---
fc1			Dense		(25088, 4096)		102764544	True
fc2			Dense		(4096, 4096)		16781312	True
dense_1			Dense		(4096, 101)		413797	True
---------------
Total trainable parameters: 134674341
Total untrainable parameters: 0
Total parameters: 134674341

we see that all the layers have attribute trainable set to True. This means that if we were to train this new model, we will train all 138 million parameters. We don't want to do this.

We want to "freeze" the parameters for all layers except that new one that we added. To do this, we set the trainable attribute of all layers we wish to freeze to False.

In [54]:
for layer in model2_vgg.layers[:-1]:
    layer.trainable=False

Now if we check the trainability of the layers, we see that all layers except the final layer are frozen, and we will be training (learning) only some 413,797 parameters.

In [55]:
print_params(model2_vgg)
Layer Name		Type		Filter shape		# Parameters	Trainable
block1_conv1		Conv2D		(3, 3, 3, 64)		1792	False
block1_conv2		Conv2D		(3, 3, 64, 64)		36928	False
block1_pool		MaxPooling2D	---------------		---
block2_conv1		Conv2D		(3, 3, 64, 128)		73856	False
block2_conv2		Conv2D		(3, 3, 128, 128)		147584	False
block2_pool		MaxPooling2D	---------------		---
block3_conv1		Conv2D		(3, 3, 128, 256)		295168	False
block3_conv2		Conv2D		(3, 3, 256, 256)		590080	False
block3_conv3		Conv2D		(3, 3, 256, 256)		590080	False
block3_pool		MaxPooling2D	---------------		---
block4_conv1		Conv2D		(3, 3, 256, 512)		1180160	False
block4_conv2		Conv2D		(3, 3, 512, 512)		2359808	False
block4_conv3		Conv2D		(3, 3, 512, 512)		2359808	False
block4_pool		MaxPooling2D	---------------		---
block5_conv1		Conv2D		(3, 3, 512, 512)		2359808	False
block5_conv2		Conv2D		(3, 3, 512, 512)		2359808	False
block5_conv3		Conv2D		(3, 3, 512, 512)		2359808	False
block5_pool		MaxPooling2D	---------------		---
flatten		Flatten		---------------		---
fc1			Dense		(25088, 4096)		102764544	False
fc2			Dense		(4096, 4096)		16781312	False
dense_1			Dense		(4096, 101)		413797	True
---------------
Total trainable parameters: 413797
Total untrainable parameters: 134260544
Total parameters: 134674341

Now we need to point out new architecture at the CalTech101 data in order to train that final layer. We again use built-in functions in keras to handle the flow of data through the network. With MNIST we could fit all the training images in one array in memory and grab batches from there. With a larger dataset like CalTech101, we begin to lose our ability to fit everything in memory and instead will read image from the specified directory into batches at training time.

The code below uses an ImageDataGenerator class from keras and defines the preprocessing applied to that image as the preprocess_input function we already used above. Recall that this is a preprocessing function defined specifically for the VGG16 network.

Next, we use the flow_from_directory function of the ImageDataGenerator to define a flow of images from a specified directory. It is assumed that the specified directory contains a subdirectory for each class. Other options specified for this function are

  • target_size which specifies the spatial dimensions to which all input data will be resized
  • color_mode which specifies that these images should be treated as RGB images. These built-in functions are nice to use since they (often, not always) have niceties associated with taking care of things like converting grayscale images to the correct dimensionality.
  • batch_size the number of images to process per batch in the training
  • class_mode which specifies that this is a multi-class classification problem
  • shuffle which specifies that the batches will be selected randomly rather than in alphanumerical order

There are other options available, see help(train_datagen.flow_from_directory).

In [56]:
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory('101_ObjectCategories',\
                                                    target_size=(224,224), color_mode='rgb',\
                                                    batch_size=32, class_mode='categorical',\
                                                    shuffle=True)
Found 8677 images belonging to 101 classes.

Now we compile the model and specify the same options as we used for the MNIST network.

In [57]:
model2_vgg.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Now we can train the model. Since we don't have the entire training data to feed to the training, we instead invoke the fit_generator method which can utilize the train_generator we define above. Additionally, the fit_generator takes a steps_per_epoch option rather than a batch_size option. We define the steps_per_epoch as the number of images over which we will train divided by the batch size.

This code will take a while. It took about 30 minutes per epoch on 6 4.8 GHz Intel i9 processors. In most applications, we would train this over multiple epochs to boost the accuracy even higher. Here we train for one epoch to limit the computational time. After one epoch, the network reported accuracy above 80%.

In [58]:
step_size_train = train_generator.n//train_generator.batch_size # the // does a floor after division
model2_vgg.fit_generator(generator=train_generator, steps_per_epoch=step_size_train, epochs=2, verbose=1)
Epoch 1/2
271/271 [==============================] - 21s 78ms/step - loss: 0.8273 - accuracy: 0.8242
Epoch 2/2
271/271 [==============================] - 20s 74ms/step - loss: 0.1062 - accuracy: 0.9675
Out[58]:
<keras.callbacks.callbacks.History at 0x7ff9582f6e90>

Your turn:

Using an image of your choice from CalTech101, use the methods we learned above to explore the workings of the new transfer-learned VGG16 model.

Since the decode_predictions method is specific to VGG16, it will complain if asked to operate on the outputs of this modified architecture. Instead, we will determine the class label by using the argmax of the output yhat and by asking the model for a dictionary of the labels with train_generator.class_indices.

In [102]:
# Example adapted from https://towardsdatascience.com/keras-transfer-learning-for-beginners-6c9b8b7143e

# load an image from file
image = load_img('101_ObjectCategories/emu/image_0001.jpg', target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# predict the probability across all output classes
yhat = model2_vgg.predict(image)
# convert the probabilities to class labels
label_list_str_to_num = train_generator.class_indices
label_list_num_to_str = {v: k for k, v in label_list_str_to_num.items()}
ksorted = np.argsort(yhat[0])
# retrieve the most likely result, e.g. highest probability
for k in ksorted[-1:-3:-1]:
    # print the classification
    print('%s (%.2f%%)' % (label_list_num_to_str[k], yhat[0][k]))
emu (0.99%)
rooster (0.00%)
In [ ]: