

Sunday, November 5, 2017

Tensorflow : Retraining Inception V3 model to classify custom objects

In this tutorial we will see how to retrain the Inception model to classify custom objects. We will also look at saving model checkpoint files and making effective use of TensorBoard.

Like most of you, I was initially confused about what the Inception model really does. Does it have object detection ability, or does it just do image classification?

The answer: the Inception model is for classification only, NOT for object detection. For object detection, Google provides the Object Detection API, a library which can detect all trained objects in a single image. The Inception model, on the other hand, can only be used to classify an image. Hope the difference between detection and classification is clear.

In this tutorial, we are going to explore Inception model retraining only.

With the launch of the Inception V3 model, we should thank Google for saving us a lot of computation time by providing the ability to retrain an existing model. Building a model from scratch requires far more GPU computation and time. With a pre-trained model, we can retrain just the final layer on our custom image classes, which takes much less time than training from scratch.
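To make the "retrain only the final layer" idea concrete, here is a minimal sketch using tf.keras (this is an illustration of the concept, not the actual retraining script used below): the pre-trained Inception V3 base is frozen and only a new 3-class softmax layer is trainable. `weights=None` is used here just to avoid a download; in practice you would use `weights="imagenet"`.

```python
import tensorflow as tf

# Pre-trained feature extractor; include_top=False drops the old
# classification layer, pooling="avg" gives one vector per image.
base = tf.keras.applications.InceptionV3(
    weights=None, include_top=False, pooling="avg")
base.trainable = False  # freeze every pre-trained layer

# Only this new final layer (3 custom classes) will be trained.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

print(model(tf.zeros([1, 299, 299, 3])).shape)  # (1, 3)
```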

Firstly, how does retraining an existing model work?

To understand how it works, you need to know the concept of bottlenecks in TensorFlow. The last-but-one layer of the neural network is trained to give out different values based on the image it receives. This layer holds enough summarized information for the next layer to perform the actual classification. This last-but-one layer is called the bottleneck.

TensorFlow computes all the bottleneck values as the first step in training. The bottleneck values are then cached, since they are needed on every training iteration. Computing these values is fast because TensorFlow uses the existing pre-trained model for the forward pass.
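The caching described above can be sketched in a few lines (function and path names here are illustrative, not the ones retrain.py actually uses): each image goes through the expensive forward pass once, and every later training step reuses the saved vector.

```python
import os
import numpy as np

def get_bottleneck(image_id, image, bottleneck_fn, cache_dir="bottlenecks"):
    """Return the cached bottleneck vector for image_id, computing it on first use."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, f"{image_id}.npy")
    if os.path.exists(path):
        return np.load(path)        # cheap: reused on every later iteration
    vec = bottleneck_fn(image)      # expensive: one forward pass through the network
    np.save(path, vec)
    return vec
```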

How to Re-train Inception V3 model 

Before we start retraining the model, we should have the below things done.

1. Decide on the new classes or categories that need to be trained.

In this example, we are going to train 3 new categories. Let's take Obama, Trump, and George Bush images for training (a minimum of 50 images per category).

Create 3 folders, each holding the respective training images, inside a folder named "USPresidents".

2. Create a label file (.txt) containing the list of new categories to be retrained.
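Steps 1 and 2 together might look like this (folder and file names are just examples; what matters is one sub-folder per class, with the folder name acting as the label):

```shell
# One sub-folder per class inside USPresidents; drop ≥50 images into each.
mkdir -p USPresidents/obama USPresidents/trump USPresidents/george_bush

# A matching label file, one category per line.
printf 'obama\ntrump\ngeorge bush\n' > retrained_labels.txt

ls USPresidents
cat retrained_labels.txt
```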

3. Download the Inception V3 model from the below URL.

4. Download the retraining python file from the below link.

Run the downloaded file from the command prompt:

python retrain.py --model_dir ./inceptionModelFolderPath --image_dir ~/USPresidentsFolderPath --output_graph ./outputFolderPath --how_many_training_steps 500

--model_dir – The location of the pre-trained model (the model file we downloaded in step 3).
--image_dir – Path of the image folder created in step 1.
--output_graph – The location to store the newly trained graph.
--how_many_training_steps – The number of training iterations to perform. The default is 4000. Finding the right number is a trial-and-error process; once you find the best model, you can start using it.

Running the above script generates a graph definition file named output_graph.pb, which will be used later to test the retrained model.

Also, if you dig into the retraining script, you can find the tensor name of the last trained layer. Search for "final_tensor_name".

Testing the Re-Trained Model

To test the retrained model, take a sample image you want to classify and run the below python script.

Save this script as "" and run it from the command prompt:

C:\> python
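The script itself did not survive in this post, so here is a minimal sketch of what such a test script looks like (written with tf.compat.v1 so it also runs under TensorFlow 2.x; the file names in the usage comment are assumptions, so point them at your own output_graph.pb, label file, and sample image). "final_result" is the retrained output tensor, and "DecodeJpeg/contents" is the JPEG input tensor of the stock Inception V3 graph.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # the retrained graph is a TF1-style frozen graph

def load_graph(pb_path):
    """Read a frozen GraphDef (.pb) file and import it into a new Graph."""
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

def classify(pb_path, labels_path, image_path):
    """Run the retrained graph on one JPEG and print per-label scores."""
    graph = load_graph(pb_path)
    with open(labels_path) as f:
        labels = [line.strip() for line in f]
    with tf.io.gfile.GFile(image_path, "rb") as f:
        image_data = f.read()
    with tf.compat.v1.Session(graph=graph) as sess:
        preds = sess.run("final_result:0",
                         {"DecodeJpeg/contents:0": image_data})[0]
    for i in preds.argsort()[::-1]:  # highest score first
        print(f"{labels[i]}: {preds[i]:.4f}")

# Usage (file names are assumptions; substitute your own):
# classify("output_graph.pb", "retrained_labels.txt", "sample.jpg")
```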

You are done!

As a result, you will get the predictions for the given test image. You now have a custom image classifier that can tell who is in a given image. Is it Obama or Trump? Your retrained model can identify them for you.

If you want to debug the list of tensor names or operations, you can add the below lines after the tf.Session() is created.

 for op in sess.graph.get_operations():
     print(op.name)

This will print all the operations inside the retrained model. Before retraining, the output tensor was the original softmax; if you run this piece of code on the retrained graph, it will show that the retrained layer's output tensor is "final_result".

Also, you can use TensorBoard to dig further into each convolution layer and debug.

Enjoy !!

