Sunday, April 22, 2018

How to train for Tensorflow Object Detection API

1. How to use Tensorflow Object Detection API
2. How to train for Tensorflow Object Detection API
3. How to use Tensorboard
4. How to use a trained model of TF Detect in Android

First, you need TensorFlow 1.7.0 and CUDA 9.0. Make sure you have both of them installed.

Make own dataset


Now you need a dataset. I prepared 120 pictures of Thora Birch:

From these pictures, we will make a dataset. We will use labelImg to mark where her face is in each picture.
$ git clone https://github.com/tzutalin/labelImg.git
$ cd labelImg
$ python3.6 labelImg.py
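If labelImg does not start, you may need to install its dependencies first. On Ubuntu with Python 3 and Qt5, the labelImg README describes roughly these steps (check the README for your platform; these commands are not from the original post):
$ sudo apt-get install pyqt5-dev-tools
$ sudo pip3 install -r requirements/requirements-linux-python3.txt
$ make qt5py3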

labelImg will open. Select "Open Dir".

Choose the thora folder.

The images in the folder will be loaded.

From the menu at the top of the window, select "Edit" -> "Create RectBox" and draw a rectangle around her face. Use "thora" as the label name.

Then save it. It is saved as (picture name).xml. Do the same for her face in the other 119 pictures.

Make folders for the tensorflow training: ckpt, info, input, label, val_input, val_label.
(The dataset folder will be made later.)
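For example, assuming the workspace is at $HOME/Documents/workspace/tf (the same path used in the training command later in this post), the folders can be created like this:
$ mkdir -p $HOME/Documents/workspace/tf
$ cd $HOME/Documents/workspace/tf
$ mkdir ckpt info input label val_input val_label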

We need a .record file to train a model with the TensorFlow Object Detection API, so we will use "raccoon_dataset" to create it. Clone the project to your local machine.
$ git clone https://github.com/datitran/raccoon_dataset
$ mv ./raccoon_dataset ./dataset
$ cd ./dataset

Now you have a dataset directory. Look inside:
(I deleted the test files because they seemed unnecessary.)

Remove all files from "data", "training" and "images".

Move all of your pictures to the "images" folder.

Move all of the .xml files that were made by labelImg to the "annotations" folder.
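Assuming the pictures and their .xml files are still together in the thora folder (the path and extension below are just an example), the moves look something like this:
$ mv ~/Pictures/thora/*.jpg ./images/
$ mv ~/Pictures/thora/*.xml ./annotations/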

Open "xml_to_csv.py" and change the name of generated csv file.


It's time to convert the xml files to a csv file. Run this command:
$ python3.6 xml_to_csv.py

You should now have a csv file in the current folder:

Move it to the "data" folder:



We will convert the csv to a TensorFlow record file.
Open "generate_tfrecord.py" and change it as follows:

Then save the file.

Run the following command:
$ python3.6 generate_tfrecord.py
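Depending on the version of the script, the csv path and the output path may have to be passed as flags instead of being hard-coded; in that case the command looks more like this (the file names follow the ones assumed above):
$ python3.6 generate_tfrecord.py --csv_input=data/thora_labels.csv --output_path=thora.record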

This will generate the "thora.record" file (which is a TensorFlow record file).
Copy thora.record into the "input" and "val_input" folders.


Now the record files are ready.

Now we will make .pbtxt files to define the labels. Create a "label.pbtxt" file in both the "label" and "val_label" folders. The content of the two .pbtxt files is the same. Write the following in both files:
item {
  id: 1
  name: 'thora'
}
(id: 0 is a placeholder, so you always need to start with id: 1.)
You can find more examples of label maps in the Object Detection API repository (object_detection/data).

Config file


Create a "tf.config" text file.


You can see sample files here. I used "ssd_mobilenet_v1_pets.config" and customized it a bit. (I'm not sure whether this is the right choice, though.)

Search for "PATH_TO_BE_CONFIGURED" in the config file.

Then configure each of those values:

You may also need to decrease the number of training steps; 200k steps would be too many for a test.


Also change the number of classes to 1, because we have only the "thora" class (the background class is not counted).
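As a rough sketch (not my exact config), the PATH_TO_BE_CONFIGURED fields and the values mentioned above can be filled in like this, assuming the folder layout created earlier under $HOME/Documents/workspace/tf; replace <user> with your user name, since the config file needs absolute paths:
model {
  ssd {
    num_classes: 1  # only the "thora" class
    # (other model settings left as in the sample config)
  }
}
train_config: {
  fine_tune_checkpoint: "/home/<user>/Documents/workspace/tf/ckpt/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 10000  # much less than the default 200k, enough for a test
  # (other training settings left as in the sample config)
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "/home/<user>/Documents/workspace/tf/input/thora.record"
  }
  label_map_path: "/home/<user>/Documents/workspace/tf/label/label.pbtxt"
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "/home/<user>/Documents/workspace/tf/val_input/thora.record"
  }
  label_map_path: "/home/<user>/Documents/workspace/tf/val_label/label.pbtxt"
}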


If you download a pre-trained model here, you can use it for fine-tuning. I used "ssd_mobilenet_v1_coco_11_06_2017":
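For reference, a sketch of downloading the archive and unpacking its checkpoint files into the ckpt folder (the URL follows the detection model zoo naming, so double-check it on the download page):
$ cd $HOME/Documents/workspace/tf/ckpt
$ wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz
$ tar -xzf ssd_mobilenet_v1_coco_11_06_2017.tar.gz
$ mv ssd_mobilenet_v1_coco_11_06_2017/model.ckpt.* .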

Start training


See here for the commands to start training/evaluating. (We cloned the TensorFlow Object Detection API project into the Documents directory.) I used this command to start training:
$ cd $HOME/Documents/models/research
$ python3.6 object_detection/train.py --logtostderr --pipeline_config_path=$HOME/Documents/workspace/tf/tf.config --train_dir=$HOME/Documents/workspace/tf


Training starts.
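To monitor the training with the evaluation job as well, the eval.py script next to train.py takes similar flags; something like this, where pointing --eval_dir at the info folder made earlier is my own choice (any empty directory works):
$ python3.6 object_detection/eval.py --logtostderr --pipeline_config_path=$HOME/Documents/workspace/tf/tf.config --checkpoint_dir=$HOME/Documents/workspace/tf --eval_dir=$HOME/Documents/workspace/tf/info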


I checked that my model works.


Export a trained model


To use the model in a mobile app, we need to export the model and make a .pb file.

See here for the command. It's like this:
# From tensorflow/models/research/
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path ${PIPELINE_CONFIG_PATH} \
    --trained_checkpoint_prefix ${TRAIN_PATH} \
    --output_directory output_inference_graph.pb

For example:
$ cd  ~/Documents/models/research
$ python3.6 object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path $HOME/Documents/workspace/tf/pipeline.config --trained_checkpoint_prefix $HOME/Documents/workspace/tf/model.ckpt-100 --output_directory $HOME/Documents/workspace/tf/data/output_inference_graph

Optimization


Note: as of April 2018, if you optimize a TF Detect model, it does not work properly.

Use optimize_for_inference.py to optimize the .pb file. You can find it in the tensorflow project:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/optimize_for_inference.py

You need Bazel for this command:
https://docs.bazel.build/versions/master/install.html

According to the TensorFlow page, supposing you cloned tensorflow into the Documents directory, an example of command-line usage is:
$ cd $HOME/Documents
$ bazel build tensorflow/python/tools:optimize_for_inference && \
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=frozen_inception_graph.pb \
--output=optimized_inception_graph.pb \
--frozen_graph=True \
--input_names=Mul \
--output_names=softmax
Note: you need to check what the input/output node names are with the summarize_graph tool and pass them as the input_names/output_names arguments of the optimize_for_inference tool.
For my model, the input name was "image_tensor", and the output node names were "detection_boxes, detection_scores, detection_classes, num_detections" (4 outputs).
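So for this model, the optimize_for_inference call would look something like the following, assuming the exporter above wrote frozen_inference_graph.pb into the output_inference_graph directory:
$ bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=$HOME/Documents/workspace/tf/data/output_inference_graph/frozen_inference_graph.pb \
--output=$HOME/Documents/workspace/tf/data/output_inference_graph/optimized_inference_graph.pb \
--frozen_graph=True \
--input_names=image_tensor \
--output_names="detection_boxes,detection_scores,detection_classes,num_detections"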

Supposing you cloned tensorflow into the Documents directory, the summarize_graph tool can be used like this:
$ cd $HOME/Documents
$ bazel build tensorflow/tools/graph_transforms:summarize_graph
$ bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=tensorflow_inception_graph.pb

See also:
https://github.com/tensorflow/models/issues/2283
https://stackoverflow.com/questions/41265035/tensorflow-why-there-are-3-files-after-saving-the-model
https://stackoverflow.com/questions/40028175/how-do-you-get-the-name-of-the-tensorflow-output-nodes-in-a-keras-model
How to use SSD: Single Shot MultiBox Detector
Use keras' Classifier model on android app

references:
https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9