In the previous blogs we saw how to generate data for object detection and convert it into the TFRecord format. In this blog we will learn how to use that data to train the model.
To train the model we will start from a pre-trained model and use transfer learning to fine-tune it on our dataset. I have used the MobileNet pre-trained model. For its configuration file you can go to models -> research -> object_detection -> samples -> configs -> ssd_mobilenet_v1_pets.config. This configuration file needs to be edited to match our requirements: we change the number of classes, the number of training steps, the path to the model checkpoint, and the path to the pbtxt file, as shown below.
# SSD with Mobilenet v1, configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
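The config excerpt above is truncated. As a rough sketch of the fields we edit (the checkpoint name and paths below are placeholders based on the folder layout described later in this post, not the exact values from the original file), the relevant parts of ssd_mobilenet_v1_pets.config look like this:

```
model {
  ssd {
    num_classes: 4  # up, down, left, right
    ...
  }
}
train_config {
  # placeholder checkpoint path -- point this at your downloaded MobileNet model
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
  num_steps: 200000  # adjust to your time budget
  ...
}
train_input_reader {
  tf_record_input_reader {
    input_path: "images/data/train.record"
  }
  label_map_path: "images/data/object-detection.pbtxt"
}
eval_input_reader {
  tf_record_input_reader {
    input_path: "images/data/test.record"
  }
  label_map_path: "images/data/object-detection.pbtxt"
}
```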
For the label map, create a file named object-detection.pbtxt and put the following text inside it to define the labels for our problem.
item {
  id: 1
  name: 'up'
}
item {
  id: 2
  name: 'down'
}
item {
  id: 3
  name: 'left'
}
item {
  id: 4
  name: 'right'
}
Now go to models -> research -> object_detection -> legacy and copy the train.py file to the models -> research folder.
Then create a folder named images inside the models -> research folder and put your MobileNet model, the configuration file, the train and test image folders, and the train and test CSV label files in it. Inside the images folder, create a folder named data and put your train and test TFRecord files there. The hierarchy will look like this:
images
  - data
    - object-detection.pbtxt
    - test.record
    - train.record
  - test (contains all test images)
  - train (contains all train images)
  - ssd_mobilenet_v1_pets.config
  - test_labels.csv
  - train_labels.csv
  - training
Also create a training folder inside the images folder; the model will save its checkpoints there. Now run the following command from the models -> research folder to train the model.
The training time will depend on your machine configuration and the number of steps you specified in the configuration file.
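The exact command is not preserved in this post; with the legacy train.py script and the folder layout above, the invocation typically looks like this (the paths are assumptions based on that layout):

```shell
# Train from models/research, logging progress to stderr;
# checkpoints land in images/training/
python train.py \
    --logtostderr \
    --train_dir=images/training/ \
    --pipeline_config_path=images/ssd_mobilenet_v1_pets.config
```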
Now we have our trained model, and its checkpoints are saved inside the models/research/images/training folder. In order to test this model and use it to detect objects, we need to export the inference graph.
To do this, first copy models/research/object_detection/export_inference_graph.py to the models/research folder. Then, inside the models/research folder, create a folder named "snake" to hold the inference graph. From the models -> research folder, run the following command:
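Again, the original command is not shown here; with the standard export script it typically looks like this (replace XXXX with the step number of your latest checkpoint — it and the paths are assumptions based on the layout above):

```shell
# Freeze the trained checkpoint into an inference graph under snake/
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path images/ssd_mobilenet_v1_pets.config \
    --trained_checkpoint_prefix images/training/model.ckpt-XXXX \
    --output_directory snake
```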
We now have frozen_inference_graph.pb inside the models/research/snake folder, which will be used to detect objects with the trained model.
That is all for training the model and saving the inference graph. In the next blog we will see how to use this inference graph for object detection and how to run our snake game with this trained object detection model.