Intelligent Mobile Projects with TensorFlow

Using the retrained models in the sample iOS app

The iOS simple example we saw in Chapter 1, Getting Started with Mobile TensorFlow, uses the Inception v1 model. To make the app use our retrained Inception v3 and MobileNet models for better dog breed recognition, we need to make a few changes to it. Let's first see what it takes to use the retrained quantized_stripped_dogs_retrained.pb model in the iOS simple app:

  1. Double-click the tf_simple_example.xcworkspace file in tensorflow/examples/ios/simple to open the app in Xcode
  2. Drag the quantized_stripped_dogs_retrained.pb model file, the dog_retrained_labels.txt label file, and the lab1.jpg image file we used to test the label_image script, and drop them into the project's data folder, making sure both Copy items if needed and Add to targets are checked, as shown in the following screenshot:

Figure 2.5 Adding the retrained model file and the label file to the app

  3. Click the RunModelViewController.mm file in Xcode, which uses the TensorFlow C++ API to process an input image, run it through the Inception v1 model, and get the image classification result. Change the lines:
NSString* network_path = FilePathForResourceName(@"tensorflow_inception_graph", @"pb");
NSString* labels_path = FilePathForResourceName(@"imagenet_comp_graph_label_strings", @"txt");
NSString* image_path = FilePathForResourceName(@"grace_hopper", @"jpg");

To the following with the correct model filename, label filename, and test image name:

NSString* network_path = FilePathForResourceName(@"quantized_stripped_dogs_retrained", @"pb");
NSString* labels_path = FilePathForResourceName(@"dog_retrained_labels", @"txt");
NSString* image_path = FilePathForResourceName(@"lab1", @"jpg");
  4. Also in RunModelViewController.mm, to match the input image size required by our retrained Inception v3 model (299x299, instead of the 224x224 that v1 uses), change the value 224 in both const int wanted_width = 224; and const int wanted_height = 224; to 299, and change the values of both const float input_mean = 117.0f; and const float input_std = 1.0f; to 128.0f, as shown in the sketch below:
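Put together, the preprocessing constants for the retrained Inception v3 model should now read as follows; wanted_channels, assumed here to be the sample's default of 3, stays unchanged:

const int wanted_width = 299;    // Inception v3 expects 299x299 input
const int wanted_height = 299;
const int wanted_channels = 3;   // unchanged RGB channel count
const float input_mean = 128.0f; // pixel values are normalized as (pixel - 128) / 128
const float input_std = 128.0f;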
  5. Change the values of the input and output node names from:
std::string input_layer = "input"; 
std::string output_layer = "output"; 

To the following correct values for the retrained Inception v3 model (Mul is the graph's input node, and final_result is the output node added by the retraining script):

std::string input_layer = "Mul"; 
std::string output_layer = "final_result"; 
  6. Finally, you can edit the dog_retrained_labels.txt file to remove the leading nxxxx synset ID in each line (for example, remove n02099712 in n02099712 labrador retriever) so the recognition results will be more readable; on the Mac you can do this by holding down the Option key, making a block selection, and deleting it. Alternatively, you can strip the prefix in code, as shown in the sketch after this list
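If you'd rather not edit the label file by hand, here is a minimal sketch of stripping the synset prefix at load time, assuming labels are read line by line in RunModelViewController.mm; StripSynsetPrefix is a hypothetical helper, not part of the TensorFlow sample:

#include <cctype>
#include <string>

// Hypothetical helper (not part of the TensorFlow sample): returns the
// label with a leading WordNet synset ID such as "n02099712 " removed,
// or the label unchanged if no such prefix is found.
std::string StripSynsetPrefix(const std::string& line) {
  // A synset ID is 'n' followed by eight digits and a space.
  if (line.size() > 10 && line[0] == 'n' && line[9] == ' ') {
    for (int i = 1; i <= 8; ++i) {
      if (!std::isdigit(static_cast<unsigned char>(line[i]))) return line;
    }
    return line.substr(10);
  }
  return line;
}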

Run the app now and click the Run Model button. In Xcode's console window or the app's edit box, you'll see the following recognition results, pretty consistent with the results of running the label_image script:

Predictions: 41 0.645  labrador retriever 
64 0.195  golden retriever 
76 0.0261  kuvasz 
32 0.0133  redbone 
20 0.0127  beagle 

To use the retrained MobileNet (mobilenet_1.0_224_quantized) model, dog_retrained_mobilenet10_224.pb, we follow steps similar to the previous ones: in Steps 2 and 3, we use dog_retrained_mobilenet10_224.pb; in Step 4, we keep const int wanted_width = 224; and const int wanted_height = 224;, and only change const float input_mean and const float input_std to 128.0f; and in Step 5, we use std::string input_layer = "input"; and std::string output_layer = "final_result";. These parameters are the same as those used with the label_image script for dog_retrained_mobilenet10_224.pb, and the following sketch summarizes the changed lines:
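Here is what those lines in RunModelViewController.mm would look like for the MobileNet retrained model, with all values taken from the steps above:

NSString* network_path = FilePathForResourceName(@"dog_retrained_mobilenet10_224", @"pb");
NSString* labels_path = FilePathForResourceName(@"dog_retrained_labels", @"txt");
NSString* image_path = FilePathForResourceName(@"lab1", @"jpg");

const int wanted_width = 224;    // MobileNet models take 224x224 input
const int wanted_height = 224;
const float input_mean = 128.0f;
const float input_std = 128.0f;

std::string input_layer = "input";          // the MobileNet graph's input node
std::string output_layer = "final_result";  // output node added by the retraining script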

Run the app again and you'll see similar top recognition results.