
Cloud AutoML Vision Deploy Edge to iOS tutorial

This tutorial requires access to macOS.

Terminology: See the AutoML Vision Edge terminology (https://cloud.google.com/vision/automl/docs/terminology) page for a list of terms used in this tutorial.

What you will build

In this tutorial you will download an exported custom TensorFlow Lite model from AutoML Vision Edge (https://cloud.google.com/vision/automl/docs/edge-quickstart). You will then run a pre-made iOS app that uses the model to identify images of flowers.

Note: TensorFlow is a multipurpose machine learning framework. TensorFlow can be used for anything from training huge models across clusters in the cloud to running models locally on an embedded system like your phone.


Image credit: Felipe Venâncio, "from my mother's garden" (https://www.flickr.com/photos/aeon/54377391/) (CC BY 2.0 (https://creativecommons.org/licenses/by/2.0/), image shown in app).

Objectives

In this introductory, end-to-end walkthrough you will use code to:

Run a pre-trained model in an iOS app using the TFLite interpreter.


Before you begin

Install TensorFlow

Before you begin the tutorial you need to install several pieces of software:

install TensorFlow version 1.7 (https://www.tensorflow.org/hub/installation)
install PILLOW

If you have a working Python (https://www.python.org/) installation, run the following commands to download this software:

pip install --upgrade "tensorflow==1.7.*"
pip install PILLOW

Clone the repository

Using the command line, clone the Git repository with the following command:

git clone https://github.com/googlecodelabs/tensorflow-for-poets-2  

Navigate to the directory of the local clone of the repository (the tensorflow-for-poets-2 directory). You will run all of the following code samples from this directory:

cd tensorflow-for-poets-2  

Set up the iOS app

The demo iOS app requires several additional tools:

1. Xcode
2. Xcode command line tools
3. CocoaPods

Download Xcode


Use the following link (https://developer.apple.com/xcode/downloads/) to download Xcode on your machine.

Install Xcode command line tools

Install the Xcode command line tools by running the following command:

xcode-select --install  

Install CocoaPods

CocoaPods uses Ruby, which is installed by default on macOS.

To install CocoaPods, run this command:

sudo gem install cocoapods

Install TFLite Cocoapod

Navigate to the .xcworkspace file

The rest of this codelab needs to run directly in macOS, so close Docker now (Ctrl-D will exit Docker).

Use the following command to install TensorFlow Lite and create the .xcworkspace file using CocoaPods:

pod install --project-directory=ios/tflite/  

Open the project with Xcode. You can open the project either through the command line or via the UI.

To open the project via the command line run the following command:

open ios/tflite/tflite_photos_example.xcworkspace  

To open the project via the UI, launch Xcode and select the "Open another Project" button.


After opening the project, navigate to the .xcworkspace file (not the .xcodeproj file).

Run the original app

The app is a simple example that runs an image recognition model in the iOS Simulator. The app reads from the photo library, as the Simulator does not support camera input.

Before inserting your customized model, test the baseline version of the app, which uses the base MobileNet model trained on the 1,000 ImageNet (http://www.image-net.org/) categories.

To launch the app in the Simulator, select the play button in the upper right corner of the Xcode window.

The "Next Photo" button advances through the photos on the device.

You can add photos to the device's photo library by dragging-and-dropping them onto the Simulator window.

The result should display annotations similar to this image:


Run the customized app

The original app setup classifies images into one of the 1000 ImageNet classes, using the standard MobileNet.

Modify the app so that it will use your retrained model with custom image categories.

Add your model files to the project


The demo project is configured to search for the graph.lite and labels.txt files in the ios/tflite/data/ directory.

To replace those two files with your versions, run the following commands:

cp tf_files/optimized_graph.lite ios/tflite/data/graph.lite
cp tf_files/retrained_labels.txt ios/tflite/data/labels.txt

Run your app

To relaunch the app in the Simulator, select the play button in the upper right corner of the Xcode window.

To test the modifications, add image files from the flower_photos/ directory and get predictions.

Results should look similar to this:


Image credit: Felipe Venâncio, "from my mother's garden" (https://www.flickr.com/photos/aeon/54377391/) (CC BY 2.0 (https://creativecommons.org/licenses/by/2.0/), image shown in app).

Note that the default images aren't of flowers.

To really try out the model, either add some of the training data images you downloaded earlier, or download some images from a Google search to use for prediction.

How does it work?


Now that you have the app running, look at the TensorFlow Lite-specific code.

TensorFlowLite Pod

This app uses a pre-compiled TFLite Cocoapod. The Podfile includes the cocoapod in the project:

Podfile (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/Podfile)

platform :ios, '8.0'
inhibit_all_warnings!

target 'tflite_photos_example'
       pod 'TensorFlowLite'

The code that interfaces with TFLite is all contained in the CameraExampleViewController.mm file.

Setup

The first block of interest (after the necessary imports) is the viewDidLoad method:

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

#include "tensorflow/contrib/lite/kernels/register.h"   #include "tensorflow/contrib/lite/model.h" #include "tensorflow/contrib/lite/string_util.h" #include "tensorflow/contrib/lite/tools/mutable_op_resolver.h"

...

- (void)viewDidLoad {
  [super viewDidLoad];
  labelLayers = [[NSMutableArray alloc] init];

  NSString* graph_path = FilePathForResourceName(model_file_name, model_file_type);
  model = tflite::FlatBufferModel::BuildFromFile([graph_path UTF8String]);
  if (!model) {
    LOG(FATAL) << "Failed to mmap model " << graph_path;
  }
  LOG(INFO) << "Loaded model " << graph_path;


  model->error_reporter();
  LOG(INFO) << "resolved reporter";

...

The key line in the first half of the method is model = tflite::FlatBufferModel::BuildFromFile([graph_path UTF8String]);. This code creates a FlatBufferModel from the graph file.

A FlatBuffer is a memory-mappable data structure. FlatBuffers are a key feature of TFLite because they allow the system to better manage the memory used by the model: the system can transparently swap parts of the model in or out of memory as needed.

The second part of the method builds an interpreter for the model, attaching Op implementations to the graph data structure we loaded earlier:

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

- (void)viewDidLoad {
  ...

  tflite::ops::builtin::BuiltinOpResolver resolver;
  LoadLabels(labels_file_name, labels_file_type, &labels);

  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) {
    LOG(FATAL) << "Failed to construct interpreter";
  }
  if (interpreter->AllocateTensors() != kTfLiteOk) {
    LOG(FATAL) << "Failed to allocate tensors!";
  }

  [self attachPreviewLayer];
}

If you're familiar with TensorFlow in Python, this is roughly equivalent to building a tf.Session().
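To see the moving parts in one place, the following is a condensed sketch of the same load-build-allocate sequence written as plain C++ against the TF 1.7 contrib/lite headers used above. The TfLiteSession struct, BuildSession function, and the "graph.lite" path are illustrative names only, not part of the demo app, which keeps the model and interpreter as instance variables of the view controller instead.

#include <memory>

#include "tensorflow/contrib/lite/interpreter.h"
#include "tensorflow/contrib/lite/kernels/register.h"
#include "tensorflow/contrib/lite/model.h"

// Holds the model together with its interpreter: the interpreter only
// references the FlatBuffer, so the model must stay alive alongside it.
struct TfLiteSession {
  std::unique_ptr<tflite::FlatBufferModel> model;
  std::unique_ptr<tflite::Interpreter> interpreter;
};

// Condensed sketch of the initialization performed in viewDidLoad above.
// "graph.lite" is a placeholder; the app resolves the real path from its bundle.
bool BuildSession(TfLiteSession* session) {
  session->model = tflite::FlatBufferModel::BuildFromFile("graph.lite");
  if (!session->model) return false;                 // failed to mmap the model

  tflite::ops::builtin::BuiltinOpResolver resolver;  // registers the built-in ops
  tflite::InterpreterBuilder(*session->model, resolver)(&session->interpreter);
  if (!session->interpreter) return false;           // could not wire up the graph

  // Reserve memory for every tensor before the first Invoke().
  return session->interpreter->AllocateTensors() == kTfLiteOk;
}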

Run the model


The UpdatePhoto method handles all the details of fetching the next photo, updating the preview window, and running the model on the photo.

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

- (void)UpdatePhoto{
  PHAsset* asset;
  if (photos==nil || photos_index >= photos.count){
    [self updatePhotosLibrary];
    photos_index=0;
  }
  if (photos.count){
    asset = photos[photos_index];
    photos_index += 1;
    input_image = [self convertImageFromAsset:asset
                                   targetSize:CGSizeMake(wanted_input_width, wanted_input_height)
                                         mode:PHImageContentModeAspectFill];
    display_image = [self convertImageFromAsset:asset
                                     targetSize:CGSizeMake(asset.pixelWidth, asset.pixelHeight)
                                           mode:PHImageContentModeAspectFit];
    [self DrawImage];
  }

  if (input_image != nil){
    image_data image = [self CGImageToPixels:input_image.CGImage];
    [self inputImageToModel:image];
    [self runModel];
  }
}

It's the last three lines that we are interested in.

The CGImageToPixels method converts the CGImage returned by the iOS Photos library to a simple structure containing the width, height, channels, and pixel data.

CameraExampleViewController.h (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.h)

typedef struct {
  int width;
  int height;
  int channels;
  std::vector<uint8_t> data;
} image_data;
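The data vector stores the pixel bytes row by row with the channels interleaved, so a single channel value can be located with simple index arithmetic. The helper below is only an illustration of that layout (it is not part of the demo app, and assumes the image_data struct shown above):

#include <cstdint>

// Illustrative only: reads channel `c` of the pixel at (x, y) from an
// image_data buffer, assuming row-major, interleaved channel storage.
inline uint8_t PixelChannel(const image_data& image, int x, int y, int c) {
  const int index = (y * image.width + x) * image.channels + c;
  return image.data[index];
}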

The inputImageToModel method handles inserting the image into the interpreter memory. This includes resizing the image and adjusting the pixel values to match what's expected by the model.

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

- (void)inputImageToModel:(image_data)image{
  float* out = interpreter->typed_input_tensor<float>(0);

  const float input_mean = 127.5f;
  const float input_std = 127.5f;
  assert(image.channels >= wanted_input_channels);
  uint8_t* in = image.data.data();

  for (int y = 0; y < wanted_input_height; ++y) {
    const int in_y = (y * image.height) / wanted_input_height;
    uint8_t* in_row = in + (in_y * image.width * image.channels);
    float* out_row = out + (y * wanted_input_width * wanted_input_channels);
    for (int x = 0; x < wanted_input_width; ++x) {
      const int in_x = (x * image.width) / wanted_input_width;
      uint8_t* in_pixel = in_row + (in_x * image.channels);
      float* out_pixel = out_row + (x * wanted_input_channels);
      for (int c = 0; c < wanted_input_channels; ++c) {
        out_pixel[c] = (in_pixel[c] - input_mean) / input_std;
      }
    }
  }
}

We know the model only has one input, so the float* out = interpreter->typed_input_tensor<float>(0); line asks the interpreter for a pointer to the memory for input 0. The rest of the method handles the pointer arithmetic and pixel scaling needed to copy the data into that input array.
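The scaling itself is a linear map from the 8-bit range [0, 255] to the floating-point range [-1, 1] that MobileNet-style models expect: 0 maps to (0 - 127.5) / 127.5 = -1.0 and 255 maps to (255 - 127.5) / 127.5 = 1.0. A minimal helper isolating that formula (illustrative only; NormalizeChannel is not part of the demo app):

#include <cstdint>

// Maps an 8-bit channel value in [0, 255] to a float in [-1, 1],
// using the same mean/std constants as inputImageToModel above.
inline float NormalizeChannel(uint8_t value) {
  const float kInputMean = 127.5f;
  const float kInputStd = 127.5f;
  return (value - kInputMean) / kInputStd;  // 0 -> -1.0, 127.5 -> 0.0, 255 -> 1.0
}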

Finally the runModel method executes the model:


CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

- (void)runModel {
  double startTimestamp = [[NSDate new] timeIntervalSince1970];
  if (interpreter->Invoke() != kTfLiteOk) {
    LOG(FATAL) << "Failed to invoke!";
  }
  double endTimestamp = [[NSDate new] timeIntervalSince1970];
  total_latency += (endTimestamp - startTimestamp);
  total_count += 1;
  NSLog(@"Time: %.4lf, avg: %.4lf, count: %d", endTimestamp - startTimestamp,
        total_latency / total_count, total_count);

  ...
}

Next, runModel reads back the results. To do this it asks the interpreter for a pointer to the output array's data. The output is a simple array of floats. The GetTopN method handles the extraction of the top 5 results using a priority queue; a sketch of such a helper follows the listing below.

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

- (void)runModel {
  ...

  const int output_size = (int)labels.size();
  const int kNumResults = 5;
  const float kThreshold = 0.1f;

  std::vector<std::pair<float, int>> top_results;

  float* output = interpreter->typed_output_tensor<float>(0);
  GetTopN(output, output_size, kNumResults, kThreshold, &top_results);

  ...
}
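The actual GetTopN implementation lives in the repository; the sketch below only illustrates the priority-queue approach described above, keeping the highest-scoring (probability, class_id) pairs above a threshold. The function and variable names here are illustrative, not the demo app's.

#include <algorithm>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Illustrative top-N selection in the spirit of the demo's GetTopN helper.
// Keeps at most `num_results` (score, class_id) pairs whose score is at
// least `threshold`, returned in descending score order.
void TopNSketch(const float* scores, int count, int num_results, float threshold,
                std::vector<std::pair<float, int>>* results) {
  // Min-heap: the weakest of the currently kept candidates sits on top.
  std::priority_queue<std::pair<float, int>,
                      std::vector<std::pair<float, int>>,
                      std::greater<std::pair<float, int>>> heap;
  for (int i = 0; i < count; ++i) {
    if (scores[i] < threshold) continue;
    heap.push({scores[i], i});
    if (static_cast<int>(heap.size()) > num_results) heap.pop();  // drop weakest
  }
  results->clear();
  while (!heap.empty()) {
    results->push_back(heap.top());
    heap.pop();
  }
  std::reverse(results->begin(), results->end());  // highest score first
}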


The next few lines convert those top 5 (probability, class_id) pairs into (probability, label) pairs, and then pass that result off, asynchronously, to the setPredictionValues method, which updates the on-screen report:

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

- (void)runModel {
  ...

  std::vector<std::pair<float, std::string>> newValues;
  for (const auto& result : top_results) {
    std::pair<float, std::string> item;
    item.first = result.first;
    item.second = labels[result.second];

    newValues.push_back(item);
  }
  dispatch_async(dispatch_get_main_queue(), ^(void) {
    [self setPredictionValues:newValues];
  });
}

What Next

You've now completed a walkthrough of an iOS flower classification app that uses an Edge model. You used a trained Edge TensorFlow Lite model to test an image classification app before modifying it and getting sample annotations. You then examined TensorFlow Lite-specific code to understand the underlying functionality.

The following resources can help you continue to learn about TensorFlow models and AutoML Vision Edge:

Learn more about TFLite from the official documentation (https://www.tensorflow.org/lite) and the code repository (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite).

Try the camera version of this demo app (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/ios/camera), which uses a quantized version of the model. This provides the same power in a smaller, more efficient package.

Try some other TFLite-ready models (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/models), including a speech hot-word detector and an on-device version of smart reply.

Learn more about TensorFlow in general with TensorFlow's getting started (https://www.tensorflow.org/tutorials) documentation.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), and code samples are licensed under the Apache 2.0 License (https://www.apache.org/licenses/LICENSE-2.0). For details, see our Site Policies (https://developers.google.com/terms/site-policies). Java is a registered trademark of Oracle and/or its affiliates.

Last updated December 26, 2019.
