Deploy Edge to iOS tutorial | Cloud AutoML Vision | Google Cloud

This tutorial requires access to macOS.

Terminology: See the AutoML Vision Edge terminology page (https://cloud.google.com/vision/automl/docs/terminology) for a list of terms used in this tutorial.

What you will build

In this tutorial you will download an exported custom TensorFlow Lite model from AutoML Vision Edge (https://cloud.google.com/vision/automl/docs/edge-quickstart). You will then run a pre-made iOS app that uses the model to identify images of flowers.

Note: TensorFlow is a multipurpose machine learning framework. TensorFlow can be used anywhere from training huge models across clusters in the cloud to running models locally on an embedded system like your phone.

[Screenshot: the app identifying a flower photo.] Image credit: Felipe Venâncio, "from my mother's garden" (https://www.flickr.com/photos/aeon/54377391/), CC BY 2.0 (https://creativecommons.org/licenses/by/2.0/), image shown in app.

Objectives

In this introductory, end-to-end walkthrough you will use code to run a pre-trained model in an iOS app using the TFLite interpreter.

Before you begin

Install TensorFlow

Before you begin the tutorial you need to install several pieces of software:

1. TensorFlow version 1.7 (https://www.tensorflow.org/hub/installation)
2. PILLOW

If you have a working Python (https://www.python.org/) installation, run the following commands to install both:

pip install --upgrade "tensorflow==1.7.*"
pip install PILLOW

Clone the Git repository

Using the command line, clone the Git repository with the following command:

git clone https://github.com/googlecodelabs/tensorflow-for-poets-2

Navigate to the directory of the local clone of the repository (the tensorflow-for-poets-2 directory). You will run all following code samples from this directory:

cd tensorflow-for-poets-2

Set up the iOS app

The demo iOS app requires several additional tools:

1. Xcode
2. Xcode command line tools
3. CocoaPods

Download Xcode

Use the following link (https://developer.apple.com/xcode/downloads/) to download Xcode on your machine.

Install the Xcode command line tools

Install the Xcode command line tools by running the following command:

xcode-select --install

Install CocoaPods

CocoaPods uses Ruby, which is installed by default on macOS. To install CocoaPods, run this command:

sudo gem install cocoapods

Install the TFLite CocoaPod

The rest of this tutorial needs to run directly on macOS, so close Docker now (Ctrl-D will exit Docker). Use the following command to install TensorFlow Lite and create the .xcworkspace file using CocoaPods:

pod install --project-directory=ios/tflite/

Navigate to the .xcworkspace file

Open the project with Xcode. You can open the project either through the command line or via the UI.

To open the project via the command line, run the following command:

open ios/tflite/tflite_photos_example.xcworkspace

To open the project via the UI, launch Xcode and select the "Open another Project" button.
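For reference, the complete setup from the sections above condenses to the following command sequence. These are the same commands already shown, collected in order; run them from the directory where you want the repository to live:

pip install --upgrade "tensorflow==1.7.*"
pip install PILLOW
git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
cd tensorflow-for-poets-2
xcode-select --install
sudo gem install cocoapods
pod install --project-directory=ios/tflite/
open ios/tflite/tflite_photos_example.xcworkspace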
After opening the project, navigate to the .xcworkspace file (not the .xcodeproj file).

Run the original app

The app is a simple example that runs an image recognition model in the iOS Simulator. The app reads from the photo library, as the Simulator does not support camera input.

Before inserting your customized model, test the baseline version of the app, which uses the base MobileNet model trained on the 1000 ImageNet categories (http://www.image-net.org/).

To launch the app in the Simulator, select the play button in the upper right corner of the Xcode window.

The "Next Photo" button advances through the photos on the device. You can add photos to the device's photo library by dragging and dropping them onto the Simulator window. The result should display annotations similar to this: [Screenshot: the app labeling a photo with ImageNet classes.]

Run the customized app

The original app setup classifies images into one of the 1000 ImageNet classes using the standard MobileNet. Modify the app so that it uses your retrained model with custom image categories.

Add your model files to the project

The demo project is configured to search for graph.lite and labels.txt files in the ios/tflite/data/ directory. To replace those two files with your versions, run the following commands:

cp tf_files/optimized_graph.lite ios/tflite/data/graph.lite
cp tf_files/retrained_labels.txt ios/tflite/data/labels.txt

Run your app

To relaunch the app in the Simulator, select the play button in the upper right corner of the Xcode window.

To test the modifications, add image files from the flower_photos/ directory and get predictions. Results should look similar to this:

[Screenshot: the app labeling a flower photo.] Image credit: Felipe Venâncio, "from my mother's garden" (https://www.flickr.com/photos/aeon/54377391/), CC BY 2.0 (https://creativecommons.org/licenses/by/2.0/), image shown in app.

Note that the default images aren't of flowers. To really try out the model, either add some of the training data images you downloaded earlier, or download some images from a Google search to use for prediction.

How does it work?

Now that you have the app running, look at the TensorFlow Lite specific code.

TensorFlowLite Pod

This app uses a pre-compiled TFLite CocoaPod. The Podfile includes the pod in the project:

Podfile (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/Podfile)

platform :ios, '8.0'
inhibit_all_warnings!

target 'tflite_photos_example'
       pod 'TensorFlowLite'

The code interfacing to TFLite is all contained in the CameraExampleViewController.mm file.
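Before stepping through that file, it helps to know the state the controller keeps between calls. The following is a minimal sketch of the instance variables implied by the code excerpts below; the exact declarations in the sample may differ:

#import <UIKit/UIKit.h>

#include <memory>
#include <string>
#include <vector>

#include "tensorflow/contrib/lite/interpreter.h"
#include "tensorflow/contrib/lite/model.h"

@interface CameraExampleViewController : UIViewController {
  // The memory-mapped FlatBuffer model; built once in viewDidLoad.
  std::unique_ptr<tflite::FlatBufferModel> model;
  // The interpreter that executes the graph, created by InterpreterBuilder.
  std::unique_ptr<tflite::Interpreter> interpreter;
  // Human-readable class names, one per line of labels.txt.
  std::vector<std::string> labels;
}
@end

Because model and interpreter are std::unique_ptr values, both are released automatically when the controller is deallocated.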
Setup

The first block of interest (after the necessary imports) is the viewDidLoad method:

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

#include "tensorflow/contrib/lite/kernels/register.h"
#include "tensorflow/contrib/lite/model.h"
#include "tensorflow/contrib/lite/string_util.h"
#include "tensorflow/contrib/lite/tools/mutable_op_resolver.h"
...
- (void)viewDidLoad {
  [super viewDidLoad];
  labelLayers = [[NSMutableArray alloc] init];

  NSString* graph_path = FilePathForResourceName(model_file_name, model_file_type);
  model = tflite::FlatBufferModel::BuildFromFile([graph_path UTF8String]);
  if (!model) {
    LOG(FATAL) << "Failed to mmap model " << graph_path;
  }
  LOG(INFO) << "Loaded model " << graph_path;
  model->error_reporter();
  LOG(INFO) << "resolved reporter";
  ...

The key line in this first half of the method is:

model = tflite::FlatBufferModel::BuildFromFile([graph_path UTF8String]);

This code creates a FlatBufferModel from the graph file. A FlatBuffer is a memory-mappable data structure, and these are a key feature of TFLite: they allow the system to better manage the memory used by the model, transparently swapping parts of the model in or out of memory as needed.

The second part of the method builds an interpreter for the model, attaching Op implementations to the graph data structure loaded earlier:

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

- (void)viewDidLoad {
  ...
  tflite::ops::builtin::BuiltinOpResolver resolver;
  LoadLabels(labels_file_name, labels_file_type, &labels);

  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) {
    LOG(FATAL) << "Failed to construct interpreter";
  }
  if (interpreter->AllocateTensors() != kTfLiteOk) {
    LOG(FATAL) << "Failed to allocate tensors!";
  }
  [self attachPreviewLayer];
}

If you're familiar with TensorFlow in Python, this is roughly equivalent to building a tf.Session().

Run the model

The UpdatePhoto method handles all the details of fetching the next photo, updating the preview window, and running the model on the photo:

CameraExampleViewController.mm (https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/ios/tflite/CameraExampleViewController.mm)

- (void)UpdatePhoto {
  PHAsset* asset;
  if (photos == nil || photos_index >= photos.count) {
    [self updatePhotosLibrary];
    photos_index = 0;
  }
  if (photos.count) {
    asset = photos[photos_index];
    photos_index += 1;
    input_image = [self convertImageFromAsset:asset
                                   targetSize:CGSizeMake(wanted_input_width,
                                                         wanted_input_height)];
    ...
  }
}
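The excerpt above stops before the inference itself. Conceptually, the rest of the pipeline copies the resized image pixels into the interpreter's input tensor, invokes the interpreter, and reads one confidence score per label from the output tensor. Below is a minimal sketch of that step using the TFLite C++ API; the method name runInference is hypothetical, and the sample app's actual helper methods differ in detail:

// Hypothetical helper illustrating the inference step; the sample app
// spreads this logic across its own methods.
- (void)runInference {
  // Pointer to the model's first input tensor:
  // wanted_input_width * wanted_input_height * 3 floats.
  float* input = interpreter->typed_input_tensor<float>(0);
  // ... fill `input` with the normalized RGB pixels of input_image ...

  // Execute the graph; roughly the TFLite equivalent of session.run().
  if (interpreter->Invoke() != kTfLiteOk) {
    LOG(FATAL) << "Failed to invoke interpreter!";
  }

  // The first output tensor holds one score per entry in labels.
  float* output = interpreter->typed_output_tensor<float>(0);
  for (size_t i = 0; i < labels.size(); ++i) {
    NSLog(@"%s: %.3f", labels[i].c_str(), output[i]);
  }
}

The output scores are aligned with the lines of labels.txt, so the index of the largest value is the predicted category.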
