Are you ready for the AI-100 Exam? Self-assess with the Whizlabs FREE TEST. AI-100 WhizCard

Quick Bytes for you before the exam!

WHIZCARD

The information provided in WhizCards is for educational purposes only; it was created in our effort to help aspirants prepare for the AI-100 certification exam. Though references have been taken from Microsoft documentation, it is not intended as a substitute for the official docs. The document can be reused, reproduced, and printed in any form; ensure that appropriate sources are credited and required permissions are received.

Bing Services

How to use the Bing Search APIs
 Create a Cognitive Services API account and make sure it has access to the Bing Search APIs. You must have an Azure subscription.
 A request is sent on every search; the request gets processed and a JSON message is returned.
 Easy to call from any programming language that can make HTTP requests and parse JSON.

Bing Autosuggest API
 Returns a list of suggested queries based on the partial query string in the search box.
 Helps to improve the user's search experience.

Bing Image Search API
 Adds image search capabilities: image-only search results, filtering of images by editing the query, and thumbnail previews for the images returned.
 Expands search capabilities by including Bing's suggested search queries.

Bing News Search API
 Adds news search capability: finds news by sending a search query and returns relevant news articles.
 Integrated with Bing Autosuggest.

Bing Spell Check API
 Contextual grammar and spell checking; utilises machine learning and statistical machine translation.
 Handles common expressions in text, informal terms, brands, titles, and other popular expressions, providing accurate and contextual corrections.

Bing Video Search API
 Adds video search capabilities: filters, edits, and displays videos based on the query, with thumbnail previews.
 Supports customising the search for trending videos from the internet.

Bing Visual Search API
 Returns insights for an image; upload an image or provide its URL.
 Insights returned include: visually similar images, shopping sources, web pages that include the image, and well-known people, places, and things.

Bing Web Search API
 Provides instant answers; search results can be configured to include web pages, images, videos, news, and translations.
 JSON-based, Bing-powered search, optimal for applications; identifies and removes unwanted Unicode characters from search results.
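The request/response cycle described above (send a request with the subscription key, get JSON back, parse it) can be sketched in Python. This is a minimal sketch, not from the card itself: the function names are illustrative and `YOUR_SUBSCRIPTION_KEY` is a placeholder for your own Cognitive Services key.

```python
# Sketch: calling the Bing Web Search API via REST.
# The endpoint and header below follow the v7.0 REST interface;
# the key value is a placeholder, not a real credential.

def build_web_search_request(query, key, count=10):
    """Return the URL, query params, and headers for a Bing Web Search call."""
    endpoint = "https://api.bing.microsoft.com/v7.0/search"
    params = {"q": query, "count": count, "textDecorations": False}
    headers = {"Ocp-Apim-Subscription-Key": key}
    return endpoint, params, headers

def extract_page_names(response_json):
    """Pull the page names out of a Web Search JSON response."""
    pages = response_json.get("webPages", {}).get("value", [])
    return [page["name"] for page in pages]

# With the `requests` library, the call itself would be:
#   resp = requests.get(endpoint, params=params, headers=headers)
#   names = extract_page_names(resp.json())
```

The same pattern (one key header, one GET, parse JSON) applies to the other Bing Search APIs; only the route and the response shape change.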
Bing Custom Search API
 Lets you customise search: specify the domains and webpages for Bing to search, with results tuned by pinning, boosting, and demoting content.
 Creates a custom view of web search; use the Bing Custom Search portal to create a customised search instance.
 Call the Bing Custom Search API to create, collaborate on, and test custom instances and integrate them, or integrate a hosted UI using JS code.

Bing Entity Search API
 Returns results such as entities and places; can be called from many languages via HTTP requests, parsing JSON, or the SDKs.
 Results can be used by restaurants, hotels, and local businesses.
 Returns real-time search suggestions, and can return multiple entities when a term has multiple meanings.
 How to use: create a Cognitive Services API account, send a request to the API with a valid search query, and process the API response by parsing the JSON.

Azure Anomaly Detector

Azure Cognitive Services make AI accessible to apps without requiring machine-learning expertise, helping applications hear, speak, search, understand, and accelerate. Anomaly Detector is one such offering:

 Anomaly detection is performed in real time; change-point detection is performed as a batch over the data.
 Enables incorporating anomaly-detection capabilities into apps and helps users identify problems quickly; a simple API call is enough to embed the capability.
 Does not require a background in AI and/or ML.
 Ingests time-series data irrespective of its kind.
 The API also gives additional information about the data.

The Anomaly Detector API provides two detection modes:
1. Batch: detects the anomalies across the series as a batch.
2. Streaming: detects anomalies as the data is generated, performing anomaly detection on the latest data point.
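The batch mode above expects a JSON body containing the time series and its granularity. A minimal sketch of building that body follows; the field names mirror the Anomaly Detector REST API, while the timestamps and values are made-up sample data and the helper name is our own.

```python
# Sketch: building the JSON body for an Anomaly Detector batch detection call.
# Field names (series, granularity, sensitivity) follow the Anomaly Detector
# REST API; the sample points below are invented for illustration.

def build_detect_request(points, granularity="daily", sensitivity=95):
    """points: list of (iso_timestamp, value) tuples -> request body dict."""
    return {
        "series": [{"timestamp": ts, "value": v} for ts, v in points],
        "granularity": granularity,
        "sensitivity": sensitivity,
    }

body = build_detect_request([
    ("2023-01-01T00:00:00Z", 32.1),
    ("2023-01-02T00:00:00Z", 33.8),
    ("2023-01-03T00:00:00Z", 310.0),  # an obvious spike
])
# POST this body to your resource's batch ("entire") detection route;
# the response flags each point and returns the expected value and
# upper/lower boundaries per point.
```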

 The Anomaly Detector API selects the best anomaly-detection model for your data, ensuring the highest accuracy.
 Upper and lower anomaly-detection boundaries are returned, and an anomaly result is returned for each data point; the expected values can be used to see the range of normal values alongside the anomalies in the data.
 The Anomaly Detector API can be used as a stateless service.

Steps to use Anomaly Detector with Power BI:
 An Azure subscription and Microsoft Power BI Desktop are required.
 Load and format the time-series data.
 Create a function to send the data and format the response.
 Configure data-source privacy and authentication.
 Visualise the Anomaly Detector API response; anomalous data points are displayed.

Anomaly Detector API – Best Practices
 Aptly prepare the time-series data.
 Use the Anomaly Detector API parameters appropriately.

Categories of Cognitive Services:
 Vision: helps in recognition, identification, captioning, indexing, and moderation of pictures, videos, and digital ink content.
 Speech: speech conversion (speech-to-text and text-to-speech), translation, speaker verification, and speaker recognition.
 Search: facilitates adding the Bing Search APIs with a simple API call.
 Language: allows processing with pre-built scripts, such as sentiment evaluation and determining what the user wants.
 Decision: helps in building apps that provide recommendations for quick, informed, and efficient decision-making.

Azure Bot Services

Bot – what is it? Bots are software programs that perform tasks like a computer would. The functions performed by bots are generally repetitive tasks, and they are based on intelligence.

Azure Bot Service:
 Creates and manages intelligent bots, and creates agents capable of carrying on conversations.
 Does not require you to develop any AI resources yourself.
 Provides integrated connectivity which is scalable, and a development service for building intelligent bots.
 The Bot Framework Service is a component of the Azure Bot Service; it facilitates the exchange of information between the user's app and the bot.

Building a bot – important steps: Plan > Build > Test > Evaluate > Connect > Publish.

Azure Databricks
 Is a cloud-based data engineering tool used for data processing and reworking massive quantities of data.
 Helps in data exploration assisted by machine-learning models.
 Is used for reading data from multiple data sources and then turning it into breakthrough insights by using Spark.

Azure Databricks SQL Analytics
 Run and execute SQL queries on your data lake.
 Helps in creating visualization dashboards, and in visualizing and exploring the data.

Azure Databricks Workspace
 Is an interactive workspace.
 The data in a data lake lands here for persistent long-term storage, either in Azure Blob Storage or Azure Data Lake.

How to create an Azure Databricks workspace:
Step 1: Log in to the Azure portal.
Step 2: Navigate to Create a resource > Analytics > Azure Databricks and specify values for creating a Databricks workspace.
Step 3: Select Review + Create, then Create.
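Once a workspace exists, it can also be driven programmatically. As a rough sketch (not from the card): the Databricks REST API authenticates with a bearer token; the workspace URL and personal-access token below are placeholders you would take from your own workspace.

```python
# Sketch: building an authenticated request to the Azure Databricks REST API
# (2.0) to list clusters. The workspace URL and token are placeholders.

def build_clusters_list_request(workspace_url, token):
    """Return the URL and headers for listing clusters in a workspace."""
    url = f"{workspace_url.rstrip('/')}/api/2.0/clusters/list"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = build_clusters_list_request(
    "https://adb-1234567890123456.7.azuredatabricks.net", "dapiXXXX")
# requests.get(url, headers=headers) would return JSON describing the clusters.
```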

Azure Bot Service/Bot Framework offerings:
 Bot development using the Bot Framework.
 Bot Framework Tools to cover the complete workflow of bot development.
 Messages and events between bots and channels are exchanged by using the Bot Framework Service (BFS).
 Helps with bot deployment in Azure.
 Helps with channel configuration.

Bot Service – steps to create:
Step 1: Log in to the Azure portal.
Step 2: Click the Create a resource link, available in the upper left-hand corner of the Azure portal.
Step 3: Enter "bot" in the search box and select Web App Bot from the drop-down list.
Step 4: Click the Create button on the Web App Bot page.
Step 5: Make the service and deploy the bot (on to the cloud) by clicking Create.

Azure Face Service
The AI-powered Azure Face service can be used to detect, recognize, and analyse human faces in images:
 Each detected face corresponds to a faceRectangle field in the response; its coordinates are returned, so the location and size of the face can be determined.
 Faces are listed in size order in the API response.
 Face-related attributes that can be extracted include: gender, age, emotions, hair styles, and glasses.
 Supports face verification, finding similar faces, face matching, and authentication between objects and recognised faces.

Immersive Reader
 Improves reading comprehension by use of AI/ML.
 Can be integrated with web applications.
 Facilitates improved readability by isolating content.
 Pictures are displayed for common words and terms.
 Highlights verbs, pronouns, nouns, etc.
 Reads content out loud; speech synthesis is integrated with the Immersive Reader service.
 Can perform translation of the content in real time.
 Can split words into syllables.

Text Analytics
Is a part of Azure Cognitive Services. It is a collection of ML and AI algorithms and provides insights related to sentiment, entities, relations, and key phrases in unstructured text. Natural language processing (NLP) is used for:
 Sentiment analysis,
 Topic detection,
 Language detection,
 Key phrase extraction,
 Document categorization,
 Classification of documents, e.g. labelling documents as sensitive or spam.

Text-to-Speech
Facilitates conversion of text to speech (the synthesised speech is human-like). Can be used to enable not only applications but also tools and devices to convert text and output it as speech.
 Speech synthesis: performs conversion from text to speech using standard, neural, or custom voices. Can be done by using the Speech SDK or the REST API.
 Asynchronous synthesis of long audio: ideal for files with a duration longer than typically 10 minutes, in scenarios like audio books or lectures. The synthesis performed is asynchronous; the responses returned are generally not in real time, and requests are generally sent asynchronously, followed by polling for the responses. The downloadable synthesised audio is made available via the Long Audio API.
 Standard voices: created by use of Statistical Parametric Synthesis and/or Concatenation Synthesis techniques; the output voices are highly intelligible and sound natural. Easily enables applications to speak in multiple languages (45 languages).
 Neural voices: generally used for chat-bot and voice-assistant interactions in a more natural and engaging manner. Can be used for audiobooks, as they can convert digital texts (e-books) to audiobooks, and in in-car navigation systems. Custom neural voices are also supported.

Speech Synthesis Markup Language (SSML)
Is an XML-based markup language used for customization of text-to-speech outputs. Can help with:
• Pitch adjustment,
• Adding of pauses,
• Improvement of pronunciation,
• Speeding up or slowing down the speaking rate.
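The SSML customizations listed above are plain XML. A minimal sketch of composing such a document follows; the voice name is illustrative (substitute any voice available in your region) and the helper name is our own.

```python
# Sketch: composing an SSML document for text-to-speech, with prosody
# adjustments (speaking rate and volume). The voice name is illustrative.

def build_ssml(text, voice="en-US-JennyNeural", rate="-10%", volume="+20%"):
    """Wrap text in SSML with prosody (rate/volume) adjustments."""
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<prosody rate="{rate}" volume="{volume}">{text}</prosody>'
        "</voice></speak>"
    )

ssml = build_ssml("Welcome to the AI-100 revision notes.")
# POST this document (Content-Type: application/ssml+xml) to the service's
# text-to-speech endpoint to receive the synthesized audio.
```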
SSML can additionally increase or decrease the volume, and attribute multiple voices to a single document.

Speech-to-Text / Speech Translation
Enables speech-to-speech and speech-to-text translation of audio streams, in real time and for multiple languages. With the Speech SDK, applications, devices, and tools have access to:
 Source transcriptions and translation outputs for the provided audio.
 Interim transcriptions: as speech is detected, the translation results are returned, and the resultant final results can be converted to synthesized speech.
Recognition results are provided along with speech-to-text translation; interim recognition results and translation results are streamed.

Speech CLI
A command-line tool that facilitates using the Speech service without the need for code. It requires no to minimal setup, is production-ready, and is scalable for running larger processes by use of automated .bat and/or shell scripts.
When to use the Speech CLI: when experimenting with Speech service features with minimal setup and no code, or when the requirement is a simple one involving a production application using the Speech service.
When to use the Speech SDK instead:
 When integration of Speech service functionality is required within a specific language/platform (for example C#, Python, C++).
 When the need is complex, requiring advanced service requests or the development of custom behaviour, including response streaming.
Speech CLI capabilities:
 Speech recognition: converts speech to text from audio files or directly from a microphone, and performs transcription.
 Speech synthesis: converts text to speech using input from text files or input directly from the command line.
 Speech translation: performs audio translation.

Translator
A machine-translation service which is Microsoft Azure cloud-based. Two different approaches power Microsoft's translation engine:
 statistical machine translation (SMT),
 neural machine translation (NMT).
Core features:
 Translate text in near real time by using a REST API call.
 Supports multiple languages.
 Uses neural machine translation technology, and also offers statistical machine translation.

Custom Translator
An extension of Translator that allows building neural translation systems. Can be used to:
 Perform translation of text with Translator.
 Perform translation of speech with Microsoft Speech services.
 Customize the neural translation system and provide improved (terminology-specific and style-specific) translations when used along with Translator.
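The "translate text in near real time by using a REST API call" feature above can be sketched as follows. The endpoint and parameters follow the public Translator v3.0 interface; the key and region values are placeholders, and the helper names are our own.

```python
# Sketch: constructing a Translator v3.0 REST call and flattening
# the response. Key/region are placeholders, not real credentials.

def build_translate_request(text, to_langs, key, region="westeurope"):
    """Return URL, params, headers, and body for a /translate call."""
    url = "https://api.cognitive.microsofttranslator.com/translate"
    params = {"api-version": "3.0", "to": to_langs}
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    }
    body = [{"text": text}]
    return url, params, headers, body

def extract_translations(response_json):
    """Flatten the translated strings out of a v3 /translate response."""
    return [t["text"] for item in response_json for t in item["translations"]]

# A response for "Hello" -> German would look roughly like:
sample = [{"translations": [{"text": "Hallo", "to": "de"}]}]
```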

Azure Kinect DK
Is a developer kit enabled with advanced AI sensors, providing computer vision models and speech models. Contains: a depth sensor, and a spatial microphone array with a video camera and an orientation sensor (including SDKs).

Personalizer
Is an API service offering from Microsoft.
 Helps to increase customer engagement, customer loyalty, and customer advocacy by utilizing customer data.
 Uses reinforcement learning and continues learning the best course of action to take; the best action is determined based on reinforcement learning (reward scores and collective behaviour).
 Use cases: product suggestion, correct ranking of products.

Metrics Advisor
Is part of Azure Cognitive Services; uses AI to monitor data and detect anomalies in time-series data.
 Improves production and reduces downtime.
 Provides guidance for optimization.
 Resource costs are charged to the Azure subscription.
Helps by automating the process of providing a web-based API workspace for:
 Data ingestion,
 Anomaly detection, and
 Diagnostics.

Bonsai
An AI development platform which is low-code.
 Helps in modelling applications.
 Is enabled to make decisions independently.
 Runs on Microsoft Azure.
 Simplifies machine teaching by integrating it with deep reinforcement learning, which enables the user to train and deploy systems that are smart and autonomous.

QnA Maker
Is a cloud-based service that facilitates NLP (Natural Language Processing).
 Helps to create a conversational question-and-answer layer on top of existing data.
 Could be used to build a knowledge base; the knowledge base can be built by extracting questions and answers from FAQs, manuals, and documents.
 Does not require any ML knowledge or expertise.
When to use: when the available information is static, when the requirement is to provide the same answer to a particular request, question, or command, when the information is to be filtered based on meta-information, or for managing a bot conversation based on static information.

How to make a QnA Maker bot:
Step 1: Pre-requisite: log in to your Azure account.
Step 2: Click on "Create a Resource" (available at the screen's left-side corner).
Step 3: Click "AI + Machine Learning".
Step 4: Select Web App Bot Service.
Step 5: Provide values for the below: bot name, resource group, etc.
Step 6: Select the Bot Template "Question and Answer".
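Once a knowledge base is published, the bot (or any client) queries it over REST. A minimal sketch follows; the endpoint host, knowledge-base id, and endpoint key below are placeholders from your own resource, and the helper names are our own.

```python
# Sketch: querying a published QnA Maker knowledge base via its
# generateAnswer operation. All identifiers below are placeholders.

def build_generate_answer_request(endpoint, kb_id, endpoint_key, question):
    """Return URL, headers, and body for a generateAnswer call."""
    url = f"{endpoint.rstrip('/')}/knowledgebases/{kb_id}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {endpoint_key}",
        "Content-Type": "application/json",
    }
    body = {"question": question, "top": 1}
    return url, headers, body

def best_answer(response_json):
    """Return the highest-ranked answer text, or None if no match."""
    answers = response_json.get("answers", [])
    return answers[0]["answer"] if answers else None
```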

Metrics Advisor does not require machine-learning knowledge and can be used for:
 Multi-dimensional data analysis via multiple data sources,
 Identification and correlation of anomalies,
 Configuration and fine-tuning of the anomaly-detection model used,
 Diagnosis of anomalies and help with root-cause analysis.
Workflow (on-boarding the data, fine-tuning anomaly detection, creating configurations):
Step 1: Create an Azure resource for Metrics Advisor.
Step 2: Build your first monitor using the web portal (on-board the data, perform fine-tuning of detection).
Step 3: Send alerts (customize and subscribe to alerts).
Step 4: Customize your instance by using the REST API.

Microsoft Speaker Recognition
Provides solutions related to voice recognition. It facilitates organizations to utilize deep-learning algorithms, overcome issues pertaining to poor sound quality, and perform speaker identification accurately by use of the distinctive characteristics of the voice. Speech information is protected by security protocols which are enterprise-grade; these prevent hacking attempts to gain unauthorized access to confidential data and ensure compliance.
Main features:
 Verification of a speaker,
 Identification of a speaker,
 Built-in security features,
 Compliance management,
 Intelligence API,
 Voice authentication,
 Customer support and customer engagement.
Benefits:
 Dynamic speaker identification,
 Compliance management,
 Flexible pricing.

Microsoft Genomics
Is an offering from Microsoft Azure; it implements BWA (Burrows-Wheeler Aligner) and GATK (Genome Analysis Toolkit) in the Microsoft Azure cloud.

Azure Machine Learning
Is a cloud-based service that helps in building, testing, and deploying predictive analytics solutions.

Machine Learning Studio (MLS):
 Is an interactive, drag-and-drop workspace used to build, publish, and save ML models as web services; it can read Blob, Table, or text data (tsv, csv, Hive, SQL or Azure Tables, oData), and custom apps can consume the models.
 Enables drag-and-drop of datasets and modules onto an interactive canvas, and connecting them to make an experiment.
 Helps in editing an experiment, saving an experiment, running an experiment, converting a training experiment to a predictive experiment, and publishing an experiment as a web service.
 Features a number of ML algorithms for training, scoring, and validating.

Dataset – what is it?
A dataset is data uploaded to MLS to be consumed in the modelling process.
 A variety of datasets are available at MLS; the list of datasets is available at the left of the canvas.
 Datasets can be connected only to modules.

Module – what is it?
An algorithm for your data.
 Modules in general are connected to either datasets or other modules.
 All required parameters must be set for every module.
 A module can have a group of parameters used to configure its internal algorithms.
 Upon selecting a module on the canvas, the module's parameters are displayed within the Properties pane to the right of the canvas; modification of the parameters can be done for model tuning.

Experiments – characteristics:
 Have a minimum of one dataset.
 Have a minimum of one module.

Machine Learning implementation flow: create experiments (preprocess data, extract and enrich features from dataset samples) > write datasets to ML Studio > analysis and reduction > test and iterate > train > score.

Language Understanding (LUIS)
Helps identify information of value in conversations: it performs interpretation of user goals (also known as intents), extracts valuable information from sentences (also known as entities), and helps in easily creating language bots.
Example apps: social-media feedback monitoring and response, speech-enabled desktop apps for order tracking.

Integrating an app with LUIS: sign in to the LUIS portal and select your subscription and authoring resource; save the app models as a JSON file; select Import JSON to bring the app into the LUIS portal; build, publish, and assign the app to the authoring resource.

Steps to create a LUIS chat bot:
Step 1: Perform sign-in to the LUIS portal.
Step 2: Select the subscription and authoring resource.
Step 3: Create a new app.
Step 4: Add a prebuilt domain.
Step 5: Add intents and entities.
Step 6: Train the LUIS app.
Step 7: Test the app.
Step 8: Publish the app and get the endpoint URL.
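After Step 8, hitting the published endpoint returns a prediction JSON containing the intents and entities mentioned above. As a sketch, reading that response looks like this; the JSON below mirrors the shape of a LUIS v3 prediction response, but the intent and entity names are made-up examples.

```python
# Sketch: parsing a LUIS v3 prediction response into the top intent
# and its entities. The sample JSON below is invented for illustration.

def parse_prediction(response_json):
    """Return (top_intent, entities dict) from a LUIS v3 prediction."""
    prediction = response_json["prediction"]
    return prediction["topIntent"], prediction.get("entities", {})

sample = {
    "query": "book a flight to Paris",
    "prediction": {
        "topIntent": "BookFlight",
        "intents": {"BookFlight": {"score": 0.97}},
        "entities": {"geographyV2": [{"location": "Paris"}]},
    },
}
top, entities = parse_prediction(sample)
```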

(Continuing the ML flow: the system is trained using dataset samples, the outcome is scored, accuracy is evaluated by gauging it against the results, and the resulting scored dataset is written.)

Azure Open Datasets
 Public datasets that are curated.
 Can be used for adding scenario-specific features to ML solutions; usage of Azure Open Datasets helps to achieve models with more accuracy.
 These datasets are available in the Azure cloud and are integrated into Azure Machine Learning.
 The datasets are readily available to Azure Databricks and Machine Learning Studio (classic), and are accessible through the APIs.
 Can be used with other products, like Power BI and Azure Data Factory.
 Include data in the public domain.
 Building blocks of Open Datasets: data exploration, model training, and inferencing (consumed by Azure ML, Azure Databricks, VMs, Power BI, apps, etc.).

Azure Cognitive Search
Is a cloud search service offering with built-in AI capabilities; it is highly scalable, and it enriches and facilitates easily performing identification and exploration of relevant content at scale.
Services provided by Cognitive Search:
 Indexing and query execution,
 Persistent storage for the search indexes,
 A language for composing queries; simple and complex queries can be composed,
 AI-centered analysis, which helps in creating searchable content from images, raw text, and application files.
Cognitive Search is a search service residing between external data stores containing (specifically un-indexed) data and a client app; it enables the client app to send query requests to a search index and handles the responses.

Azure Cognitive Search – AI enrichment
 AI enrichment is a capability of Azure Cognitive Search indexing.
 It is used for text extraction from images, blobs, and other unstructured data sources.
 Enrichment and extraction make content searchable in an index or a knowledge store.
 Implemented using cognitive skills (in the indexing pipeline).
Steps in an enrichment pipeline:
1. Indexers populate an index based on field-to-field mappings between the index and your data source.
2. Skills, attached to indexers, intercept and enrich documents according to the skillset you define.
3. Once indexed, you can access content via search requests through all query types.
 Integration with Azure data is enabled through: search indexers, automation of data import, and automation of data refresh.

Core search features, end-to-end exploration – steps:
Step 1: Create a search service (can be done on the free tier, or on a paid tier for dedicated resources).
Step 2: Create a search index, using the Azure portal, the REST API, the .NET SDK, or any other SDK.
Step 3: Upload content to the index. The "push" model can be used to push JSON documents from any source; the "pull" model (indexers) can be used if the source data resides on Azure.
Step 4: Query the index, using Search Explorer in the portal, the REST API, the .NET SDK, or any other SDK.
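The "push" model in Step 3 sends a batch of JSON documents with an action per document. A minimal sketch of building that request follows; the service name, index name, api-version, and key are placeholders, and the helper name is our own.

```python
# Sketch: pushing JSON documents into an Azure Cognitive Search index
# via the REST "push" model. All identifiers below are placeholders.

def build_index_push_request(service, index, api_key, docs,
                             api_version="2020-06-30"):
    """Return URL, headers, and body for a documents-index call."""
    url = (f"https://{service}.search.windows.net/indexes/{index}"
           f"/docs/index?api-version={api_version}")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    # Each document carries an action; "upload" inserts or replaces it.
    body = {"value": [{**doc, "@search.action": "upload"} for doc in docs]}
    return url, headers, body

url, headers, body = build_index_push_request(
    "my-service", "hotels", "fake-key",
    [{"hotelId": "1", "name": "Sea View"}])
```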

[Diagram – Azure Open Datasets architecture: data from public sources such as NOAA is staged (pull or push model) on Azure infrastructure into Azure Blob Storage or Azure Data Lake (Gen2), or into Cosmos DB or a similar database; end consumers access the datasets in curated format through the Open Datasets SDK/REST API, or in raw format through the Storage SDK.]

[Diagram – Azure Cognitive Search architecture: your content is fully indexed by the indexing engine into indexes and other structures, with refreshes (e.g. via Azure Data Factory); your app (1) collects user input, (2) formulates and sends query requests to the query engine through the data-access layer, and (3) handles responses, either as (a) a result set or (b) single documents.]

Azure Computer Vision
Helps in processing images and returning information based on their visual features, by providing access to advanced algorithms.
 Facilitates determination of whether content is offensive or potentially offensive.
 Helps in finding brands, finding objects, and finding human faces.
 Enables multiple DAM (digital asset management) scenarios.
 Has OCR (Optical Character Recognition) capabilities: extraction of printed text and handwritten text from images and documents can be done by use of the Read API. Helps in identifying and detecting text in images with the assistance of OCR; is optimized to work with receipts, posters, business cards, letters, and whiteboards.
 Performs detection of faces.
 Can be used to do object detection.

Object detection using Computer Vision:
 Is similar to tagging, but bounding-box coordinates (in pixels) are included in the result for each object found.
 Enables processing images and finding the relationships between the objects in the particular image(s).
 Helps in determining multiple instances of the same tag in an image.
 Tag application on the basis of objects in an image, or identified living things in an image, can be done by use of the Detect API.

Tagging with Computer Vision:
Computer Vision algorithms provide tags when an image is uploaded or an image URL is provided, based on: objects in the image, living beings in the image, and actions identified in the image. The tagging is performed for elements alongside the main subject, for example: a person in the foreground, furniture, animals, the indoor or outdoor setting, accessories, tools, and gadgets.
Computer Vision can also detect human faces and generate various aspects related to a face, like age, gender, and a rectangle for each detected face.

Azure Content Moderator
 Is an AI service that performs Artificial-Intelligence-powered content moderation, including handling offensive content material, sexually explicit or suggestive content material, profanity in the content, personal data, risky content, and content that is not desirable.
 Performs moderation of the content by scanning the text, scanning the images, and scanning the videos; upon scanning the object (image, text, video), it automatically applies flags to the content.
 Consists of web-service APIs and is readily available through REST calls and the .NET SDK.
 A Review tool (which facilitates reviewers assisting the service and improving moderation) is also included.

Text moderation: performs text scanning, and identifies and moderates content for offensive material, sexually explicit or suggestive material, profanity, and personal data.
Custom term lists: performs scanning of text against a list of custom-defined terms in addition to the built-in terms. Custom lists can be used to block or allow content in accordance with content policies.
Image moderation: performs scanning of images and moderates images with adult or racy content.
Custom image lists: scanning of images is performed against a custom list of images. Using custom image lists helps in filtering out content that recurs and is unintended.
Video moderation: performs scanning of videos, and identifies and moderates adult video content and racy video content; time markers for the content are returned.

Moderation APIs: input (text, image, or video) and input-specific settings are sent to the Content Moderator API endpoint (API key + URL); the response returns operation-specific insights as JSON (classification labels and classification scores).

Review APIs: used with the Review API endpoint and API key to create reviews, review teams, and human-in-the-loop workflows:
 Create reviews and review teams for human reviewers.
 Integrate the moderation pipeline and create reviews.
 Define workflows, and create and automate workflows (jobs create reviews when the workflow decides a review is needed).
 Moderate with the review tools.
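The Moderation API flow above (input + settings in, JSON insights out) can be sketched for text moderation. The route follows the ProcessText/Screen REST operation; the endpoint, key, and sample response below are placeholders and illustrative data, and the helper names are our own.

```python
# Sketch: screening text with Content Moderator's text-moderation
# operation. Endpoint and key are placeholders; the sample response
# JSON is invented for illustration.

def build_screen_request(endpoint, key, text, classify=True, pii=True):
    """Return URL, params, headers, and body for a ProcessText/Screen call."""
    url = (f"{endpoint.rstrip('/')}/contentmoderator/moderate/v1.0"
           "/ProcessText/Screen")
    params = {"classify": str(classify), "PII": str(pii)}
    headers = {"Ocp-Apim-Subscription-Key": key,
               "Content-Type": "text/plain"}
    return url, params, headers, text

def flagged_terms(response_json):
    """List the custom/built-in terms the screen call flagged."""
    return [t["Term"] for t in (response_json.get("Terms") or [])]
```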
 The face detection feature of Computer Vision is provided through the Analyze Image API; the Analyze Image API can be called by the native SDKs and by REST calls.

Azure Data Science Virtual Machine
Azure Data Science Virtual Machine (DSVM) is a customized VM image built solely for data science on Microsoft's Azure cloud.

Azure Custom Vision
Is an image recognition service that helps in building, deploying, and improving image identifiers. An image identifier performs the application of labels (representing classes/objects) to images, based on their visual characteristics.

Azure Data Science Virtual Machine (DSVM)
 Has popular data science tools pre-configured and pre-installed.
 Users can readily start using it to build advanced analytics and intelligent applications.

Steps to set up an Azure Data Science Virtual Machine (DSVM):
Step 1: Pre-requisite: availability of an Azure account.
Step 2: Perform login to the Azure account.
Step 3: Locate the virtual machine listing by typing in "data science virtual machine" and selecting "Data Science Virtual Machine - Windows 2019".
Step 4: At the bottom, select the Create button.
Step 5: Upon doing the above, the user shall be redirected to "Create a virtual machine".
Step 6: The user shall now fill in the Basics tab: Subscription, Resource group, Virtual machine name, Location, Size, Username, and Password.
Step 7: Verification of all the information should be done, followed by selecting Review + create, and then selecting Create.

Azure Custom Vision – details:
 Allows specifying labels and training custom models to detect those labels (the Computer Vision service does not allow this).
 Image analysis by the Custom Vision service is done by the use of ML algorithms.
 The following need to be created by the user when using the Custom Vision service: Custom Vision Training and Prediction resources in Azure.

Building an image classifier (by using the web portal):
Step 1: Create Custom Vision resources and a new project.
Step 2: Choose training images.
Step 3: Upload the images.
Step 4: Tag the images.
Step 5: Train the classifier.
Step 6: Evaluate the classifier.

Steps to build an object detector (with the Custom Vision website):
Step 1: Create Custom Vision resources and a new project.
Step 2: Choose training images.
Step 3: Upload the images.
Step 4: Tag the images.
Step 5: Train the detector.
Step 6: Evaluate the detector.
Step 7: Manage training iterations.

Steps to test and retrain the model (with the Custom Vision service):
 Testing the model: select Quick Test from the top menu in your project, which opens the Quick Test window; select the Test Image field and enter the URL of the intended image. The selected image appears in the middle of the page, and the results come up below the image in a two-column table (Tags and Confidence).
 Training the model using a predicted image: view previously submitted images by selecting the Predictions tab; hover over an image to see the tag prediction performed by the classifier; then retrain the classifier by using the Train button.
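Beyond the Quick Test window, a trained and published iteration can be queried from code. A minimal sketch follows; the route mirrors the Custom Vision v3.0 Prediction API, while the endpoint, project id, iteration name, and key are placeholders and the helper names are our own.

```python
# Sketch: calling a published Custom Vision classifier on an image URL.
# All identifiers below are placeholders; the sample response is invented.

def build_classify_request(endpoint, project_id, iteration, key, image_url):
    """Return URL, headers, and body for a classify-by-URL prediction call."""
    url = (f"{endpoint.rstrip('/')}/customvision/v3.0/Prediction/"
           f"{project_id}/classify/iterations/{iteration}/url")
    headers = {"Prediction-Key": key, "Content-Type": "application/json"}
    body = {"Url": image_url}
    return url, headers, body

def top_tag(response_json):
    """Return the (tag, probability) pair with the highest confidence."""
    best = max(response_json["predictions"], key=lambda p: p["probability"])
    return best["tagName"], best["probability"]

sample = {"predictions": [
    {"tagName": "cat", "probability": 0.91},
    {"tagName": "dog", "probability": 0.09},
]}
```

The returned tags and probabilities correspond to the Tags and Confidence columns shown in the Quick Test window.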