DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2021

A Systematic Mapping Study on APIs Utilizing AI Technology

Dilvan Güler
Mohamed Mahdi

KTH ROYAL INSTITUTE OF TECHNOLOGY
ELECTRICAL ENGINEERING AND COMPUTER SCIENCE

Authors

Dilvan Güler, [email protected]
Information and Communication Technology

Mohamed Mahdi, [email protected]
Computer Engineering

KTH Royal Institute of Technology

Place for Project

Stockholm, Sweden

Examiner

Anders Sjögren
KTH Royal Institute of Technology

Supervisor

Mira Kajko-Mattson
KTH Royal Institute of Technology

Abstract

This thesis covers a systematic mapping of established public Application Programming Interfaces (APIs) that employ Artificial Intelligence (AI) technology. The motivation is the current lack of systematic maps of AI APIs; this thesis therefore aims to increase insight into the area by creating such a mapping study. The goal is to provide both a basis for research and an aid for the general developer who uses AI APIs.

The systematic mapping of the AI APIs is conducted by examining the available information about the APIs and, in iterations, classifying them into categories that are presented in tables. The analysis and discussion are based on the results of the study, namely the phases, the iterations, the result tables and the final systematic map. Additionally, the validity threats of the study are analyzed.

The evaluation of each API in this study was done in cycles, categorizing each AI API into a category included in the final result, the systematic map. The result is useful for the target group of this study, researchers and developers, as it aids them in finding the right API to use in their work. This work will therefore help future developers and researchers, since the thesis provides relevant information for the present-day development practice of employing AI in web interfaces.

Keywords

AI, Artificial Intelligence, API, Systematic Mapping, Web development

Summary

This degree project comprises a systematic mapping of established public APIs that make use of AI technology. Since systematic maps of AI APIs have largely been missing, this thesis aims to increase insight into the area by creating this mapping study. The goal of the systematic mapping is to build a foundation for future research in this area, and to simplify the process of finding AI APIs for the general developer who uses them.

The systematic mapping of the AI APIs is conducted by examining the available documentation and information about the APIs. The AI APIs were then classified into categories and presented in tables. The analysis and discussion of the study were based on its results, namely the phases and iterations in which the AI APIs were categorized, the result tables, and the final systematic map of the AI APIs. In addition, an analysis of the threats to the validity of the study was made.

The evaluation of each API in this study was done in cycles, by categorizing each AI API into a category that was then included in the result, which is a systematic map. The result of this degree project has proven useful for the target group, researchers and developers, as it helps them find the right API to use in their work.

Keywords

AI, Artificial Intelligence, API, Systematic Mapping, Web Development

Acknowledgements

Special thanks to Associate Professor Mira Kajko-Mattson at KTH, who has been our supervisor throughout this bachelor thesis, provided very useful tools, and has been extremely engaged in the writing process of the project.

We also want to thank our examiner, Anders Sjögren, with whom we have been able to brainstorm ideas around the project, which has been helpful.

Stockholm, March 2021
Dilvan Güler and Mohamed Mahdi

Acronyms

AI Artificial Intelligence
API Application Programming Interface
DAIS Distributed Artificial Intelligence Systems
RISE Research Institutes of Sweden
ML Machine Learning
NNs Neural Networks
CNNs Convolutional Neural Networks
DL Deep Learning
HTTP Hypertext Transfer Protocol
REST Representational State Transfer
JSON JavaScript Object Notation
URL Uniform Resource Locator

Contents

1 Introduction
1.1 Background
1.2 Problem
1.3 Purpose
1.4 Goal
1.5 Research Methodology
1.6 Commissioned Work
1.7 Target Audience
1.8 Delimitations
1.9 Benefits, Ethics and Sustainability
1.10 Outline

2 Theoretical Background
2.1 General Overview of AI APIs
2.2 Artificial Intelligence - An Unsupervised Approach
2.3 Application Programming Interface - AI for the General Developer
2.4 Systematic Mapping Study
2.5 Similar Work

3 Research Methodology
3.1 Research Strategy
3.2 The Four Research Phases
3.3 Research Methodologies
3.4 Research Instruments
3.5 Validity Threats
3.6 Ethical Requirements

4 Preparatory Work
4.1 Overview of the Systematic Mapping Method
4.2 Grouping Criteria
4.3 Sources

5 Mapping conducted in Iterations
5.1 The Selected Collection of APIs
5.2 First Iteration - Priced API or Free API
5.3 Second Iteration - HTTP Methods
5.4 Third Iteration - Domains
5.5 Fourth Iteration - Main Tasks
5.6 Final Iteration - The Final Systematic Map

6 Analysis and Discussion
6.1 General Analysis
6.2 Research Phase Analysis
6.3 Iteration Analysis
6.4 Validity Threat Analysis
6.5 Final Systematic Map Analysis

7 Conclusions
7.1 Conclusions
7.2 Future work

References

Chapter 1

Introduction

As Artificial Intelligence (AI) technology evolves and becomes increasingly integrated into everyday life, making the technology accessible to developers becomes increasingly important. The term AI is used when a machine has the capability to learn in the way a human brain would. The technology is used in many fields today because of its ability to provide user-tailored experiences for every individual user by predicting their needs.

The spread of AI technology has also changed interface design: adding small AI functionalities has become more of a standard in recent years. This creates more challenges when trying to please users and keep up with industry standards. Developers are the architects of this field and need to meet user requirements, which is difficult given the complexity of the field, a complexity that grows further since users have different goals[13]; every user goal is different because no two people are the same. As web APIs have evolved over the last few years, utilizing AI through web APIs has become convenient for developers, since it allows them to add large features with minimal effort.

Web APIs provide access and functionality through a remote source; this way an application can incorporate multiple services in an efficient way. Additionally, implementing an API can in some cases be viewed as a form of outsourcing, since the API runs in a remote setting, far away from where the source code of the website lies, while still performing a complex task that adds value to the website. An example is the Google Maps API, which allows a website to contain valuable functionality (the dynamic map and its content) without forcing the web developer to

develop the dynamic map itself. Instead, the developer is only required to call the API with an HTTP method.
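
To make the call pattern concrete, the following minimal sketch (in Python, using the requests library) shows what such a call could look like. The endpoint URL, path and parameters are hypothetical placeholders for illustration, not the actual Google Maps API.

    # A minimal sketch of consuming a remote web API with a single HTTP
    # method instead of implementing the functionality locally.
    # The URL and parameters below are invented placeholders.
    import requests

    response = requests.get(
        "https://api.example.com/v1/map",  # hypothetical remote service
        params={"center": "59.3293,18.0686", "zoom": 12, "key": "YOUR_API_KEY"},
        timeout=10,
    )
    response.raise_for_status()  # fail loudly on HTTP errors
    print(response.json())       # the remote service returns its result as JSON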

1.1 Background

When deep learning is added to machine learning, the machine becomes able to mimic a human brain, which gives it the ability to recognize patterns. Once this has been achieved, tasks such as recognizing images and speech become possible[3].

In a study conducted in 2019, researchers realised that many users visited a university website with the purpose of asking for information[22]. This became a problem due to the lack of manpower behind the website, which made answering these questions a tedious task. An AI chatbot, developed to answer the users' questions at any time, became their rescue, and is an example of the benefits that integrating AI into a user-facing website can bring. Small features that utilise AI capabilities can lead to a better user experience. The client who orders a website is also happier when development time is cut short thanks to integrating AI to aid development.

Creating AI is a hard task, which is why many companies and freelance developers see a market in releasing APIs for their completed AI solutions to make a profit. This makes it possible for developers to reuse pre-existing models in their projects and avoid costs; however, not knowing what AI API options are available on the market can delay a project. Creating a categorised collection of AI APIs would speed up a future project trying to add, for example, a chatbot to a website like the university one mentioned earlier. This collection would take the form of a systematic map.

1.2 Problem

A substantial amount of time is spent furthering the progress of AI APIs every year, but not nearly as much time, if any, is spent on organizing and bringing structure to the field. The present contributions to this field are therefore scattered, unorganized or missing, leaving developers unable to find the APIs made for them to use.


This leads to the research question of this thesis:

“How can we systematically organize AI APIs, in order to simplify the process of discovering them for both the users of the APIs and researchers in the field by providing a present time systematic map?”

1.3 Purpose

The purpose of this thesis is to examine and understand the functions of the different AI APIs that are available on the market today. This is done by categorizing the APIs and creating a systematic map of the examined APIs, easing the work of the target audience, developers.

1.4 Goal

The goal of this thesis can be divided into two subgoals, where the first is to discover, examine and categorize the current AI APIs that represent the present time.

The second goal is to bring structure and organization to the field of AI APIs, which has so far been missing. This takes the form of a systematic map that benefits developers and researchers by providing information about all the AI APIs in an accessible manner.

1.5 Research Methodology

The research method for this thesis starts with a phase of studying the literature on AI technology and on using APIs in development. The goal of this first phase is to gain a solid foundation of knowledge in both subjects. This is done by first reading fundamental articles about AI and APIs, then more complex research articles, and last the available technical documentation from corporations that provide AI API services.

The second phase is the searching phase, during which AI APIs are searched for in order to create an uncategorized list.


The third phase is the categorizing phase, during which the AI APIs are categorized in iterations. This follows an agile process, repeating iterations until there are no more categorizations to make. This phase of the research is therefore not linear but circular.

The last phase, the fourth phase, is the construction of the systematic map. During this phase the final systematic map is built, finished and presented.

1.6 Commissioned Work

When this study was initiated, there was a partnership with Research Institutes of Sweden (RISE) to conduct the study while creating the website for the Distributed Artificial Intelligence Systems (DAIS) project by RISE[2]. The DAIS project is a cooperation between 11 different EU countries, including 47 partners. However, by the time the website for the DAIS project had been created, the scope and limitations of the study had changed to a systematic mapping study, and the website no longer served a purpose for the study.

1.7 Target Audience

The target audience for this thesis is anyone who wants to learn about AI APIs in order to incorporate them in their own project and/or see a comparison amongst them. Since the AI APIs researched in this thesis are already developed and commercially available, the thesis primarily targets web developers who need such a comparison, and others who are considering which AI API to use for their website. Additionally, it targets researchers developing AI APIs, so that they can see the current situation on the AI API market.

1.8 Delimitations

This thesis is limited to current, already developed APIs; the scope covers only APIs available at the present time. Developing a new API is outside the scope.

A second limitation, which follows from the APIs used in this research being

publicly available on the market, is that we do not have access to their complete source code. The source code is therefore not up for evaluation.

1.9 Benefits, Ethics and Sustainability

The created systematic map will act as a simplifier when a developer wants to decide on an AI API to use in a project. By presenting similar APIs that provide the same feature, it is believed that a comparison can be made for an optimal decision. Additionally, the project is sustainable in the sense that future developers can continuously contribute to it and extend the systematic map.

In order to steer the research towards quality, ethical principles have been introduced. In particular, the following four ethical principles have been followed during this study in order to assure that the ethical requirements are met: (1) the information requirement, (2) the consent requirement, (3) the confidentiality requirement and (4) the utilization requirement[31]. Should any of these ethical requirements not be followed, the study would not be considered ethical.

Using AI is, from an ethical standpoint, very relevant since it opens up a world of accessibility for users with disabilities. In particular, using AI to provide features like text-to-speech is beneficial since it includes vision-impaired and illiterate users of a website. The development of the product contributes to the United Nations' 17 Sustainable Development Goals, mostly the 10th goal, "Reduce inequality within and among countries"[7], since using AI in web applications has the potential to help users with disabilities.

A study conducted in 2019[30] predicted dangers of AI such as its contribution to robots advanced enough to carry explosives in a war, as well as software with the power to influence, for example, stock markets and courts by learning and showing signs of creativity, to the point that Google's AI could create a child AI of its own. According to this study[30] on understanding the dangers of AI, the conclusion was that the predicted dangers of AI are in fact real.

Thus, we can further develop the ethical requirements based on this study while considering the future dangers of AI. Since the development of AI APIs is not the

simplest of tasks, an AI API can easily become biased. Making an effort to choose unbiased AIs is therefore a priority in this study. Developers must build AI APIs consciously, with the ethics of AI in mind, considering the possibilities of them becoming biased, racist, controlling, and so on.

With this in mind, as the researchers of this study, we must consider the dangers of the AI APIs and read their documentation carefully, in order to consciously choose AI APIs that are as unproblematic and safe as possible for the systematic mapping study.

1.10 Outline

The subsequent chapters of this thesis are outlined in the following manner:

• Chapter 2: Theoretical Background: This chapter presents the theoretical and practical background needed to understand the remainder of the thesis.

• Chapter 3: Research Methodology: In this chapter the research methodology is described by presenting the type of research and its strategies, phases and instruments. This chapter also tackles possible threats to the validity of the research results.

• Chapter 4: Preparatory Work: This chapter presents the preparatory work for the study, namely the preparations that were made before starting the iterative process explained in Chapter 5.

• Chapter 5: Mapping conducted in Iterations: This chapter presents the results, in the form of tables, of the iterative processes that were conducted in order for the final systematic map to be created as well as the final systematic map.

• Chapter 6: Analysis and Discussion: This chapter compiles, analyzes, and discusses the results of the thesis. It also explains how the validity threats discussed in Chapter 3 were addressed.

• Chapter 7: Conclusions and Future work: In this chapter the results obtained from the research are discussed, conclusions are formed and propositions for future work are presented.

Chapter 2

Theoretical Background

This chapter introduces the fundamentals of Artificial Intelligence and Application Programming Interfaces, and establishes the fundamental knowledge of systematic mapping methods. Section 2.1 provides a general overview of the field of AI APIs. Section 2.2 dives into the field of Artificial Intelligence and the evolution of AI. Section 2.3 introduces APIs at a deeper level, together with the Hypertext Transfer Protocol (HTTP) methods. Section 2.4 explains the typical way to conduct a systematic mapping study. Section 2.5 summarizes a similar study, a systematic mapping study on AIOps.

2.1 General Overview of AI APIs

With Alexa, Siri and Google Assistant being among the world's most popular AI applications at the present time, the phrases "Alexa!", "Hi Siri!" and "Hey Google!" followed by a request are not unfamiliar to most people. AI is a term describing scientists' attempts to recreate a human brain in a machine, giving it features of the human mind. The world has come a long way, to the point where any person can casually talk to a machine and get a response that sounds human-like, or ride in a car that drives from point A to B without any assistance. Behind these machines are researchers who spend years creating what we interact with today.

For the future, it would be great if any person, with any knowledge background, were able to freely create something with this machine brain. To enable that, the creators need to provide some type of service that is available to the public, usually an


API. An API allows people to access something that is provided in a different location; simply put, it acts as a bridge between a user and some type of service. By using available APIs that contain AI technology and implementing them in web development, one can achieve greater possibilities. Like APIs, Artificial Intelligence stands to streamline processes and make our lives and businesses that much easier, just as Alexa and Siri manage to do.

2.2 Artificial Intelligence - An Unsupervised Approach

In this section the subject of Artificial Intelligence is explained: subsection 2.2.1 gives a general overview and contrasts standard functionality with AI functionality. To further explain, an AI demo is presented in subsection 2.2.2. The history, evolution and importance of AI in future industry development are presented in subsection 2.2.3.

2.2.1 What is AI? ­ Computers Emulating Human Learning

Artificial Intelligence is a wide-ranging branch of computer science. In the 1950s the fathers of the field, Minsky and McCarthy, described artificial intelligence as "any task performed by a machine that would have previously been considered to require human intelligence"[1]. AI is thus the use of computational methods to improve performance or to make accurate predictions [17], where accuracy is measured by how well the chosen actions reflect the correct ones [14].

Playing board games, recognizing speech, translating text between languages and identifying objects in images are some of the outcomes of AI. The term AI can also be seen as a science intertwining many technologies. These technologies are considered subsets of AI: Machine Learning (ML), Neural Networks (NNs), Convolutional Neural Networks (CNNs) and Deep Learning (DL), as seen in Figure 2.2.1.

As mentioned previously, AI is the science of how computers emulate humans. To make this possible it utilises the subsets. Take ML as an example: it is the method by which machines learn from data. In ML there are two consistent elements; there should

be an input and an output. With a medical image as the input, the output would be the corresponding diagnosis correlated with the image. ML also includes a learning phase, where the model learns from information to make future predictions more accurate [28].
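
As a small, hedged illustration of this input/output pattern (not an example taken from the thesis itself), the following Python sketch uses the scikit-learn library: a model is trained on labeled inputs and outputs during the learning phase, and then predicts outputs for new inputs.

    # A minimal sketch of the ML input/output pattern described above,
    # using scikit-learn's built-in digits dataset as a stand-in for any
    # labeled data (e.g. medical images paired with diagnoses).
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()                     # inputs: 8x8 images; outputs: digit labels
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, random_state=0
    )

    model = LogisticRegression(max_iter=2000)  # the learning phase
    model.fit(X_train, y_train)                # learn the input -> output mapping

    print(model.predict(X_test[:1]))           # predicted output for a new input
    print(model.score(X_test, y_test))         # accuracy on unseen data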

Figure 2.2.1: Showcasing the levels of AI.[28]


2.2.2 AI Demo

Due to the complexity of the concept of AI, words alone might not be adequate to describe the field. Instead, showcasing a demo version of an AI API can provide a visual and practical explanation that makes the subject easier to grasp.

The provided AI API demo lets the user input a desired text, which is automatically converted to speech by the AI. As seen in Figure 2.2.2, the user can specify the language of the speech as well as the pitch and speed of the voice. Additionally, the demo shows how the JavaScript Object Notation (JSON) object for the request body is structured, and the request Uniform Resource Locator (URL).[8]
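
For illustration, the request body for a text-to-speech call of this kind is a JSON object of roughly the following shape. The field names follow Google Cloud's public Text-to-Speech documentation at the time of writing, but should be read as an example rather than an authoritative reference; the request is sent with the HTTP POST method to the request URL shown in the demo.

    {
      "input": { "text": "Hello world" },
      "voice": { "languageCode": "en-US", "name": "en-US-Wavenet-D" },
      "audioConfig": {
        "audioEncoding": "MP3",
        "pitch": 0.0,
        "speakingRate": 1.0
      }
    }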

Figure 2.2.2: Showcasing the demo of the AI API 'Text-to-Speech' by Google Cloud. Link to demo: [8]


2.2.3 The Evolution of AI

The field of AI has its origin in the 1950s and has evolved steadily thanks to researchers' interest in exploring the subject. The last decade has provided major breakthroughs in the field, but they only act as the beginning. The next step is the transition from Silicon Valley headlines to everyday technology; it will take decades of work by both researchers and developers to transform their industries and implement the new technology.

The year 2019 was a major year for the industry, since much attention was focused on AI. The few years leading up to 2019, as well as the years after, have expedited AI evolution, knocking down barriers to a wide range of products, services, resources and good practices. There has been a major shift in focus from AI itself to the impact that AI can bring to business; with all this available to the general public, the question has changed from how AI works to what it can bring to you. The next stage is called the age of deployed AI, where deployment means integration into existing production. The problems that can potentially be solved will require a shared vision between engineers. This age of deployed AI will be about identifying problems in industries and the kind of data the solutions might require. In other words, deployed AI will bring dramatic automation to industries with the help of not only technical but also non-technical individuals [18].

2.3 Application Programming Interface - AI for the General Developer

In this section the Application Programming Interface is explained. Subsection 2.3.1 presents a general overview and conceptual knowledge of APIs. Subsection 2.3.2 presents the HTTP protocol and its most common methods, which are often utilized in APIs. Subsection 2.3.3 introduces the concept of microservices. Subsection 2.3.4 presents the evolution and history of APIs.

2.3.1 What is an API?

An API, which is the acronym for Application Programming Interface, is a software intermediary that allows multiple applications to interact with each other [6].


The purpose of an API is to be accessible, making it possible to integrate it into many different applications or websites. What these applications are developed in only matters insofar as it must be possible to send data to and receive data from the API.

Web APIs are also used as a key inter-connectivity mechanism for accessing software services, in this case over the Internet [32]. Web services are purpose-built Web servers that support the needs of a site or any other application, and client programs use APIs to communicate with Web services. An API is, generally speaking, a set of data and functions that facilitates interaction between computer programs and allows them to exchange information[15].

The way APIs interconnect applications provides better services for the users, for example using the Google Maps API to display the business location closest to a user. This service is delivered to the business at a much lower cost, from both a price and a time perspective, than developing it themselves. There are different concrete styles of Web APIs, such as Representational State Transfer (REST). REST, sometimes also called RESTful, is a software architectural style that uses a subset of HTTP [5]. The REST architectural style is commonly applied to the design of APIs for modern Web services; a Web API conforming to the REST architectural style is a REST API [15]. Nonetheless, Web APIs can follow various protocols and Web service styles, but in this paper the term Web API describes any protocol unless otherwise specified.

2.3.2 HTTP Methods used in APIs

The HTTP methods are derived from the HTTP protocol, which is used to transfer data over the web. HTTP is a generic, stateless protocol and is used heavily in the construction of APIs. The protocol serves multiple purposes and is often used for its request methods, error codes and headers. HTTP is used for communication all over the World Wide Web, and many different types of data are sent through it, for example HTML files, image files and query results. HTTP methods, specifically, are often the tool used to contact the server and send a request of some sort. In this study the most common HTTP methods are described and used, since they are a vital element in the creation of APIs.[33]

The first HTTP method of relevance in the study is the GET method, which is

used to retrieve any type of data, identified by the request URL. The entity that receives the GET request processes it and returns the requested data.

The second HTTP method of relevance in the study is the POST method, which is used to encapsulate data in a request body so that it can be sent to a server.

The third HTTP method of relevance in this study is the DELETE method, which is used to ask the server to perform a delete action. A typical pattern is that a new instance is initiated with POST, while GET is used to retrieve data from that instance; the DELETE method then removes everything associated with the instance.

The fourth HTTP method of relevance to the study is the PUT method, which is similar to POST since both send data to a server. The difference is that PUT is idempotent: repeating the same PUT request always produces the same result, while repeating a POST request may create the same resource multiple times.

The fifth HTTP method of relevance to the study is not as well known to the broad audience as those mentioned earlier: the PATCH method. This method is highly similar to the PUT method, although PATCH only modifies a part of the resource.

The sixth HTTP method of relevance to the study is the HEAD method, which retrieves data from a server in the same manner as the GET method. The HEAD method and the GET method are thus identical, with one exception: HEAD requests do not return the message body in the response.

The seventh HTTP method of relevance to the study is the OPTIONS method, which returns the methods that are available on the API server. This method is often used to detect fatal errors.[34]
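
As a hedged illustration of how these methods are typically issued from client code, the following Python sketch uses the requests library against a hypothetical API server; the base URL and resource paths are invented placeholders.

    # A minimal sketch of the seven HTTP methods described above.
    # The server and paths are invented; no real API is implied.
    import requests

    BASE = "https://api.example.com/v1"

    requests.get(f"{BASE}/items/42")                        # GET: retrieve a resource
    requests.post(f"{BASE}/items", json={"name": "new"})    # POST: create, data in body
    requests.put(f"{BASE}/items/42", json={"name": "x"})    # PUT: replace (idempotent)
    requests.patch(f"{BASE}/items/42", json={"name": "y"})  # PATCH: partial update
    requests.delete(f"{BASE}/items/42")                     # DELETE: remove the resource
    requests.head(f"{BASE}/items/42")                       # HEAD: like GET, but no body
    r = requests.options(f"{BASE}/items")                   # OPTIONS: available methods
    print(r.headers.get("Allow"))                           # e.g. "GET, POST, OPTIONS"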


2.3.3 Microservices

Microservices is an architectural approach that separates an application into smaller, standalone, independent applications that can be deployed on different server instances. These servers can then talk to each other over the network, using REST APIs, to provide the functionality of the complete application. The advantages of microservices are that the applications can be deployed flexibly, since they are on different servers, and can be developed by different teams in different languages. Additionally, they can be scaled separately, meaning that if there is a spike in demand for one service, that service alone can be scaled appropriately. Figure 2.3.1 showcases a microservice architecture. For instance, if there were a spike in account creations, that service could be scaled and deployed without having to change the codebase of any other service. [11]

Figure 2.3.1: Showcasing the architecture of a microservice.[16]
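
To make the idea concrete, a minimal sketch of one such standalone service is shown below, written in Python with the Flask library; the "account" service, its routes and its in-memory store are illustrative assumptions, not part of the thesis. Each box in an architecture like Figure 2.3.1 would be a separate program of roughly this kind, reachable over the network through its REST endpoints.

    # A minimal sketch of a standalone "account" microservice.
    # Other services call it over the network (e.g. POST /accounts)
    # instead of sharing a codebase with it.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    accounts = {}  # in-memory store, standing in for a real database

    @app.route("/accounts", methods=["POST"])
    def create_account():
        data = request.get_json()
        account_id = len(accounts) + 1
        accounts[account_id] = {"id": account_id, "name": data["name"]}
        return jsonify(accounts[account_id]), 201

    @app.route("/accounts/<int:account_id>", methods=["GET"])
    def get_account(account_id):
        return jsonify(accounts.get(account_id, {}))

    if __name__ == "__main__":
        app.run(port=5001)  # each service runs on its own server instance/port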

2.3.4 Evolution of API

APIs have been a part of computer and application development historically, but the primary use for APIs was for exchange between two or more programs [24].

The evolution from using APIs solely on computer platforms to APIs on the web, nowadays referred to as web APIs, happened around the year 2000[15]. Since this point in time, APIs have been popularized by developers and researchers to the point that some argue we are now living in the API economy. The support for

this claim is the fact that society is more interconnected than before, and APIs primarily hold the power of that interconnection, connecting applications, systems and people [12].

With the increase of APIs on the market, many studies research everything from the usability of APIs to their documentation. Some of these studies are systematic mappings, but instead of focusing on the APIs themselves, they map previous studies about different proposals for how these matters can be solved.[27][21]

2.4 Systematic Mapping Study

A study[23] explaining how a systematic mapping should be conducted mentions a few essential steps: "definition of research questions, conducting the search for relevant papers, screening of papers, keywording of abstracts and data extraction and mapping", as seen in Figure 2.4.1.

Each of the steps has an outcome, and the whole process ends with a systematic map.

Figure 2.4.1: Visual presentation of the systematic mapping process[23]

The study[23] continues to explain that after a classification scheme is organized, the relevant articles are sorted into the predefined scheme. The classification might be set up before the data extraction, but it evolves during the extraction, as new categories are discovered and existing categories are merged or split. The data extraction is usually performed in an Excel table containing each category of the classification scheme. When a piece of data is added to the scheme, a short description of why it belongs in a certain category is ideal. Much information can be extracted from the final table, for example the frequency of items in each category.
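
As a small illustration of this kind of data extraction (the entries below are invented placeholders, not items classified in this thesis), the frequency of each category can be computed directly from the extraction table:

    # A minimal sketch of computing category frequencies from an
    # extraction table; the rows are invented for the example.
    from collections import Counter

    extraction_table = [
        {"item": "ExampleVision", "category": "Image Analyser"},
        {"item": "ExampleChat",   "category": "Virtual Bots/Assistant"},
        {"item": "ExampleOCR",    "category": "Image Analyser"},
    ]

    frequencies = Counter(row["category"] for row in extraction_table)
    for category, count in frequencies.most_common():
        print(f"{category}: {count}")  # e.g. "Image Analyser: 2"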


Analyzing results such as the frequency of the collected data in each category makes it possible to see how the focus of earlier research has been distributed, and also reveals gaps that could be researched in the future. The study discussed many different formats of graphs and maps for presenting the results, and concluded that bubble graphs present the result best, since this form of graph shows frequency better than a frequency table.

2.5 Similar Work

Systematic mapping studies are very well known; they are conducted by collecting and structuring data in a field of interest. A similar study was found, titled "A Systematic Mapping Study in AIOps"[20], which researched the field of IT systems where AI is introduced to tackle modern IT administration. That study mentions that the contributions to its field are scattered, unorganized and missing, leading to discoveries in these technologies being difficult and impractical. The study provided an in-depth mapping by collecting and organizing all AIOps contributions developed at the time. The goal of the work was to provide a foundation for future contributions while also enabling efficient comparison for future papers treating similar problems. The collected AIOps contributions were categorized by the choice of algorithm, data sources and the target component[20].

Chapter 3

Research Methodology

This chapter presents the research methodology that was planned in order to reach the study goal: to systematically organize AI APIs, simplifying the process of discovering them for both the users of the APIs and researchers in the field, by providing a present-time systematic map. Section 3.1 determines the adequate research strategy for the study. Section 3.2 presents the four research phases that were planned in order to give the conduct of the study structure. Section 3.3 presents the research methodologies and section 3.4 the research instruments of the study. Section 3.5 presents the validity threats and section 3.6 the ethical requirements of the study.

3.1 Research Strategy

Today's market is focused on the research subject, AI, which makes this thesis highly relevant and the gathered information more valuable. As mentioned in earlier chapters, this positive aspect brings one difficulty: the novelty of AI drives researchers to improve and research the field further in search of new state-of-the-art technologies. This research instead opts to evaluate what is already developed and provided; the research strategy therefore differs from what is usual for this field.

The chosen research strategy is visually presented in Figure 3.1.1. As seen in Figure


3.1.1, the five key parts of the chosen strategy are as follows: (1) choice of methods, (2) research phases, (3) validity, (4) ethics, (5) research tools. These five key parts are defined more deeply later in this chapter.

Figure 3.1.1: Visual presentation of the project research strategy.

3.2 The Four Research Phases

This section describes the four research phases conducted during the study, seen in Figure 3.2.1; they are influenced and inspired by Section 2.4 on how to conduct a systematic mapping study.

The stages are as follows: 3.2.1 First Phase - Pre-Study describes and explains the literature study conducted in order to be able to continue the study. 3.2.2 Second Phase - Preliminary work of finding APIs describes and explains the search for APIs. 3.2.3 Third Phase - Iterative work and criteria making explains the iterative process of making criteria and categorising. 3.2.4 Fourth Phase - Finalization of Systematic Map explains the final phase, in which the final product is acquired.

3.2.1 First Phase - The preparatory literature study

The research started with a preparatory literature study divided into three parts, so-called sub-phases: (1) Study of Systematic Mapping, (2) Study of AI and APIs, (3) Study of creating criteria for categorisation, as seen in Figure 3.2.2.


Figure 3.2.1: Visual presentation of the project research phases.

Figure 3.2.2: Visual presentation of the project sub­phases.

Starting with the first sub-phase, the study of systematic mapping, the aim was to gain clarity about what needed to be done in order to reach the end goal of the thesis.

The second sub-phase was the study of AI and APIs, conducted in order to fully understand the different key characteristics of APIs and AI.

When the knowledge about systematic mapping, AI and APIs was clear, the next step in line, as seen in Figure 3.2.2, was the third sub-phase. The third sub-phase, 'Study of creating criteria for categorizing', was entered last in order to build on the already gained knowledge and insight about the whole picture, the systematic mapping, as well as about AI and APIs, which is what determines the different categories later on.

3.2.2 Second Phase - Preliminary work of finding APIs

This part of the study is the second phase, where the aim is to find APIs, as seen in Figure 3.2.3.

Figure 3.2.3: Visual presentation of the project research phases.


The search for AI APIs needs to follow a methodological process, namely the steps: (1) searching for API sources, (2) determining the reliability of the API sources, (3) choosing sources based on reliability.

When the sources are established and ready to use, the next methodological process is used to determine which AI APIs will be included in this study. The criteria for this process are as follows: (1) the API comes from a reliable source, (2) the source provides clear documentation.

If a source provides APIs made by freelancing developers, these can still be used in the study, since many API provider websites have proven to be reliable. In that case, there needs to be documented feedback from users, in the form of reviews, that shows the usability of the API and/or the chosen website.

3.2.3 Third Phase - Iterative work and criteria making

The third phase is the iterative work phase, where the categories and criteria are created, as seen in Figure 3.2.4. In this phase, the categories and their corresponding criteria are first created, and the APIs are then matched to their corresponding categories following the criteria. The categories are derived by examining all available characteristic attributes of all APIs to create possible categories and their corresponding criteria. This process is then repeated by creating another set of categories with corresponding criteria and matching the APIs to their categories again.

This process is iterated until satisfaction is achieved, and the phase is therefore called iterative work and criteria making.

Figure 3.2.4: Visual presentation of the project research phases.


3.2.4 Fourth Phase - Finalization of Systematic Map

To be able to take something from the conducted work, a systematic map is created. A systematic mapping does not aim to answer a specific question; instead, the goal is to provide a description and a catalogue of a specific topic. On top of that, the information gathered about each API is also included. This decreases the knowledge gap in the chosen subject.

After the fourth phase is finalized, the next step is to reiterate from the first phase, shown by the arrow in Figure 3.2.5. This makes the thesis adhere to the MoSCoW prioritization technique by planning for time constraints.[19]

Figure 3.2.5: Visual presentation of the project research phases.

3.3 Research Methodologies

In this section the chosen research methodologies are presented. Subsection 3.3.1 presents and explains the qualitative research method and the reasoning for its suitability for this study. Subsection 3.3.2 presents and explains the exploratory research method and the reasoning for its suitability for this study.

3.3.1 Qualitative Research Method

One of the chosen research methods for the study is the qualitative research approach, since the data collection is done using qualitative methods. The qualitative research approach relies on gaining in-depth knowledge within the field, with the purpose of providing new knowledge within the domain, interpreted from previously existing knowledge. Qualitative research is interpretive and focuses on gaining in-depth knowledge within an unexplored and unstructured domain. Mapping of AI APIs is almost nonexistent, which makes the qualitative model a perfect fit

for this study.

3.3.2 Exploratory Research Method

The exploratory research method is often explained as "the preliminary research to clarify the exact nature of the problem to be solved". Since this study needs exact clarification of the problem and the proper approach to solving it, the exploratory research method was chosen.

Additionally, a part of the exploratory research is to conduct secondary research, the method of reviewing available literature and/or data, which is done in the first phase, the Pre-Study.

Using exploratory research allows the researchers to be creative in order to gain the greatest amount of insight into the subject.

3.4 Research Instruments

In this section the research instruments of the study are presented. In order to assure that the research instruments were used correctly, the following six points were followed: the instrument must be (1) valid and reliable, (2) based on a conceptual framework, or the researcher's understanding of how the particular variables in the study connect with each other, (3) able to gather data suitable for and relevant to the research topic, (4) able to test hypotheses and/or answer the proposed research questions under investigation, (5) free of bias and appropriate for the context, culture, and diversity of the study site, and (6) supplied with clear and definite instructions for its use.

In subsections 3.4.1 and 3.4.2, the literature study and the API documentation, which are used as research instruments, are presented.


3.4.1 Literature Study

A literature study of available data and literature has been chosen as one of several research instruments for this study, due to its significance for collecting already available data. The fields in which knowledge is needed in order to conduct the study are many: AI, APIs, systematic mapping, and the process of creating criteria to categorize a large number of items. Following the six points of valid research instruments by Columbia University[29], the literature used in the study has to come from a reliable source, be validated research, and be relevant, suitable, free of bias and appropriate for the context, culture and diversity of the study.

3.4.2 API documentations

The API documentation is of high importance for the study and a powerful research instrument, since most of the criteria and categorisation are derived directly from information acquired from the API documentation. Following the six points for valid research instruments by Columbia University[29], the API documentation needs to be valid, reliable, relevant, suitable, free of bias and appropriate for the context, culture and diversity of the study, and lastly it needs to contain clear and definite instructions.

3.5 Validity Threats

As mentioned in subsection 3.3.1 Qualitative Research Method, this study mainly follows a qualitative research strategy. Therefore, four main standard criteria are considered in order to be certain that the study is valid, or more specifically, that there are no threats to the validity of the study[31]. These four criteria are (1) credibility, (2) transferability, (3) dependability and (4) confirmability. Subsection 3.5.1 presents the credibility threat, 3.5.2 the transferability threat, 3.5.3 the dependability threat and, last, 3.5.4 the confirmability threat.

Hence, should one or more of the four criteria fail, the study, its results and/or its conclusions would be considered invalid.


3.5.1 Credibility

The first criterion among the validity threats is credibility, which ties the concept of credibility to the justification of reliable research findings. The goal of this criterion is to evaluate whether the findings and results of the study make sense and are valid. Should there be any incoherent findings, the credibility of the study would be questioned.

Since this study is based on already made systems, in some cases APIs made by freelancing developers, this criterion poses a serious validity threat to the study, as there are no independent evaluators who test the credibility of the results.

3.5.2 Transferability

The second criterion among the validity threats is transferability, which concerns the degree to which the results can be generalized, and in which alternative ways they can be generalized.

Since the study goal is to create a systematic map of AI APIs to be used by developers in the field with different backgrounds, the research results and conclusions are of high transferability.

3.5.3 Dependability

The third criterion among the validity threats is dependability, which describes how repeatable the findings of the study are, to what degree, and how stable they are in a greater perspective or over a long period of time. This validity threat is relevant for this study. On a general level the concepts are repeatable over a long period of time, but the specific implementation may be inconvenient to reproduce in the future, since technical advancements tend to bring new standards and tools that may be incompatible with a specific implementation.

3.5.4 Confirmability

The fourth and last criterion among the validity threats is confirmability, meaning that there is a requirement to explain whether the study includes elements

that could have been affected by bias introduced by the authors or researchers. This criterion has been introduced to support neutrality regarding the findings, as well as the overall validity of the study. By publicly sharing the methodologies and processes used to collect, evaluate and draw conclusions from the data, one can ensure that the confirmability requirement is fulfilled. Regrettably, no independent entity or party was introduced to confirm or evaluate the findings of this study, which makes confirmability the most significant validity threat to this study.

3.6 Ethical Requirements

In order to achieve high research quality, ethical principles were introduced. In particular, the following four ethical principles were followed in order to assure that the ethical requirements are met: (1) the information requirement, (2) the consent requirement, (3) the confidentiality requirement and (4) the utilization requirement[31].

As follows, subsection 3.6.1 presents the information requirement, 3.6.2 the consent requirement, 3.6.3 the confidentiality requirement and, last, 3.6.4 the utilization requirement.

Should any of the ethical requirements not be followed, the study would not be considered ethical.

3.6.1 Information Requirement

The first criterion among the ethical requirements is the information requirement, meaning that the researchers should clearly inform all participants about the study, its purpose, and the terms and conditions under which they participate. This criterion has little impact on this study, since there are no participants or commissioned parties to consider.

3.6.2 Consent Requirement

The second criterion among the ethical requirements is the consent requirement, meaning that the participants of the study have the ability and control to choose their own participation during the study. This criterion has little impact on this study, since

there are no participants or commissioned parties to consider.

3.6.3 Confidentiality Requirement

The third criterion among the ethical requirements is the confidentiality requirement, meaning that all participating entities in the study must be given as much confidentiality as possible, including how personal data is stored and handled in order to protect it. This criterion has little impact on this study, since there are no participants or commissioned parties to consider.

3.6.4 Utilization Requirement

The fourth and last criterion among the ethical requirements is the utilization requirement, meaning that the participants should be protected and that the collected data may only be used for the purpose of the study. This criterion has little impact on this study, since there are no participants, personal data or commissioned parties to consider.

Chapter 4

Preparatory Work

In this chapter the preparatory work of the study, the preliminary surveying and research for the thesis, is explained. Section 4.1 presents an overview of the methodology used to create the systematic mapping. Section 4.2 explains the methodology used to make criteria for grouping the AI APIs in order to turn the collection into a systematic map. Last, the sources, and the methodology for choosing both sources and AI APIs for the study, are presented in section 4.3.

4.1 Overview of the Systematic Mapping Method

This section presents an overview of the process used to create the systematic mapping. Subsection 4.1.1 presents an overview of the iterations, 4.1.2 presents an overview of the sources, and, last, 4.1.3 presents an overview of the grouping criteria.

4.1.1 Overview of Iterations

The thesis goal is to create a systematic map of AI APIs. In order to create it systematically, the method was divided into iterations as follows: (1) First Iteration, (2) Second Iteration, (3) Third Iteration, (4) Fourth Iteration, (5) Final Iteration. The iterations, as well as the mapping process, are visually presented in Figure 4.1.1.


Figure 4.1.1: Overview of all iterations in the study.

4.1.2 Overview of Sources

Finding reliable sources for the AI APIs of the study is of great importance in order to strengthen the validity of the study. The main sources for the APIs were RapidAPI[25], the IBM Cloud[10] and the Google Cloud Platform[9]. Although RapidAPI is a marketplace where freelance developers can present their APIs, the thorough documentation and reviews on the website made this source reliable. IBM and Google have been on the market for decades and have proven themselves reliable over time, and were therefore also chosen.

4.1.3 Overview of Grouping criteria

After the phases were set and the sources for the AI APIs had been chosen, the third and last preparatory step was setting the criteria for categorizing the AI APIs. First, the domains were searched for and found, and each domain definition was made into criteria that the APIs would need to fulfill. The definition of each domain is partly based on articles[4] describing domains and partly on the authors' own understanding. Since this category seemed too broad at the time, the categorizations were then narrowed. The first grouping criterion chosen was based on whether the user had to pay for the API or whether it was free to use. The criteria had to be made as specific as possible in order for the categorization to be precise, and therefore every criterion of every category is very specific.

4.2 Grouping Criteria

In this section the grouping criteria are explained in further detail.


Subsection 4.2.1 explains the first category, used in the first iteration. Subsection 4.2.2 explains the second category, used in the second iteration. Subsection 4.2.3 explains the third category, used in the third iteration. Subsection 4.2.4 explains the fourth category, used in the fourth iteration.

4.2.1 Criteria for the First Iteration

The first category determined whether the users of an AI API have to pay for it or not. During this iteration, the definition of "Free API" was that no card numbers or payment methods need to be handed out in order to access the full API. Naturally, the "Priced API" criterion was defined by whether a payment method is necessary in order to access the full API. Any freebies are thereby included under the Priced API criterion, since these ultimately require a payment method. To determine which category an API fits into, the pricing description of every API had to be examined. The motivation for categorizing by Priced API and Free API is to enable filtering for the kind of API that is wanted. Table 4.2.1 below presents a visual representation of the criteria for the first iteration.

Table 4.2.1: Criteria for the first iteration

4.2.2 Criteria for the Second Iteration

The second category determined which HTTP methods the AI APIs use; this was considered important information for the developer looking for an API. The criteria for this category were chosen in a simpler manner than in the previous subsection 4.2.1, since it is straightforward to determine which HTTP methods an API utilizes by reading its documentation.

The determined categories for the second iteration were POST, GET, PUT, PATCH and DELETE. Some HTTP methods were left out since none of the examined APIs provided them. The motivation for categorizing by HTTP methods is faster discovery

as well as a better understanding on first encounter with the API. The criteria for the methods are visually represented in Table 4.2.2 below.

Table 4.2.2: Criteria for the second iteration


4.2.3 Criteria for the Third Iteration

The primary categories, domains, chosen for the third iteration are the following: Automotive, Economy/Business, Environment, Education/Learning, News/Information, Research, Healthcare, Technology, Retail, Art/Media and Other. The domains are presented in Table 4.2.3 below. These criteria are described in detail in order to make the categorization and decision making more reliable and consistent.

Table 4.2.3: Criteria for the third iteration


4.2.4 Criteria for the Fourth Iteration

The fourth and last category in this study determined the main task performed by the AI API, that is, the sole purpose the AI API was developed to serve. For this reason, some of the most common tasks that AI APIs aim to serve were chosen. These tasks are: Virtual Bots/Assistant, Image Analyser, Algorithms/Prediction, Text Analyser, Scrapers, NLP, Storage, Monitoring, Converter, Text Generator and Translators. The criterion for these was that either the description of the AI API or what could be gleaned from available source code should determine which category the API falls under. The criteria for the fourth iteration are visually presented in Table 4.2.4.

Table 4.2.4: Criteria for the fourth iteration


4.3 Sources

It is important to find reliable sources for the chosen AI APIs when forming the systematic mapping, in order to ease the process of finding relevant AI APIs for the developer. Therefore an important part of the AI API mapping is that the actual AI APIs and their documentation come from reliable sources.

Subsection 4.3.1 explains the first source, RapidAPI, from which most of the used APIs were retrieved. Subsection 4.3.2 presents the reasoning for retrieving APIs from the IBM Cloud, and 4.3.3 presents the reasoning for retrieving APIs from the Google Cloud Platform.

4.3.1 RapidAPI

The first chosen source of APIs used to create the systematic map was RapidAPI [25]. The first reason for choosing RapidAPI was that it provides its users with many APIs; although there was no proper categorization of the AI APIs, there was proper documentation.

The second reason for choosing RapidAPI was the extensive amount of user feedback on different websites, such as Trustpilot [26]. The majority of the users on Trustpilot [26] recommend RapidAPI to other users and companies, and RapidAPI has received a mean rating of 4.1 out of 5 from its users, which is presented as "4.1 out of 5 stars".

The last reason for choosing RapidAPI was that its users can read and write reviews on the APIs that they have used. This is crucial, due to the fact that RapidAPI lets developers publish their own created APIs on the website, and there could be major bugs that make an API useless for the developer whose process this study aims to ease. Since the chosen AI APIs would not explicitly be evaluated and analyzed, it was of great importance that there was some information on them from previous users confirming the reliability of the APIs chosen in this study.

4.3.2 The IBM Cloud

The second source of APIs used in the creation of the systematic map was the IBM Cloud [10], where IBM collects their own APIs. The IBM Cloud stores its APIs in a similar manner to RapidAPI, with the difference that more effort is put into the documentation and explanations of the AI APIs.

4.3.3 The Google Cloud Platform

The last source of APIs used in this study was the Google Cloud Platform [9], where the collection of APIs is owned by Google. The Google Cloud Platform stores its APIs in a similar manner to IBM, but with even more extensive documentation and explanations.

Chapter 5

Mapping conducted in Iterations

In this chapter the multiple iterations that were done to form a final mapping result of all the APIs are presented. Section 5.1 begins with presenting all found AI APIs that were going to be evaluated and categorized according to the selected criteria in the following iterations.

Section 5.2 begins the iterative process with the first iteration, explaining the criteria and presenting the results for that iteration. Sections 5.3 - 5.5 describe the remaining iterations in the same order as their criteria were explained in section 4.2. Section 5.6 presents the final result, namely the final systematic map, compiled from all previous iterations.

5.1 The Selected Collection of APIs

First, the AI API searching phase started on RapidAPI, where specifically AI APIs were gathered through the search function on the website. The search function made it easy to find APIs that utilize AI technology. From the RapidAPI website we were able to collect 45 AI APIs, as seen in Table 5.1.1 below.

After finding all available AI APIs on the RapidAPI website, the search continued on the IBM Cloud and on the Google Cloud Platform. More specifically, 15 AI APIs were found on the IBM Cloud and 7 AI APIs on the Google Cloud Platform, as seen in Table 5.1.1 below.

All AI APIs, as well as descriptions of them, are available in Appendix A.


Table 5.1.1: All selected APIs for the study.


5.2 First iteration - Priced API or Free API

The division of the sample set of 66 APIs resulted in a roughly even distribution. The number of completely free APIs was 33, while 36 APIs were shown to require a cost or a payment method of some sort. This can be seen in Table 5.2.1 below; the criteria were presented in Table 4.2.1.

Further information regarding the APIs is presented in Appendix A.0.1.

Table 5.2.1: Results from the first iteration

5.3 Second iteration - HTTP Methods

During the second iteration phase the HTTP methods for the APIs were analysed. These methods are as follows: POST, GET, DELETE, PUT, PATCH and unknown. The results from this iteration are presented in Table 5.3.1 below. The criteria for this iteration were presented in Table 4.2.2.

Table 5.3.1: Results from the second iteration


5.4 Third iteration - Domains

During the third iteration phase the domains for the APIs were analysed according to the criteria presented in Table 4.2.3. The results from this iteration are presented in Table 5.4.1 below.

This category includes the following domains: 'Automotive', 'Economy/Business', 'Environment', 'Education/Learning', 'News/Information', 'Research', 'Healthcare', 'Technology', 'Retail', 'Art/Media' and 'Other'. The results from this iteration are all the APIs categorised by the mentioned domains, presented in Table 5.4.1 below.

Table 5.4.1: Results from the third iteration

5.5 Fourth iteration - Main Tasks

During the fourth iteration phase the main tasks that the APIs were developed to perform were analysed according to the criteria presented in Table 4.2.4.

The tasks in the tables are as follows: 'Virtual Bots/Assistant', 'Image Analyser', 'Algorithms/Prediction', 'Text Analyser', 'Scrapers', 'NLP', 'Storage', 'Monitoring', 'Converter', 'Text Generator' and 'Translators'. The results from this iteration are presented in Table 5.5.1 below.


Table 5.5.1: Results from the fourth iteration

5.6 Final Iteration - The Final Systematic Map

In the final iteration all previous iterations were summarized into one final systematic map including 66 public AI APIs. The table presented below, Table 5.6.1, is ordered by the domains of the APIs. The highest priority category was the domain; therefore all APIs included in a domain are listed underneath it. The other categories are specified to the right of every API, listing the 'Main Tasks', 'Payment' and HTTP Methods.

The source for Table 5.6.1 is presented in Appendix B.


Table 5.6.1: The final systematic map
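As an illustration of how the hierarchy of Table 5.6.1 can be queried programmatically, the sketch below encodes a hypothetical two-row excerpt as records (the categorizations shown are illustrative, not copied from the table) and filters by domain first, mirroring the table's ordering:

    # Hypothetical excerpt of the final systematic map as records.
    SYSTEMATIC_MAP = [
        {"name": "Healthcare Natural Language AI", "domain": "Healthcare",
         "task": "Text Analyser", "payment": "Priced API", "methods": ["POST"]},
        {"name": "SpotGarbage", "domain": "Environment",
         "task": "Image Analyser", "payment": "Free API", "methods": ["POST"]},
    ]

    def find(domain=None, task=None, payment=None):
        """Filter the map, domain first, then the remaining categories."""
        hits = SYSTEMATIC_MAP
        if domain is not None:
            hits = [a for a in hits if a["domain"] == domain]
        if task is not None:
            hits = [a for a in hits if a["task"] == task]
        if payment is not None:
            hits = [a for a in hits if a["payment"] == payment]
        return [a["name"] for a in hits]

    print(find(domain="Healthcare"))  # -> ['Healthcare Natural Language AI']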


Chapter 6

Analysis and Discussion

In this chapter the analysis and discussion of the results from the thesis are presented, starting with a general analysis and discussion of the study in section 6.1 and continuing with an analysis of the phases of the study in section 6.2. In section 6.3 the iterations are discussed and in section 6.4 the validity threats are analysed. Last, in section 6.5 the final systematic map is analysed and discussed.

6.1 General Analysis

This section describes each of the sections in chapter 6 on an introductory level. Subsection 6.1.1 analyses the four research phases on the surface. Subsection 6.1.2 re-introduces the four iterations conducted in the study. Subsection 6.1.3 is a general introduction to the validity threat analysis in section 6.4.

6.1.1 Research Phases General Analysis

The research phase was done in four stages: (1) Literature study, (2) Scavenging for APIs, (3) Iterative work and criteria making, and (4) Finalization of the systematic map. These stages were intended to occur one after another. After the first iteration, when the study reached stage 3, it became clear that earlier stages had to be polished; this pattern recurred multiple times during the study period and led to a positive outcome for the result. However, the four stages did not change from their original intention but became slightly optimized; more about this can be seen in section 6.2.


6.1.2 Iteration General Analysis

The third stage of the research phase was conducted in iterations and consisted of four stages, which are more closely explained in section 6.3: (1) Closer look into the First Iteration, (2) Closer look into the Second Iteration, (3) Closer look into the Third Iteration and (4) Closer look into the Fourth Iteration.

These subsections discuss and analyse the work that was conducted to complete the systematic map.

6.1.3 Validity General Analysis

The validity threats of this study are presented in section 3.5 and consist of credibility, transferability, dependability and confirmability. The concerns regarding each validity threat in this study are discussed in section 6.4.

6.2 Research Phase Analysis

This section discusses and analyzes the four research phases that were performed in the study. Subsection 6.2.1 analyses the literature study. Subsection 6.2.2 discusses and analyzes how the scavenging for the APIs was perceived. Subsection 6.2.3 takes a closer look at how the iterative work was conducted. Subsection 6.2.4 takes a closer look at the finalization of the systematic map.

6.2.1 First Phase - Literature Study

The first phase was the literature study; the reasoning for studying mapping methods as well as general API concepts was to ensure that a plan could be derived for the subsequent phases. The literature study was conducted before the planning of the phases in order to create the optimal plan for the study. The work was divided into iterations to optimize discovery, and in each iterative stage of the study a custom literature study was conducted, because each iteration has a different subject requiring its own preparation.

One decision made in the literature study was that the most common systematic mapping figure for the final map, which is usually a bubble graph, would not be used. It was concluded that a bubble graph would not accurately represent the study result.

6.2.2 Second Phase - Scavenging for AI APIs

The second phase was directed at collecting material and data for the study; the majority of this consisted of collecting APIs that correlated with the study. The number of APIs collected was 66, and it could surely be increased. There is no claim that the APIs in this study are all available APIs; the intention is only to provide a mapping of as many APIs in the field of AI as possible. New AI APIs are being developed, and more APIs already exist but are harder to find on the web. Future workers could contribute to the work and extend it with more discovered APIs.

The requirement was that the API had to include AI in some form. We had to be wary of misleading information, since we only read about each API through its documentation and trusted that AI was actually used.

6.2.3 Third Phase - Iterative work and criteria

Every iteration had a stage where criteria were created; these criteria were created by researching what the APIs could have in common. The majority of the time spent during this phase went into brainstorming what the next stage of the study could be and what criteria it would have.

During the first iteration the criteria were only established but not explained, which ultimately resulted in unclear grouping, due to the fact that no fine line could be drawn where every API would fit. Thus, a new plan was formulated where every criterion was described as clearly and precisely as possible in order for the grouping stage to go more smoothly.

6.2.4 Fourth Phase - Finalization of the Systematic Map

The last phase aimed to summarize each iteration in order to compose the last result, the final systematic map. The focus went to deciding on the hierarchy in the final systematic map, in order to present the result most efficiently for the developer to gain knowledge. The domains landed at the top of the hierarchy, making all APIs divided by domain; this was due to the fact that the domain would ultimately be the biggest factor for developers to choose from when looking for an appropriate API. The results and information from the other category iterations were presented beneath the domains in the same table in order to combine them adequately.

The biggest decision during this phase was which graph type would be the most suitable for this mapping study. Similar studies use bubble graphs, but since the conclusion was that too much information is lost in that format, this study used tables instead.

6.3 Iteration Analysis

This section discusses and analyzes the four iterations that were conducted in chapter 5. Subsection 6.3.1 takes a closer look at the First Iteration. Subsection 6.3.2 takes a closer look at the Second Iteration. Subsection 6.3.3 takes a closer look at the Third Iteration. Subsection 6.3.4 takes a closer look at the Fourth and last Iteration.

6.3.1 Closer look into the First Iteration

The first iteration examined all the APIs and divided them into two categories, Free API vs Priced API. This was done by creating a developer account on the websites to be able to see the available price options. Many APIs were advertised as free, but at a closer look it turned out that only a limited plan was free and everything beyond that plan incurred additional costs. This had to be discussed when creating the criteria Priced and Free: on one hand an API promotes itself as free, but a line had to be drawn on what is considered truly free. That is where the descriptions "Unlimited use with no extra cost" and "No time limit, have to stay free", as seen in Table 4.2.1 in section 4.2.1, came from.

The division into the two categories, Priced and Free, was intended for freelance developers or anyone wanting to add a specific feature to a project without breaking the bank. Finding such APIs can be hard, since searching Google for "Text to Speech APIs" results in APIs developed by big corporations that usually cost money; these corporations' marketing teams make them appear as the top results instead of the freelancing developer that created a similar API at no cost. Take into consideration that none of the APIs were tested in the study, so no conclusion can be drawn on whether a Priced API is better than a Free one or vice versa. In this iteration, the division between Priced and Free resulted both in easier filtering based on budget needs and in being able to find a free counterpart to a priced variant. Some APIs provided the same feature and existed both in the Free category and in the Priced category.

6.3.2 Closer look into the Second Iteration

The second iteration had five HTTP methods as categories: POST, GET, DELETE, PUT and PATCH. There were originally more than five HTTP methods in this iteration; after finalizing the third iteration, all HTTP methods that were not used were excluded.

How easy it is to find which methods an API provides varies depending on the platform. On the RapidAPI website, for example, the methods are the first noticeable characteristic of each API, while the Google Cloud Platform makes a user navigate through around five pages before reaching the API documentation, which includes all information about the API.

Categorising by HTTP methods provides a quick look at what methods an API provides. The point of all the categories is to group similar APIs together as well as to give as much information as possible in a short manner. In a later iteration the task provided by the API is categorized and also paired with the HTTP method. Including the tasks provided by the APIs in the final mapping allows for a short summary of each API without reading its documentation.
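Where the documentation is hard to navigate, the supported methods can sometimes be discovered directly over HTTP: the standard OPTIONS method asks a server to list what it allows for a resource, although not every API implements it. A minimal sketch with a placeholder URL (this discovery step was not part of the study's procedure):

    import requests

    # Placeholder endpoint; substitute a real API resource.
    response = requests.options("https://api.example.com/v1/summarize")

    # Servers that implement OPTIONS advertise the supported methods in
    # the Allow header, e.g. "GET, POST, OPTIONS".
    print(response.headers.get("Allow", "server did not advertise its methods"))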

6.3.3 Closer look into the Third Iteration

The third iteration had 15 domain categories, including those listed in section 4.2.3: Art/Media, Automotive, Economy/Business, Environment, Education/Learning, News/Information, Research, Healthcare, Technology, Retail and Other.

The reasoning behind the division into domains was to separate APIs that do not have the same subject in common; the subject is what the domain definition captures. Before categorizing, many domains had to be collected together with their definitions so that as many APIs as possible could find the right domain to fit into. At first 17 domains were collected, but two of them could not be used since no API would fit their descriptions. This is the reasoning behind the 15 domains used in this study.

Additionally, one point to consider is that many APIs categorized by a domain merely share common factors with that domain. For example, in the Healthcare domain, an API with any characteristics close to healthcare would naturally fall under this domain. Therefore, the documentation provided on each API's about page determined the domain it would fall under.

Moreover, the most considerable reasoning behind categorizing by domain first is to separate the APIs that do not serve the same purpose, in order to collect similar ones into one group. Thus, for example, a developer in need of an API for a hospital would not need to filter through the APIs about Art/Media, and could instead immediately search in the Healthcare domain catalogue.

6.3.4 Closer look into the Fourth Iteration

The fourth iteration had groupings created by the authors; these were the main tasks that the APIs would perform: Virtual Bots/Assistant, Image Analyser, Algorithms/Prediction, Text Analyser, Scrapers, NLP, Storage, Monitoring, Converter, Text Generator and Translator.

In order to create these categories the work had to be done in iterations, by studying each API and documenting what its main task is. The main task is defined as the purpose the API was developed to serve.

After the analysis had been done on all APIs, categories were selected that directly correlated with their main tasks. These categories had to be narrow enough that not too many APIs would fit in the same category, but broad enough for at least a few to group together. The purpose of this grouping was to collect APIs that provide the same task together, thus giving the developer options while presenting the information for easier findings.
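The grouping itself was done by hand from the descriptions; purely as a hypothetical illustration of the kind of rule applied, a first-pass keyword heuristic over an API description could look like this:

    # Hypothetical keyword rules; the study assigned categories manually.
    TASK_KEYWORDS = {
        "Image Analyser": ["image", "photo", "visual"],
        "Text Analyser": ["text analysis", "sentiment", "proofread"],
        "Translators": ["translate", "translation"],
        "Virtual Bots/Assistant": ["chat bot", "chatbot", "assistant"],
    }

    def guess_main_task(description: str) -> str:
        """Return the first category whose keywords appear in the description."""
        lowered = description.lower()
        for task, keywords in TASK_KEYWORDS.items():
            if any(keyword in lowered for keyword in keywords):
                return task
        return "Other"  # fall back when no rule matches

    print(guess_main_task("Identify objects, faces and a person's age from an image"))
    # -> Image Analyser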

6.4 Validity Threat Analysis

In this section the validity threats mentioned in section 3.5 are analysed and discussed.

Subsection 6.4.1 analyses the credibility threats, 6.4.2 analyses the transferability threats, 6.4.3 analyses the dependability threats and 6.4.4 analyses the confirmability threats to the study.


6.4.1 Credibility Threat Analysis

The first of the validity threats is the credibility criterion, which concerns the justification of reliable research findings. Due to the fact that the study was based on already made systems, in some cases APIs made by freelancing developers, this criterion posed a serious validity threat to the study, as there were no other independent evaluators that could test the credibility of the results. The criterion has been addressed by ensuring reliable sources for the research findings. The documentation on a few of the chosen APIs was poor, and therefore further research was made on those specific APIs to ensure validity.

6.4.2 Transferability Threat Analysis

The second of the validity threats is the transferability criterion, which regards the amount of generalization that can be made and the alternative ways the results could be generalized. The fact that the study goal was to create a systematic map of AI APIs, to be used by developers in the field with different backgrounds, leads the research results and conclusions to be of high transferability. The transferability criterion was maintained by ensuring both that all resources needed to conclude the systematic mapping are provided in Appendix A and that each categorization is described so that future additions can be made.

6.4.3 Dependability Threat Analysis

The third of the validity threats is the dependability criterion, which aspires to describe how repeatable the findings of the study are, to what degree, and how stable the findings are in a greater perspective or over a long period of time. The study will act as a backbone for the field of AI. The amount of available APIs will increase, and the APIs chosen for this study could be taken down. The point is not only to provide the map for easy access to APIs, but to create a standard for how the field could be systematically mapped, so that further work can make it stand the test of time.


6.4.4 Confirmability Threat Analysis

The fourth and last of the validity threats is the confirmability criterion, which requires an explanation of whether the study has included elements that could have been affected by bias introduced by the authors or researchers. The study has categorised all available APIs found within the time frame. Within this time frame the categories had been defined by the authors, making the categorization decisions subject to bias. On top of that, the majority of the APIs are made by freelancing developers who make them available on an API market without giving much information on their origin. The APIs developed by big corporations were treated in the grouping stages just as the small freelanced APIs were treated. This study has no association with any developer or corporation, which keeps the study unbiased in that respect.

6.5 Final Systematic Map Analysis

The systematic map in the study was customized, in the authors' opinion, to bring out the optimal information from all the iterations conducted in the research phase. The systematic mapping created in this study has the potential to give more information than what first meets the eye, since a quick look only brings direct information on each API available on the market at this date. One of the goals is to give insight into how available AI is for the general public and developers. The map of the study cannot include all APIs available on the internet, considering the time constraint. However, by limiting the scope to all APIs found, which were 66 APIs, the map can still provide answers.

One thing that stood out was the imbalance between domains, as seen in Table 5.4.1. The four most substantial domains were 'Technology', 'Research', 'Economy/Business' and 'Education'. Reservations have to be made, since the authors' definitions of these fields could be formulated in a manner that makes more APIs fall in line with them.

However, a question that remains for the authors at this point is why the domains 'Healthcare' and 'Automotive' solely contain two APIs each in the selection of 66 APIs, indicating that these domains are lacking in AI APIs. Therefore these domains should be expanded; more developed AI APIs in these domains are needed, since they are both big domains and surely have AI incorporated in ways other than APIs.

Taking a closer look at the result from the fourth iteration, as seen in Table 5.5.1, when examining the main tasks of all APIs one can determine that the majority fall under Image Analyser, Text Analyser and Algorithms/Prediction.

One can observe a common pattern: freelance developers develop small APIs that provide a smaller feature, like scanning a picture, removing or tagging part of a picture, or similar features applied to text. This indicates that more functionality using AI APIs to their full potential needs to be developed.

Another category was 'Virtual Bots/Assistant'; this category contained all APIs that provided some sort of assistance using AI capabilities to reason in a human-like manner. Due to the fact that only 4 out of the 66 APIs could fulfill this criterion, this is another category that could be expanded.

As mentioned earlier, in section 2.4, many API mapping studies research usability and documentation. Moreover, the common studies in the field examine the many theoretical proposals on the API experience and how it should be approached in order for the user to gain the optimal experience. These studies often include research on existing studies, which include the opinions of others regarding the API experience, and a conclusion is often drawn from the frequency of those opinions.

This study is more biased by the authors' own opinions and will therefore not suggest such a proposal; instead the focus is on the APIs themselves and their place in the market.


Chapter 7

Conclusions

The field of AI has received extensive research in the past few years, which has led to information in the field being scattered, unorganized or missing. The purpose of this study is to tackle this unorganized field by creating a systematic map of available AI APIs. The goal is to benefit developers and researchers by providing the information in an accessible manner.

This chapter concludes the study and presents the conclusions in section 7.1 and future work in section 7.2.

7.1 Conclusions

From the literature study conducted in the first phase of the thesis, a conclusion can be drawn on the lack of studies on the actual use of AI technology. Currently, no other studies on AI APIs directed at structuring the field were found, although studies creating systematic mappings of other subfields of AI were found. This could be taken as a sign that the area of AI is lacking studies that try to piece together the findings of AI technologies. The majority of the studies revolve around the theoretical level of AI and miss mentioning its benefits with a practical example conducted within the study. This thesis brings a fresh perspective to the subject of AI: since the authors are not experienced enough to develop the technology itself, the focus is instead on its use. The idea behind this is to bring as much light as possible to the field while giving room for more studies to take advantage of its findings.

The goal of the study was to provide structure to the field of AI. By structuring a systematic map of AI APIs, a partial structure is believed to be achieved. The compiled map conducted in this study includes 66 APIs that reach over several different domains. This is a hefty number considering the timeframe of this study. The final systematic mapping introduced in section 5.6 is believed to be the best option to present the mapping data in an efficient form. We can conclude that the work done to create the systematic mapping of AI APIs in this study will be beneficial regardless of its sample size of APIs.

Since the thesis purpose can be seen as passing on experience, it is necessary to define who the target group is. This thesis will not be beneficial to a researcher seeking new strides in the subject of AI. It will rather benefit future developers, since it will contribute to learning more about APIs and AI in web development.

7.2 Future work

Due to the high production pace around developing new ideas in the field of AI, it seemed like the developers had been forgotten. Since the general developers are the ones taking the developed technology and integrating it further into society, this group is now also considered.

Future work on this study is mainly an endorsement to continue it. As an API developer, continue this study by examining its results and developing AI APIs in the domain areas that have been neglected, such as 'Healthcare'. In the final systematic map presented in section 5.6, one can observe that some domains are lacking in number of APIs compared to others; if there is a need for further APIs in those domains, this is an endorsement for continued future work.

As a researcher, or a thesis writer in the fields of computer science, continue this study by extending it and including more AI APIs in this systematic map. As a general developer, freelancing or company-employed, feel free to use this systematic map to your benefit in choosing an AI API.


Bibliography

[1] Builtin. "What is Artificial Intelligence? How Does AI Work? | Built In". 2020. URL: https://builtin.com/artificial-intelligence.

[2] "DAIS - Distributed Artificial Intelligence Systems". 2021. URL: https://dais-project.eu.

[3] Davenport, T. and Ronanki, R. "Artificial Intelligence for the Real World: Don't Start with Moon Shots". 2018. URL: https://www.kungfu.ai/wp-content/uploads/2019/01/R1801H-PDF-ENG.pdf.

[4] "Domain (Industry) Knowledgebase". 2020. URL: https://www.technofunc.com/index.php/domain-knowledge-2.

[5] Fielding, R. T. "Fielding Dissertation: Chapter 5: Representational State Transfer (REST)". 2000. URL: https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm.

[6] Fisher, S. "Introduction to Operating Systems". 1989. URL: http://cis2.oc.ctc.edu/oc_apps/Westlund/xbook/xbook.php?unit=01&proc=page&numb=2.

[7] "Goal 10". Department of Economic and Social Affairs. 2020. URL: https://sdgs.un.org/goals/goal10.

[8] Google. "AI Demo". 2021. URL: https://cloud.google.com/text-to-speech#section-2.

[9] "Google Cloud". URL: https://cloud.google.com/products/ai.

[10] "IBM Cloud". URL: https://cloud.ibm.com/catalog?category=ai#services.

[11] IBM. "Microservices". 2021. URL: https://www.ibm.com/cloud/learn/microservices.

[12] Ofoeda, J., Boateng, R., and Effah, J. "Application Programming Interface (API) Research: A Review of the Past to Inform the Future". In: International Journal of Enterprise Information Systems 15 (2019). DOI: 10.4018/ijeis.2019070105. URL: https://www.igi-global.com/article/application-programming-interface-api-research/232166.

[13] Krittanawong, C. "The rise of artificial intelligence and the uncertain future for physicians". In: European Journal of Internal Medicine 48 (2018). DOI: 10.1016/j.ejim.2017.06.017. URL: https://pubmed.ncbi.nlm.nih.gov/28651747/.

[14] Marsland, Stephen. Machine Learning: An Algorithmic Perspective. CRC Press, 2015.

[15] Massé, Mark. REST API Design Rulebook: Designing Consistent RESTful Web Service Interfaces. 2012. ISBN: 9781449310509. URL: https://books.google.se/books?id=eABpzyTcJNIC&lpg=PR3&ots=vAPD_4mdMz&dq=REST%5C%20api&lr&pg=PA5#v=onepage&q=REST%5C%20api&f=false.

[16] microservices.io. "What are microservices?" 2021. URL: https://microservices.io/.

[17] Mohri, Mehryar, Rostamizadeh, Afshin, and Talwalkar, Ameet. Foundations of Machine Learning. MIT Press, 2018.

[18] Moore, A. "When AI Becomes an Everyday Technology". In: Harvard Business Review (2019). URL: https://www.investkl.gov.my/assets/multimediaMS/file/H05003-PDF-ENG_FINAL.PDF.

[19] "MoSCoW prioritization technique". URL: https://www.volkerdon.com/pages/moscow-prioritisation.

[20] Notaro, Paolo, Cardoso, Jorge S., and Gerndt, Michael. "A Systematic Mapping Study in AIOps". In: CoRR abs/2012.09108 (2020). arXiv: 2012.09108. URL: https://arxiv.org/abs/2012.09108.

[21] Nybom, Kristian, Ashraf, Adnan, and Porres, Ivan. "A Systematic Mapping Study on API Documentation Generation Approaches". In: 2018 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). 2018, pp. 462-469. DOI: 10.1109/SEAA.2018.00081. URL: https://ieeexplore.ieee.org/document/8498248.

[22] Patel, N. P., Parikh, D. R., Patel, D. A., and Patel, R. R. "AI and Web-Based Human-Like Interactive University Chatbot (UNIBOT)". In: 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA) (2019). DOI: 10.1109/ICECA.2019.8822176. URL: https://ieeexplore.ieee.org/abstract/document/8822176.

[23] Petersen, K., Feldt, R., Mujtaba, S., and Mattsson, M. "Systematic Mapping Studies in Software Engineering". 2008. DOI: 10.14236/ewic/EASE2008.8. URL: https://www.scienceopen.com/hosted-document?doi=10.14236/ewic/EASE2008.8.

[24] Preibisch, Sascha. API Development. Springer, 2018. URL: https://link.springer.com/content/pdf/10.1007/978-1-4842-4140-0.pdf.

[25] RapidAPI. "What is RapidAPI?" 2020. URL: https://docs.rapidapi.com/docs/what-is-rapidapi.

[26] "RapidAPI on Trustpilot". URL: https://www.trustpilot.com/review/rapidapi.com.

[27] Rauf, Irum, Troubitsyna, Elena, and Porres, Ivan. "A systematic mapping study of API usability evaluation methods". In: Computer Science Review 33 (Aug. 2019), pp. 49-68. DOI: 10.1016/j.cosrev.2019.05.001. URL: https://www.researchgate.net/publication/334840538_A_systematic_mapping_study_of_API_usability_evaluation_methods.

[28] Razavian, N., Knoll, F., and Geras, K. J. "Artificial Intelligence Explained for Nonexperts". In: Seminars in Musculoskeletal Radiology (2020). URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7393604/.

[29] "Research Instrument Examples". Teachers College, Columbia University. URL: https://www.tc.columbia.edu/media/administration/institutional-review-board-/irb-submission---documents/Published_Study-Material-Examples.pdf.

[30] Samiei, Saba. "On The Danger of Artificial Intelligence". 2019. URL: https://openrepository.aut.ac.nz/bitstream/handle/10292/12967/SamieiS.pdf?sequence=4&isAllowed=y.

[31] "So you want to do research? 3. An introduction to qualitative methods". In: British Journal of Community Nursing 8 (2003). URL: https://www.researchgate.net/publication/9037262_So_you_want_to_do_research_3_An_introduction_to_qualitative_methods.

[32] Sohan, S. M., Anslow, C., and Maurer, F. "A Case Study of Web API Evolution". 2015. URL: https://ieeexplore.ieee.org/abstract/document/7196531.

[33] Tutorialspoint. "HTTP - Overview". 2021. URL: https://www.tutorialspoint.com/http/http_overview.htm.

[34] W3C. "9 Method Definitions". 2016. URL: https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html.

Appendix - Contents

A  All APIs

B  Final Systematic Map

Appendix A

All APIs

Due to Table A.0.1 below being blurry in the original document, the table can also be inspected by following this URL-link.

Following the same link, all iterations, all results and the final systematic map can also be viewed and downloaded, preferably in paper size A3.


Table A.0.1: All selected APIs with descriptions

Name | Existing category | About
1. Text Summarizer | None | Summarize your text with NLP
2. Text Summarizer | Text Analysis | Connexun's "Text Summarizer" API generates an extractive summary from any given text. Test our API for free and do not hesitate to reach out to us for further information.
adverifai | None | Artificial Intelligence for fake news detection and automated fact checking
AIception | Visual Recognition | Identify objects, faces and a person's age from an image
AirVisual | Travel, Transportation | The world's smartest air quality monitor helping you track, foresee, and take action against invisible threats in the air.
AiSara Hyperparameter Tuning | Machine Learning | Hyperparameter Tuning is the process of identifying the best configuration for a machine learning or a deep learning model.
amply | None | Returns the web site meta data by passing an URL using Artificial Intelligence
Annotator for Clinical Data | AI / Machine Learning | IBM Annotator for Clinical Data is an AI-powered service on IBM Cloud that delivers meaningful insights from unstructured data, purpose-built for healthcare and life sciences domains.
AppraisalQC | Business | Automated Appraisal Validation process by PropMix using Artificial Intelligence, machine learning and image recognition
Bicedeep AI | Data Science | Artificial Intelligence (AI) As A Service. Detects data types. Suggests deep learning models that work on your data.
BOTlibre | Communication | BOT libre's goal is to foster an open, safe community of artificial intelligent chat bots and their developers. BOT libre enables you to create your own artificial intelligent chat bot, train them, and share them with others. You are free to use this website, and create your own bots for personal, commercial, or recreation usages.
BrainShop.AI | Tools | BrainShop allows you to create AI brains, which enables your application or devices to converse with people in human language.
Category prediction for News Articles & blogs | Machine Learning | Get categories predicted by the AI text classification model trained on over 100,000 articles.
ClarifaiPublicModels | Visual Recognition | Identify and tag objects in images with Clarifai's models.
CloudPronouncer | Translation | Convert Text To Speech with 285 voices in 49 languages
Co-Guard | Scrape Data | Co-Guard Scraper uses Artificial Intelligence and Big Data to let you see in real time the content of the web-page you are interested in. The API returns the whole html of the web-scraped page.
ContentAI.net - Text Generation | Machine Learning | Text generation using artificial intelligence language models. Texts available in 40 different categories. API with very low response time for rapid content generation.
E2open, LLC. | Business Software | E2open is a cloud-based, real-time operating platform that orchestrates the global supply chains of the world's best-known brands.
Email Finder | Data | GetEmail.io finds the email address of anyone on earth.
Emergency Vehicles Detection | Visual Recognition | Emergency Vehicles Detection, EmergDet, detects and localizes lit beacons of emergency vehicles (police, ambulance, fire fighters) in an input photo using artificial intelligence and a powerful cloud infrastructure.
Flood Detection | Visual Recognition | Flood Detection API, Flooderizer, detects flooded areas in an input photo using artificial intelligence and a powerful cloud infrastructure.
Fog Detection | Visual Recognition | Fog Detection API, Fogerizer, detects fog in an input photo (not a radar photo) with a certain probability score using artificial intelligence and powerful cloud infrastructure.
Foxy AI | Visual Recognition | Unlock The Hidden Value Of Your Real Estate Photos
Geneea Interpretor NLP | Text Analysis | NLP, Sentiment, Named and General entity extraction, Language identification 30+
Hail Rain Detection | Visual Recognition | Hail Rain Detection API, HailQuid, detects hail rain or hail storm in an input photo (not a radar photo) with a certain probability score using artificial intelligence and powerful cloud infrastructure.
Healthcare Natural Language AI | AI and Machine Learning | Gain real-time analysis of insights stored in unstructured medical text.
IBM Match 360 with Watson | AI / Machine Learning | Use IBM Match 360 with Watson to quickly build data pipelines for analytics and other data science use cases using master data.
Icy Road Detection | Visual Recognition | Icy Road Detection API, IcyRoadDet, detects icy roads from an input photo using artificial intelligence and powerful cloud infrastructure.
Image background removal v2 | Video, Images | Remove backgrounds using deep convolutional neural networks (ConvNets). Our network learns to recognize the difference between the foreground (your content, text, etc.) and the background (the rest of the picture), allowing our software to automatically separate the two.
Job and Resume Matching For HR Management Systems | Text Analysis | This API is a package of Natural Language Processing (NLP) and Artificial Intelligence (AI) tools that allow to extract relevant information and insights from resumes, job openings and employee feedback.
Knowledge Studio | AI / Machine Learning | Teach Watson the language of your domain with custom machine learning models that identify entities and relationships unique to your industry in unstructured text. Build your models in a collaborative environment designed for both developers and domain experts.
Language Translator | AI / Machine Learning | Neural Machine Translation comes standard for each language pair. Corpus customization allows you to create your own translation models which account for regional or industry-specific terms. Instantly translate your content into multiple languages. From translating documents, apps, and websites to creating multilingual chatbots, what will you build?
logoraisr | Video, Images | Get graphic design task done. Instantly.
Machine Learning | AI / Machine Learning | IBM Watson Machine Learning is a full-service IBM Cloud offering that makes it easy for developers and data scientists to work together to integrate predictive capabilities with their applications.
Nameror | Machine Learning | Classify the gender and name type of names using the Nameror gender API, which is backed by crowdsourced data and machine learning.
Natural Language | AI and Machine Learning | The powerful pre-trained models of the Natural Language API empower developers to easily apply natural language understanding (NLU) to their applications with features including sentiment analysis, entity analysis, entity sentiment analysis, content classification, and syntax analysis.
Natural Language Classifier | AI / Machine Learning | The Natural Language Classifier service uses advanced natural language processing and machine learning techniques to assign custom categories to inputted text.
Natural Language Understanding | AI / Machine Learning | Use advanced NLP to analyze text and extract meta-data from content such as concepts, entities, keywords, categories, sentiment, emotion, relations, and semantic roles.
PDF.co | Business Software | Converts, splits and merges PDF, adds virtual signatures, makes PDF searchable, generates and reads barcodes, fills in forms and more.
Perfect Tense | None | The Perfect Tense API is an artificial intelligence powered spelling and grammar API. All you have to do is provide a piece of text and the Perfect Tense API will automatically return a proofread version of the text, along with all spelling and grammar mistakes.
Personality Insights | AI / Machine Learning | Watson Personality Insights analyzes transactional and social media data to identify psychological traits which determine purchase decisions, intent and behavioral traits; utilized to improve conversion rates.
Photo Reader | Business | Ditto Labs is a leading provider of vision-as-a-service for enterprises. Our cloud-based Photo Reader API uses artificial intelligence to tag brands, faces, smiles, objects and context in visual media.
ProxyCrawl Scraper | Data | The Scraper API is a fast and easy solution to extract parsed data from a long list of web pages. It is a ready-made scraper suitable for any of your crawling or scraping projects.
Recommendations AI | AI and Machine Learning | Deliver highly personalized product recommendations at scale.
Retail Pricing Optimizer | Business Software | This API aims at providing retailers access to a combination of artificial intelligence and data to calculate full margin, retail-adjusted, psychological, and other pricing strategies.
Road Accidents Detection | Visual Recognition | Road Accidents Detection API, Accidentor, detects road accidents ahead from an input photo, using artificial intelligence and powerful cloud infrastructure.
semanti.ca Web Article Data Extraction | None | semanti.ca extracts data from any Web article on any website (news, magazines, blogs) using Artificial Intelligence, Computer Vision, and Machine Learning. No programming needed.
ShopperAssist | Tools | Most Personalization (Recommendation) APIs are blind and mechanical. Here we present Slurplick, a Personalization API with human-like vision, understanding and intelligence.
Speech to Text | AI / Machine Learning | The Speech to Text service converts the human voice into the written word.
Speech-to-Text | AI and Machine Learning | Accurately convert speech into text using an API powered by Google's AI technologies.
SpotGarbage | Tools | Detects garbage in images using Deep Learning (Artificial Intelligence).
Text to Speech | AI / Machine Learning | The Text to Speech service converts written text to natural-sounding speech.
Text-to-Speech | AI and Machine Learning | Convert text into natural-sounding speech using an API powered by Google's AI technologies.
Thermographic PV Inspection | Machine Learning | Thermographic Inspection of Photovoltaic (PV) Installations
Tone Analyzer | AI / Machine Learning | People show various tones, such as joy, sadness, anger, and agreeableness, in daily communications. Tone Analyzer leverages cognitive linguistic analysis to identify a variety of tones at both the sentence and document level.
Video AI | AI and Machine Learning | Video Intelligence API has pre-trained machine learning models that automatically recognize a vast number of objects, places, and actions in stored and streaming video.
Vision AI | AI and Machine Learning | Assign labels to images and quickly classify them into millions of predefined categories. Detect objects and faces, read printed and handwritten text, and build valuable metadata into your image catalog.
WatchSignals | Data | Watchsignals is a luxury watch comparison platform powered by artificial intelligence (AI).
Watermark Removal AI | Machine Learning | Remove watermark automatically using AI. You don't even have to tell us which part of the photo is the watermark.
Watson Assistant | AI / Machine Learning | Watson Assistant lets you build conversational interfaces into any application, device, or channel. Add a natural language interface to your application to automate interactions with your end users. Common applications include virtual agents and chat bots that can integrate and communicate on any channel or device. Train the Watson Conversation service through an easy-to-use web application, designed so you can quickly build natural conversation flows between your apps and users, and deploy scalable, cost effective solutions.
Watson Discovery | AI / Machine Learning | Add a cognitive search and content analytics engine to applications to identify patterns, trends and actionable insights that drive better decision-making.
Watson Knowledge Catalog | AI / Machine Learning | Simplify data science and data compliance with IBM Watson Knowledge Catalog. Make your data easy to find and share while controlling access to ensure appropriate use.
Watson OpenScale | AI / Machine Learning | IBM Watson OpenScale tracks and measures outcomes from AI throughout its lifecycle, and adapts and governs AI in changing business situations.
Watson Studio | AI / Machine Learning | Watson Studio democratizes machine learning and deep learning to accelerate infusion of AI in your business to drive innovation.
Weather with AI | Weather | Global weather forecast powered by Artificial Intelligence.
WolframAlpha | Search | Get search results from this computational knowledge engine.

Appendix B

Final Systematic Map

Due to Table B.0.1 below being blurry in the original document, the table can also be inspected by following this URL-link.

Following the same link, all iterations, all results and the final systematic map can also be viewed and downloaded, preferably in paper size A3.


Table B.0.1: Final Systematic Map

TRITA-EECS-EX-2021:290

www.kth.se