Security Vetting Process of Smart-home Assistant Applications: A First Look and Case Studies

Hang Hu (Virginia Tech, hanghu@vt.edu), Limin Yang (University of Illinois Urbana-Champaign, liminy2@illinois.edu), Shihan Lin (Fudan University, shlin15@fudan.edu.cn), Gang Wang (University of Illinois Urbana-Champaign, gangw@illinois.edu)

arXiv:2001.04520v1 [cs.CR] 13 Jan 2020

ABSTRACT

The popularity of smart-home assistant systems such as Amazon Alexa and Google Home leads to a booming third-party application market (over 70,000 applications across the two stores). While existing works have revealed security issues in these systems, it is not well understood how to help application developers to enforce security requirements. In this paper, we perform a preliminary case study to examine the security vetting mechanisms adopted by the Amazon Alexa and Google Home app stores. With a focus on the authentication mechanisms between the Alexa/Google cloud and third-party application servers (i.e., endpoints), we show that the current security vetting is insufficient: developer mistakes cannot be effectively detected, and developers are not notified of them. A weak authentication would allow attackers to spoof the cloud to insert/retrieve data into/from the application endpoints. We validate the attack through ethical proof-of-concept experiments. To confirm that vulnerable applications have indeed passed the security vetting and entered the markets, we develop a heuristic-based searching method. We find 219 real-world Alexa endpoints that carry the vulnerability, many of which are related to critical applications that control smart-home devices and electric cars. We have notified Amazon and Google about our findings and offered our suggestions to mitigate the issue.

1 INTRODUCTION

Smart-home assistant systems such as Amazon Alexa and Google Home are entering tens of millions of households today [37]. As a result, the corresponding app marketplace is also expanding quickly. Just like installing apps on smartphones, users can enable third-party applications for smart-assistant devices. These applications are called "skills" or "actions". So far there are collectively more than 70,000 skills available [1, 2], many of which are security/safety-critical. For example, there are skills that allow users to manage bank accounts, place shopping orders, and control smart-home devices through a voice interface.

Considering the sensitive nature of smart-home assistants, researchers have looked into the security aspects of these systems and their third-party applications. For example, recent studies show that it is possible to craft a voice clip with hidden commands embedded that are recognizable by the Alexa device but not by human observers [6, 12, 46, 47]. In addition, researchers have demonstrated the feasibility of a "skill squatting" attack that invokes a malicious application whose name sounds like the legitimate one [38, 50]. A recent survey study [8] investigated the network interfaces of many IoT devices (including smart-assistant devices) to reveal their weak encryption and unpatched OS/software. While most existing studies focus on system- and device-level flaws, limited effort has been invested in vetting the security of third-party applications and, more importantly, in helping developers improve the security of their applications.

In this paper, we perform a preliminary case study to examine the mechanisms that Amazon and Google have implemented to vet the security of third-party applications for their smart-home assistants. More specifically, before a third-party application (or "skill") can be published to the app stores, it must go through a series of automated tests and manual vetting. We seek to understand (1) what aspects the security vetting process focuses on, and (2) how effective the vetting process is at helping developers improve security.

As a preliminary study, we focus on the authentication mechanism used by the third-party application's server (called the "endpoint") to authenticate the Alexa/Google cloud (namely, cloud authentication). We choose cloud authentication because cloud-endpoint interaction is a key component that makes smart-home assistant skills¹ structurally different from conventional smartphone apps. Smart-home assistant skills need to route their traffic to a central cloud that translates a voice command into an API call in order to interact with the application server.

¹ For convenience, we refer to smart-home assistant applications as "skills" for both Amazon Alexa and Google Home.

Method and Key Observations. Amazon Alexa runs both automated vetting and manual vetting before a skill can be published, while Google Home only runs manual vetting. Our methodology is to build our own (vulnerable) skills and walk them through the required testing to understand the vetting process. Our results show concerning issues in terms of the enforcement of cloud authentication. First, we find that the Google Home vetting process does not require the endpoints to authenticate the cloud and its queries, which leaves the endpoints vulnerable to spoofed queries. Second, Amazon Alexa requires skills to perform cloud authentication, but does a poor job enforcing it on third-party developers. Alexa performs automated vetting that is supposed to detect developer mistakes in the skill implementation and inform the developers. However, the security tests are erroneous and miss important checks (e.g., application identifiers). As a result, a vulnerable skill, in theory, can pass the security vetting process and enter the app store.

Proof-of-Concept. To illustrate the problem, we run controlled experiments to show how an outsider can spoof the cloud to query the target endpoint. More specifically, an attacker can build their own skill application and use this skill to obtain a valid signature from the cloud for the attack traffic. The attacker can then replay the signed traffic to attack the target endpoints. The attack is possible because the cloud uses the same private key to sign all the traffic for all the skills, so the signature obtained through the attacker's skill works on the victim endpoint too. We validate the feasibility of the attack and show that vulnerable skills can bypass both the automated tests and the manual vetting process to enter the app markets.

Vulnerable Applications in the Wild. To confirm that there are indeed vulnerable skills in practice, we perform a scanning experiment. Since all Google Home skills are by default vulnerable, this experiment focuses on searching for vulnerable Alexa skills. We leverage ZMap to locate live HTTPS hosts and replay a spoofed but non-intrusive query to see if a given HTTPS host returns a valid response. In this way, we located 219 vulnerable real-world Alexa endpoints. A closer analysis shows that some of these vulnerable endpoints are related to important skills such as those that control electric cars, smart locks, thermostats, security cameras, and watering systems.
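To make the probing step more concrete, the sketch below shows one way such a non-intrusive check could be implemented: it replays a previously captured, cloud-signed request for the tester's own benign skill to a candidate HTTPS host and treats a well-formed skill response as a sign that the host accepted the spoofed query. The host list, the request path, and the response heuristic are our own illustrative assumptions, not the authors' actual scanning tooling.

```python
# Hypothetical sketch of the non-intrusive probe (names and heuristics are
# assumptions, not the paper's tooling). Assumes `signed_headers` and
# `signed_body` were captured from a benign, cloud-signed launch request to
# the tester's own skill, and `host` comes from a ZMap scan of port 443.
import json
import requests

def looks_like_skill_response(text: str) -> bool:
    """Heuristic: a responsive endpoint answers with a structured skill
    response, e.g., a JSON object containing a 'response' field."""
    try:
        body = json.loads(text)
    except ValueError:
        return False
    return isinstance(body, dict) and "response" in body

def probe(host: str, signed_headers: dict, signed_body: bytes) -> bool:
    """Replay the benign signed request and report whether the host
    processed it as if it came from the Alexa cloud."""
    try:
        resp = requests.post(f"https://{host}/", headers=signed_headers,
                             data=signed_body, timeout=5)
    except requests.RequestException:
        return False
    return resp.status_code == 200 and looks_like_skill_response(resp.text)
```

Under these assumptions, an endpoint that checks the application identifier inside the request body would reject the probe, while one that skips the check may answer with a normal skill response, which is the signal the heuristic looks for.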
We make three main contributions:

• First, we present the first empirical analysis of the security vetting process used by Amazon and Google to vet their smart-home assistant skills. We find that the current vetting process is insufficient to identify the authentication issues in the endpoints and to notify developers of them.

• Second, we validate the security problem by running a proof-of-concept cloud spoofing attack, in an ethical manner.

• Third, we discover real-world applications that carry the vulnerability. We notified Amazon and Google about our findings and offered our suggestions to mitigate the issue.

Our work makes a concrete step towards enhancing the security vetting process of smart-assistant applications. Given the increasing number and diversity of IoT applications and the web of things, we argue that developers need more help to build their applications in a secure way. Effective security vetting and feedback will be critical to improving the security of the IoT ecosystem.

2 BACKGROUND & MOTIVATION

We start by introducing the background for third-party applications in Amazon Alexa and Google Home. We then discuss the security vetting process on both platforms.

Alexa and Google Home Skills. Both platforms support third-party applications, which are called "Skills" on Alexa and "Actions" on Google Home.

Figure 1: The execution of a simple voice command.

When a skill is invoked, the user's voice command is routed to the Alexa/Google cloud, which translates it into a structured request for the skill's endpoint; the endpoint's text-based response is converted to audio by the cloud and played at the edge device. Note that the edge device never directly interacts with the endpoint, and every request is routed through the cloud. For certain skills, users need to explicitly "enable" them in the skill store. However, many skills can be directly triggered/used by calling the skill's name.

Skill developers need to implement the endpoint to respond to user requests. For simple skills that do not require a database, both Alexa and Google provide a "serverless" option for developers to hard-code the responses in the cloud. For sophisticated skills, a dedicated endpoint is needed.
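For context, the structured request that the cloud forwards to an Alexa endpoint is a JSON document; the sketch below shows roughly what such a request looks like. The field values are made up for illustration; the relevant point for this paper is that the body carries the skill's application identifier and a cloud-generated timestamp, which a careful endpoint checks in addition to the request signature.

```python
# Illustrative shape of a request an Alexa endpoint receives from the cloud
# (all identifiers are placeholders; only the overall structure matters here).
example_request = {
    "version": "1.0",
    "session": {
        "new": True,
        "application": {"applicationId": "amzn1.ask.skill.EXAMPLE-ID"},
        "user": {"userId": "amzn1.ask.account.EXAMPLE-USER"},
    },
    "request": {
        "type": "LaunchRequest",
        "requestId": "amzn1.echo-api.request.EXAMPLE-REQUEST",
        "timestamp": "2020-01-13T00:00:00Z",
    },
}

# A defensive endpoint would compare applicationId against its own skill ID
# and reject requests with stale timestamps, in addition to verifying the
# cloud's signature over the request (see "Authentication between Entities").
```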
Authentication between Entities. The system contains three main entities: the edge device, the cloud, and the endpoint. Both Alexa and Google require HTTPS for all network communications. This also helps the "clients" to authenticate the "servers": the edge device can verify the identity of the cloud, and the cloud can also verify the identity of the endpoint.

For the other direction, two main authentications are used for the "servers" to authenticate the "clients". First, when the edge device contacts the cloud, the cloud needs to authenticate the edge device. This can be done because an "access token" has been exchanged when the user first sets up the edge device at home. Second, when the cloud forwards a request to the endpoint, the endpoint also needs to authenticate the cloud. This step helps the endpoint ensure that the queries are indeed coming from the Alexa/Google cloud instead of from outsiders. We call this "cloud authentication", and it is done in different ways for Google and Alexa.

• Amazon Alexa uses a public-key based method. The cloud signs its request payload with a private key, and the skill endpoints can verify the signature using the cloud's public key when receiving the request.
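As a rough illustration of what this verification involves on the endpoint side, the sketch below checks an RSA signature over the raw request body against a certificate supplied by the cloud. It is a simplified sketch, not Amazon's reference implementation: the header names and the SHA-1/RSA scheme are assumptions, and several required steps are omitted (validating the certificate chain, its subject names, and the URL it is fetched from, plus the timestamp and application-identifier checks discussed above).

```python
# Minimal sketch of endpoint-side cloud authentication for Alexa-style
# requests. Header names and the hash/signature scheme are assumptions;
# production code should follow the platform's official requirements.
import base64
import requests
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_cloud_signature(headers: dict, raw_body: bytes) -> bool:
    cert_url = headers.get("SignatureCertChainUrl", "")
    signature_b64 = headers.get("Signature", "")
    if not cert_url or not signature_b64:
        return False

    # Fetch the signing certificate advertised by the cloud. A real
    # implementation must also confirm that this URL points to the
    # platform's certificate host and that the chain is trusted.
    pem = requests.get(cert_url, timeout=5).content
    cert = x509.load_pem_x509_certificate(pem)

    try:
        cert.public_key().verify(
            base64.b64decode(signature_b64),  # signature sent by the cloud
            raw_body,                         # exact bytes of the request body
            padding.PKCS1v15(),
            hashes.SHA1(),
        )
    except InvalidSignature:
        return False
    return True
```

Note that this check alone does not bind a request to a particular skill: as the proof-of-concept in this paper shows, a signature obtained through any skill verifies successfully at any endpoint, which is why the additional application-identifier and timestamp checks matter.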
