Self-Sorting Recycling Machine

Final Technical Report The George Washington University School of Engineering and Applied Science Department of Mechanical and Aerospace Engineering

Authors: Michaela Altland, Alexandra Morganti, Matthew Rosenstein, Noah Thomas

May 5th, 2019

1. Table of Contents

2. Abstract
3. Team Member Roles
4. Introduction
5. Design Description
6. Evaluation & Testing
7. Summary & Recommendations
8. References
9. Appendices

2. Abstract

In America, the concept of recycling as a means of waste management has gained traction in recent years. Typically, this process involves consumers sorting their garbage into trash cans and their recyclables into a separate recycling can. When all recyclables are meant to be placed in one bin, this is referred to as "single-stream recycling." These recycled items are taken to a facility where they are sorted into separate streams to be repurposed, with any non-recyclable items being sorted back into trash and sent to landfills. Within this system, when recycling bins contain more than 50% trash, the entire receptacle is treated as waste rather than being sorted. The challenge here is inherent to the method: the brunt of the responsibility of recycling falls on the consumer. The average person trying to dispose of their waste has to know, based on their current location, whether items are recyclable, or whether they are too contaminated to be recycled by the nearest recycling facility. Implementing green technology into this process would reduce the rate of human error and increase recycling rates. Artificial intelligence integrated anywhere into the waste stream would likely increase recycling rates, and a local device attached directly to a trash can could increase the accuracy of the recycled items in a given can.

In 2014, The George Washington University launched its Zero Waste Initiative to increase sustainability and the recycling rate around campus. GW's Zero Waste Plan set the goal of minimizing the university's trash output and, conversely, maximizing the recycling rate. New, easily readable signs were installed on the sides of the cans with the appropriate materials clearly advertised.
The university also began standardizing all trash and recycling cans located across campus, distributing cans in places around the Foggy Bottom campus where they had been deemed lacking, and ensuring that every trash can has a recycling bin located close by [7]. While this initiative has slightly improved recycling rates and accuracy on campus, a self-sorting recycling bin that could sort items as either trash or recycling would remove any potential for human error in the process. On The George Washington University's campus, an ideal self-sorting recycling machine would utilize the trash cans and recycling bins already located around campus, and would fit on top of the containers to sort items into one bin or the other. Users would place their items into the lid, and the Smart technology would do the rest of the work. This project aimed to create a self-contained lid for the trash receptacles located around GW's campus to reduce the contamination rate within the campus recycling containers. The self-sorting recycling machine uses Google's Cloud Vision API to identify a picture of a given item and determine recyclability. The user places the item onto a platform and presses a button, which signals a Raspberry Pi to capture a picture and cross-reference its annotations with a list of recyclable and non-recyclable items. After determining recyclability, the platform rotates to drop the item into the designated can. While GW's campus recycling currently has a contamination rate of about 30%, this machine consistently achieves lower than a 15% contamination rate, indicating success [Appendix 9].

3. Team Member Roles

Michaela Altland
Subsystem lead: PiCamera, PIR Sensor/Button
Assisted in subsystems: Raspberry Pi, Servo motor, Construction

Alexandra Morganti
Subsystem lead: Raspberry Pi, Servo motor
Assisted in subsystems: PiCamera, Servo motor, Construction

Matthew Rosenstein
Subsystem lead: Google sorting code
Assisted in subsystems: Raspberry Pi, Servo motor, Construction

Noah Thomas
Assisted in subsystems: Servo motor, Construction

4. Introduction

The purpose of this project was to design an inexpensive and energy-efficient trash can cover that could fit over two standard-sized "slim jim" trash bins located across The George Washington University's campus. This cover would be able to distinguish recyclable materials from non-recyclable materials and sort them into their respective trash or recycling bin. The contamination rate of recycling bins is an important issue that this project tackles head-on, as GW has high contamination rates that need to be reduced [Appendix 1 and 2]. Another important aspect was efficiency, since a sorting machine that takes too much time would not be used as often. The hope for this project is that a successful product can be created, refined, and reworked so that many more units can be produced and placed all over GW's campus to help reduce the contamination rate of recyclables throughout the GW community.

Functional Requirements
1. Objects are placed in the device one at a time
2. Identify placed items as recyclable or non-recyclable
3. Sort these objects into two separate bins
4. Less than 30% contamination rate in the recycle bin
5. Reduce recycling contamination
6. Aimed at GW's campus and indoor trash cans

Review of technical literature
The recycling of plastic specifically (or the lack thereof) is becoming a massive issue. Globally, more than 80% of plastic ends up in landfills, and in the US that number is higher than 90% [8].

A self-sorting recycling machine relies heavily on its code to interpret images and decide their recyclability. This project uses an online API to process the images taken by the PiCamera and recognize the product in order to identify whether or not it is recyclable. This is accomplished through the use of a Raspberry Pi, on which a Python script is executed, to access Google's Cloud Vision API. Machine learning is a relatively new development, especially for mainstream use, and Google is at the forefront of making these developments available to the public.

Throughout the US, different methods have been used to assist in recycling, but currently no such attachable device for recycling is in mass production [5]. This is because standards for recyclables change from state to state, and sometimes between a specific company or university. For instance, in DC paper cups are recyclable, and on George Washington University's campus glass is recyclable. This project is specifically designed for GW's campus but could be adapted for other locations if the inclusion and exclusion parameters were edited to match the area's specific recycling requirements.

API, or Application Programming Interface, technology allows a user to access certain features of a code without having to understand all of the underlying complexity [10]. Artificial Intelligence, or AI, is a computer that has the ability to practice machine learning [11]. For example, Google’s Cloud Vision API (which is being used in this sorting mechanism) allows

users to take a picture, access Google's Cloud database, access its pattern-recognition AI, and get back an output of multiple possible results with confidence values [1]. Machine learning is an extremely complicated process, which is the advantage of having access to an API: it allows programmers to write code that taps into machine learning locally without having to host an artificially intelligent computer themselves. Computers are trained to make predictions based on data, and are then provided with more and more knowledge to improve their prediction ability [12]. In the context of Cloud Vision, every time someone completes the Google verification to ensure they are not a robot (known as ReCAPTCHA) by clicking on the images that contain a road sign, the AI is better trained to identify objects in the foreground [13]. In 2007, both Google and IBM announced plans to build data centers and allow students access for "cloud computing" [14]. Both companies knew they had far more computing power than they could utilize, so they began to make it available to more people. Without these companies making their resources available, this project would be much more difficult to complete.

This project relies on Google's API combined with green technology. Green technology is a relatively young, underfunded field with a great deal of interest. It comes in many forms, all with the goal of making technology that is mindful of environmentally friendly methods. With an inexpensive device such as this, fewer funds would be needed to help the community in the mission to become greener. The only way to maintain our reusable resources is to consciously reuse them, and this product would help in that mission.
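The request-and-label flow described above can be sketched in Python. This is a minimal sketch assuming the `google-cloud-vision` client library is installed and a credentials JSON is configured; the `filter_labels` helper and its confidence threshold are our own illustration, not part of the project's code.

```python
# Sketch of a Cloud Vision label request, assuming the google-cloud-vision
# client library is installed and GOOGLE_APPLICATION_CREDENTIALS is set.
# filter_labels and its threshold are illustrative, not the project's code.

def filter_labels(annotations, min_score=0.5):
    """Keep only (description, score) pairs at or above a confidence threshold."""
    return [(desc, score) for desc, score in annotations if score >= min_score]

def annotate_image(path):
    """Upload an image and return its lower-cased labels with confidences."""
    # Imported here so the pure helper above can be used without the library.
    from google.cloud import vision
    client = vision.ImageAnnotatorClient()  # reads the credentials JSON
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description.lower(), label.score)
            for label in response.label_annotations]
```

A caller could then pass `annotate_image("item.jpg")` through `filter_labels` to discard low-confidence tags before making a sorting decision.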
According to The Federal Ministry for the Environment, Nature Conservation and Nuclear Safety, the five methods for establishing waste management and recycling are “prevention, preparation for reuse, recycling, recovery, and disposal” [6]. With a focus on prevention and recycling, the clean technology used in this device is making significant changes to recycling rates and has the ability to make a difference across an entire community.

5. Design Description

Image 1: Final Product

The final product is an enhanced trash can cover that fits over two "slim jim" trash cans placed side by side. The main structure of the sorting machine is made of wood reinforced with screws. The electronic components, including the Raspberry Pi, the PiCamera, and all of the circuitry, are housed in a box located on top of the main platform. The sorting platform is also made of wood, with a quarter-inch layer of plastic on top that protects the wood and the integrity of the platform. The platform rotates to +45° and -45° via a servo motor that is connected to and controlled by the Raspberry Pi.

Subsystems:

1. PIR Sensor/Button: A Passive Infrared (PIR) sensor was originally used to determine when an object was placed in the device. It has a six-foot detection range and activates when there is a change in heat within this range [20]. As revealed in testing, the PIR sensor was not as reliable as the team had hoped, so it was replaced with a more efficient button. The functional requirements now require the user to press a button after placing an object in the box, which triggers the code to run.

2. Raspberry Pi: A Raspberry Pi is a small computer that can be programmed to do almost anything the user desires, such as running code, using the internet, or machine learning. This made it a useful tool for a project like this one, which needs its own operating system. A monitor, keyboard, mouse, HDMI cable, and the supplied power cable are needed when customizing the Raspberry Pi. When first plugged in it had to be booted, and Raspbian was chosen as the operating system because it was advertised as the most useful and versatile. The Raspberry Pi's GPIO pins connect the servo motor and button, accompanied by a breadboard [Appendix 6].

PiCamera: A PiCamera is specifically designed to take photos when powered by a Raspberry Pi [16]. The PiCamera ribbon slides directly into a slot on the Raspberry Pi designed for it. The camera and SSH were enabled in the "Raspberry Pi Configuration" menu upon first boot, and importing the "picamera" module in Python enabled its usage [21].

SD Card: For a Raspberry Pi to first boot, an SD card must be inserted with NOOBS downloaded onto it. NOOBS is the installer used to set up the operating system, Raspbian. The SD card also stores the sorting code.

WiFi: To communicate with the Google modules for this project, the Raspberry Pi was connected to the internet using Wi-Fi. This allows for more flexibility in its location, and the connection is made by running a few simple commands through the terminal. The "nano" command was used to access the "wpa_supplicant.conf" file, and the contents of the file were changed to connect to the "eduroam" network, as shown in Appendix 4.
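For reference, an eduroam entry in "wpa_supplicant.conf" generally takes a WPA-Enterprise form like the sketch below. The exact EAP settings vary by institution, and the identity and password shown are placeholders rather than the team's actual values (their file is shown in Appendix 4):

```
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=PEAP
    phase2="auth=MSCHAPV2"
    identity="username@gwu.edu"
    password="********"
}
```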

Remote access: The code on the Raspberry Pi is run via remote access from a separate computer. With the Raspberry Pi turned on and connected to Wi-Fi, "ssh -l pi" was run in the terminal window of another computer. The Raspberry Pi's password was entered, and then any code on the Pi could be executed from the remote computer. The sorting code only needs to be opened once in this way, and it will continue to loop every time the button is pressed until the script is interrupted [26]. Remote access also allows the code to be opened and edited without attaching a monitor and keyboard to the Raspberry Pi.

3. Google sorting code: The innovative technology of this project harnesses Google Cloud to determine an item's recyclability. The code, seen in Appendix 3, follows the pseudo-code flowchart seen in Image 2. A connection with Google's API is established, the image captured by the PiCamera is sent to the server, and labels describing what is contained in the picture are returned. Inclusion parameters were chosen by the team based on what objects are recyclable on The George Washington University's campus. Exclusion parameters are checked first, followed by the inclusion parameters. Finally, the code designates the object as recyclable or trash.


Image 2: Pseudo flow chart

First, the code accesses Google's API and Cloud Vision and ensures a reliable connection, making sure information and photos from the Raspberry Pi can be uploaded to Google and the annotations can be downloaded [11]. Lines 1-2 of the code (Appendix 3) import the default packages for the Python script. To access Cloud Vision's annotation feature, the "types" module must be imported. The credentials are then checked to ensure proper permissions. If permissions are not verified, or if the system cannot access Google, an error is returned.

Next, the image is uploaded to Google's API. The location of the image taken by the camera is declared in line 8 and opened in line 9. Line 10 tells Google to annotate the image; the annotations are received in line 11 and saved as a list in line 12. In line 13 the variable "recyclable" is declared for the first time and set equal to 1. By the end of the code, this variable will be 1 if the object is recyclable and 0 if it is not. The variable starts at 1 because the exclusion parameters are checked first, and if "recyclable" changes to 0 during that check, the item is automatically not recyclable. The only exclusion parameter introduced is 'food', because even recyclable material with food waste is no longer recyclable. The loop in lines 14-20 checks each individual tag for the exclusion parameters; on a match it sets "recyclable" to 0 and breaks out of the loop. Otherwise, after checking all the tags for the exclusion parameters, the code moves on to the inclusion parameters. The tags are checked in lines 21-30 for the inclusion parameters, which were determined through label testing to see how recyclable items are typically labeled, compared with DC regulations for recyclable items. 'Can', 'drink', 'bottle', and 'glass' are all labels of recyclable items easily identified by the API. Water bottles and other clear plastic bottles return 'water'.
'White' was included because that is the label returned by clean paper, and 'beverage' was included because empty paper drink containers and other recyclable items are labeled as such. A new variable, "test", is used to check the inclusion parameters; this way, if the item contains food, the code can skip the inclusion test entirely. "Test" is set to 1 if any match is found between the image tags and the inclusion parameters, and 0 otherwise. Finally, the code combines the two checks: the item is designated recyclable only if "recyclable" is still 1 (no food was found) and "test" is 1 (an inclusion parameter was matched); otherwise it is sorted as trash.

Google Modules: Google has a vast library of packages that can be downloaded onto a Raspberry Pi. Modules including "google", "google-cloud", and "google-cloud-vision" were installed using the "pip install" command for this project [19]. This is a crucial section, as it is the technology that powers the design of the project. Supplemented with the credentials, this let the Raspberry Pi use the Google packages, including Google Cloud Vision.
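The exclusion-then-inclusion logic described above can be condensed into a short pure function. This is a reconstruction from the description (the actual code is in Appendix 3), with the parameter lists taken from the text:

```python
# Reconstruction of the sorting decision described in the report; the
# real code is in Appendix 3. Parameter lists come from the text above.
EXCLUSION = {"food"}
INCLUSION = {"can", "drink", "bottle", "glass", "water", "white", "beverage"}

def classify(labels):
    """Return 1 (recyclable) or 0 (trash) for a list of Cloud Vision labels.

    Exclusion parameters win: any 'food' tag makes the item trash, even if
    recyclable tags are also present. Otherwise the item is recyclable only
    if at least one inclusion parameter appears among the labels.
    """
    tags = [label.lower() for label in labels]
    for tag in tags:
        if any(word in tag for word in EXCLUSION):
            return 0  # contaminated: never recyclable
    if any(word in tag for tag in tags for word in INCLUSION):
        return 1
    return 0  # no inclusion parameter matched
```

For example, a label set like `["Water bottle", "Drink"]` would be sorted as recyclable, while `["Bottle", "Food"]` would be sorted as trash because the exclusion check runs first.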


Credentials: In order to use these Google technologies, credentials must be created. Credentials register the user with Google and give the ability to locally utilize the APIs that Google makes available. On the Google Cloud Console website, the team started a new project and created a "Service account key" in JSON format [18]. The role of "owner" was chosen because it gives the user the most freedom for different uses of the file. This JSON file contains the credentials from Google and needed to be saved in the same folder as the code being run by the Raspberry Pi [Appendix 5].

4. Servo motor: A servo motor was used as the driving force for rotating the sorting platform. A LewanSoul LD-27MG standard digital servo motor was selected because of its large torque capacity, which was necessary to rotate the entire sorting platform along with any objects placed on it. The servo was sandwiched between two pieces of wood so that all rotational force went exclusively into rotating the platform and the motor stayed in place. The servo also needed a 5-7V power supply, provided by a battery pack of four AA batteries supplying 6V.
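As a rough plausibility check of the torque requirement, hypothetical numbers can be run through the basic lever-arm relation. The item mass below is an illustrative assumption rather than a measured value, and the LD-27MG's rated torque (around 20 kgf·cm) is quoted from typical vendor specifications, not the report:

```python
def required_torque_kg_cm(item_kg, platform_len_cm):
    """Worst-case static torque, in kgf*cm, for an item at the platform edge.

    The platform pivots about its central axle, so its own weight is roughly
    balanced; the dominant load is the item's weight acting at a lever arm
    of half the platform length (item sitting at the far edge).
    """
    return item_kg * platform_len_cm / 2.0

# A hypothetical 0.5 kg bottle at the edge of the 14'' (~35.6 cm) platform
# needs about 8.9 kgf*cm, comfortably within a ~20 kgf*cm servo rating.
print(required_torque_kg_cm(0.5, 35.6))
```

This margin also leaves headroom for friction in the rod supports, which the simple lever-arm model ignores.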

5. Removable can lid: The lid of the cans was constructed out of wood to contain all the electronics as well as the rotating platform. A 22'' by 20'' box was built out of plywood, and a shelf was installed on one side to support the electronics. The lid was designed so that it can be moved to any set of "slim jims" around campus: it slides on easily and sits sturdily on top, but is easily removed to change location or for trash removal [Appendix 7].

Electronics box: A smaller box with only two sides and a lid, measuring 7.5'' by 4'', was constructed to contain the Raspberry Pi, breadboard, and battery pack, and to hold the camera in its proper place. This keeps the electronics out of the user's way to ensure a continued connection, and was added for aesthetics [Appendix 7].

Rotating platform: An item placed into the lid rests on a platform, which rotates after the code determines recyclability. The platform tilts 45˚ either way to tip the object into the designated can. It is constructed out of a piece of plywood cut to 11'' by 14'', with a sheet of 0.25-inch polyethylene glued to the top to encourage items to slide off. The servo motor is attached to a 0.25-inch-diameter steel rod, which is welded to a flat piece of metal screwed into the plywood. The rod is supported on the servo side by a piece of wood screwed into the platform, and on the other side by a hole cut in the side of the lid [Appendix 7].

6. Evaluation & Testing

Testing for the design was mostly done using the Python code and physical tests. Each subsystem was tested individually, since each subsystem had to work for the overall project to succeed. Then the subsystems were tested together to ensure that they worked with one another. Testing was done throughout the stages of the project, with the majority of the final touches occurring once the whole project was assembled.

1. PIR Sensor/Button: The original design used the PIR sensor to activate the PiCamera. The testing phase for this required only the sensor, the Raspberry Pi, the PiCamera, and the code [22]. The code was run, and multiple team members acted out placing their garbage in front of the system; the code then output "motion" and "picture taken". An issue encountered during testing was that it took about 5 seconds after the actual movement for the code to read "motion", then another 12 seconds before "picture taken". Due to this poor performance, the team elected to remove the PIR sensor and install a button in its place. The same components were used for button testing [Appendix 6]. This time, when the button was pressed, the code output "pressed" and "picture taken" within 1 second of the press. This led the group to keep the button for its speed, accuracy, and reliability [24].

2. Raspberry Pi: The Raspberry Pi and its boot were tested on each use; it worked properly and turned on whenever a properly formatted SD card was inserted.

PiCamera: The "raspistill" command is used in the terminal to test the PiCamera [23]. This command confirms the camera is working properly by saving a photo on the Pi without having to run the code.

WiFi: The Wi-Fi was tested by connecting to the school's eduroam network using the code shown in Appendix 4. The ethernet cables were removed from the system, and the system remained connected to the internet, as shown in the top right-hand corner of the monitor.

3. Google sorting code: Work on the Google sorting code included making the Google modules work and activating the credentials. The sorting code was originally tested on a remote computer: the group ran the code on photos stored on the computer, read the five returned labels, and ended up with a high percentage of accuracy. When the full module was built, further testing was done with the rotating platform. The testing process consisted of putting objects in front of the camera, pressing the button, and observing which direction the platform turned. Results were recorded and can be seen in Appendix 8. The results showed a contamination rate of less than 30%, being

closer to 15%, which met the team's original goal. Some results and the corresponding image are displayed in Image 3.

Image 3: PiCamera Photo with Sorting Code Output

4. Servo motor: The servo motor was originally tested individually with just the code. A random number generator chose between 0 and 1, for trash and recyclables respectively. Issues came up in the first trials because an Arduino was used: the Arduino code did not work and had difficulty connecting to the Raspberry Pi code and receiving commands. The system was rewritten and rewired to work solely with the Raspberry Pi, because this required fewer electronics and was quicker to operate. The test consisted of generating the number and then ensuring the servo turned the correct direction and degree for that number. This had a 100% success rate. The team noticed during testing that the servo had trouble turning the platform due to its weight, so the battery pack was added to provide more power for the servo to move the platform. The servo, accompanied by the battery pack, always turned in the correct direction for the code's 0 or 1 output.
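The 0/1-to-rotation step can be sketched as two small pure functions using the common 50 Hz hobby-servo timing (1-2 ms pulses). The constants are generic assumptions rather than the project's measured calibration; on the Pi, the resulting duty cycle would typically be fed to something like RPi.GPIO's `GPIO.PWM(pin, 50)` and `ChangeDutyCycle`:

```python
def decision_to_angle(recyclable):
    """Map the sorting decision to a platform tilt:
    0 = trash -> -45 degrees, 1 = recyclable -> +45 degrees."""
    return 45 if recyclable else -45

def angle_to_duty(angle_deg):
    """Convert -90..+90 degrees to a duty cycle (%) for a 50 Hz servo signal.

    A 50 Hz frame is 20 ms; typical hobby servos expect pulses of 1 ms to
    2 ms (5%..10% duty), with 1.5 ms (7.5%) at center. Exact endpoints vary
    by servo, so these constants are generic assumptions.
    """
    pulse_ms = 1.5 + (angle_deg / 90.0) * 0.5
    return pulse_ms / 20.0 * 100.0
```

Keeping the conversion pure like this lets the direction-and-angle logic be checked without any hardware attached, which mirrors the team's code-only servo test.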

5. Removable can lid: The lid fits on top of two "slim jim" trash cans when they are placed side by side. The lid was put on and removed many times throughout the testing process, demonstrating how easily it can be moved to other "slim jims".


Rotating platform: The rotating platform was also tested while attached to the servo. The weight the rotating platform could handle was calculated and tested before being implemented. The testing was based on materials that would be disposed of on The George Washington University's campus, with data provided by GW Zero Waste.

Overall, the testing proved to be a success. The results of the completed tests are shown in Appendices 8 and 9. They show the percent contamination in both the waste and recycling streams tested by the device. As shown, the results have less contamination than the current contamination at GWU shown in Appendices 1 and 2. The contamination rate in the project's recycling bin was low, around 15%, while the waste stream contained about 35% recyclables. This is an acceptable rate because keeping trash out of the recycling bin is what matters most. The project continued to lower its contamination rate with further testing, and with continued refinement would hopefully approach little to no error.
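The contamination figures quoted here are simple proportions. As a worked example (the counts below are illustrative, not the team's raw data from Appendix 8):

```python
def contamination_rate(misplaced, total):
    """Fraction of items in a bin that do not belong in that stream."""
    if total == 0:
        return 0.0  # an empty bin has no contamination
    return misplaced / total

# e.g. 3 trash items found among 20 items in the recycling bin -> 15%
print(contamination_rate(3, 20))  # -> 0.15
```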

7. Summary and Recommendations

FR 1. Objects are placed in the device one at a time.
Comparison: Necessary and achieved.

FR 2. Identify placed objects as recyclable or trash.
Comparison: The code outputs recyclability and sorts via the platform.

FR 3. Sort these objects into two separate bins.
Comparison: The platform tilts to 45˚, allowing items to fall into one bin or the other.

FR 4. Less than 30% contamination rate in the recycle bin.
Comparison: Our tests have revealed a contamination rate of 10-15%.

FR 5. Reduce recycling contamination rate.
Comparison: Our ~15% is less than the >30% contamination rate seen on campus.

FR 6. Aimed at GW's campus and indoor trash cans.
Comparison: Designed to be located indoors and fitted to on-campus "slim jims".

Image 4: Functional Requirement Comparison

The strengths of the project came from its code. Google Cloud Vision is a platform that labels images and allows the user to categorize them, and the team was able to adapt it for the specific purpose of identifying recyclability. By using the Raspberry Pi alone instead of in conjunction with an Arduino, the entire code could be written in Python. Google provides resources with suggestions on how to implement its different modules, and combining these suggestions with the team's own intuition and additional research led to code that works successfully every time it is executed. Python scripts are simple to access remotely, which makes the setup easy to initiate each time. The button, PiCamera, and servo motor all integrate fully with the Pi, which means the code can continue to loop as many times as the user presses the button.

Another strength of this project was how inexpensive it was to create. Due to the maximum budget of $600, the design was heavily streamlined to be as cost-effective as possible. All of the construction materials were carefully chosen to be as cheap as possible while still being sufficiently strong and sturdy. The most costly components ended up being the electronics, but even these were carefully chosen so as not to consume too big a portion of the budget. The total cost of the project was $156.88, well underneath the $600 budget [Appendix 10]. This was a big accomplishment for the team, since it proved that a working product could be made without spending large sums of money or going over budget.

One weakness of the project is the slightly asymmetrical nature of the pieces of wood that were cut to build the base and the sorting platform. These slight imperfections resulted from rushed manufacturing due to limited resources and machine shop time; while they do not affect the function of the product, they do affect its aesthetics, which is an important part of a product that will be on public display. Another aspect that affected the end product was the choice of construction materials. As stated above, the main structure was constructed out of plywood for ease of manufacturing. Ideally the final product would be constructed mainly from plastics and metal, which would involve a more difficult manufacturing process but would ultimately result in a better end product.

If this design were pursued in the future, one improvement would be the addition of a motion sensor. This project originally used a PIR sensor to activate the camera. The PIR sensor, which activates by sensing a change in heat, proved difficult for the user to trigger; since the device was designed for indoor use, it was hard for the sensor to detect a small heat increase. The group then elected to use a button, which was more reliable, quicker to activate the camera, and more user-friendly. This worked for the project in its current state, but if development were to continue, the professor and the customer recommended less user interaction. The project would therefore need a sensor able to detect motion reliably, which would require a higher budget than the one available for this iteration of the device.

The goal of this project was to combat the high percentage of contamination in The George Washington University's recycling stream. A self-sorting recycling machine such as this one would eliminate human error by allowing all recycling decisions to be made via artificial intelligence. The self-sorting recycling machine was measured to allow about 15% contamination into the recyclable container, which is significantly lower than both the current numbers on GW's campus and the functional-requirement goal. The potential for Smart technology to integrate into the recycling world is growing, and it can certainly reduce errors inherent to the current recycling process. For tackling the issue of recycling on GW's campus, a self-sorting recycling machine would make significant progress in reducing contamination within the recycling stream, and it proved more effective than consumers deciding the fate of their own waste items.

8. References

Patents

1. Mallet, Scott R., et al. "US8560460B2 - Automated Waste Sorting System." Google Patents, Google, Sept. 2003, patents.google.com/patent/US8560460.

2. Becher, Yona, and Daniel M. Lemieux. "US5447017A - Automatic Waste Recycling Machine and Disposal System." Google Patents, Google, Sept. 1995, patents.google.com/patent/US5447017A/en.

3. Bergler, Frank. Application Program Interface. US 5572675 A1, United States Patent and Trademark Office, 5 Nov. 1996.

Similar and Existing Products

4. "R3D3 Smart and Connected Sorting Bin." Green Creative, www.green-creative.com/en/r3d3-sorting-bin#presentation.

5. Brown, Bruce. "Oscar the A.I. Trash Can Sorts Recyclables From Garbage." Digital Trends, 12 July 2018, www.digitaltrends.com/home/oscar-ai-trash-can-sorts-recyclables-garbage/.

Recycling Data/Information

6. "Waste Management and Recycling." Federal Ministry for the Environment, Nature Conservation and Nuclear Safety, 2016. Web. 3 April 2019.

7. "Colonial Composting Pilot Program | Sustainability at GW | The George Washington University." George Washington University, 29 Aug. 2016, sustainability.gwu.edu/gw-roadmap-zero-waste.

8. Treat, Jason, and Ryan Williams. "We Depend On Plastic. Now, We're Drowning in It." National Geographic, 16 May 2018, www.nationalgeographic.com/magazine/2018/06/plastic-planet-waste-pollution-trash-crisis/.

9. SCS Engineers. “Summary Report of GW University Waste Characterization-Spring 2018” GW Waste Audit Report. 2018.

Image Recognition Code Help/Sources

10. "Application programming interface." Wikipedia, Wikimedia Foundation, 16 Nov. 2018, en.wikipedia.org/wiki/Application_programming_interface.

11. “Artificial Intelligence.” Wikipedia, Wikimedia Foundation, 16 Nov. 2018, en.wikipedia.org/wiki/Artificial_intelligence.

12. "Machine learning." Wikipedia, Wikimedia Foundation, 11 Nov. 2018, en.wikipedia.org/wiki/Machine_learning.

13. "ReCAPTCHA." Google, www.google.com/recaptcha/intro/v3.html#the-recaptcha-advantage.

14. Lohr, Steve. “Google and I.B.M. Join in ‘Cloud Computing’ Research.” California State University at Northridge, New York Times, 8 Oct. 2007, www.csun.edu/pubrels/clips/Oct07/10-08-07E.pdf.

15. Abadi, Martín. TensorFlow: A System for Large-Scale Machine Learning. USENIX, 2 Nov. 2016, www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf.

16. Frost, Dan. "Image Recognition on the Raspberry Pi." Linux Format 07 2018: 55-8. ProQuest. Web. 18 Nov. 2018.

17. API Evangelist: The API Journey, Or What Is the Point of an API, by Tony Hirst (@psychemedia). Chatham: Newstex, 2015. ProQuest. Web. 18 Nov. 2018.

18. Grancharov, Constantine. "Custom and Open Source Code: A New Approach to Application Security Management." Security Intelligence, 4 Feb. 2016. Web. 18 Nov. 2018.

19. "Vision API Client Libraries | Cloud Vision API Documentation | Google Cloud." Google, cloud.google.com/vision/docs/libraries.

Raspberry Pi and Arduino Information

20. "How PIRs Work." Adafruit. URL: https://learn.adafruit.com/pir-passive-infrared-proximity-motion-sensor/how-pirs-work [cited 28 January 2014].

21. S. Nisenzon and K. Venkataraman. US patent for a “Camera Modules patterned with pi filter Groups” Docket No. 20130293760, filed 1 May 2012

22. "Using the HC-SR501 PIR Motion Sensor." DroneBot Workshop. URL: https://dronebotworkshop.com/using-pir-sensors-with-arduino-raspberry-pi/ [cited 18 November 2018].

23. "Getting Started With Picamera." Raspberry Pi Projects. URL: https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/4 [cited 18 November 2018].

24. Leon, Jose. "Raspberry Pi and Arduino Uno Working Together." International Journal of Computer Science & Information Technology (IJCSIT), Vol. 9, No. 5, October 2017. URL: https://arxiv.org/ftp/arxiv/papers/1711/1711.09750.pdf [cited 18 November 2018].

25. Jones, D. "Picamera." Picamera. URL: https://picamera.readthedocs.io/en/release-1.10/recipes1.html [cited 1 November 2018].

26. Gokilavani, Navaneethan. "Raspberry Pi Based Robot With Cloud Technology." Sri Shakthi Institute of Engineering and Technology, Coimbatore, India, Oct. 2016: Volume 6, Issue No. 5. Ijesc.org. Web. 18 Nov. 2018.

9. Appendices

Appendix 1: Contamination in the Recycling Stream at GWU [9]


Appendix 2: Recyclables in the Waste Stream at GWU [9]


Appendix 3: Google Cloud Vision Code including Exclusion Parameters

Appendix 4: EDUROAM Connection Code


Appendix 5: Google Cloud Credential Key

Appendix 6: Circuit Diagram of Final Product


Appendix 7: Final AutoCAD Drawings with Dimensions


Appendix 8: Testing Results of Waste and Recyclables in Respective Bins

Appendix 9: Material Breakdown during Testing


Item (Description) Cost

Raspberry Pi and Button $34.99

PiCamera $11.99

PIR Sensor $11.55

Servo motor and battery pack $32.00

Screws $4.99

Wood exterior $40.38

Plastic shell $20.98

Total Cost: $156.88

Appendix 10: Final Budget
