
The Compassion Project

Mitchell Black, Kyle Melton, and Amelia Getty

25 April 2019

Abstract

The Compassion Project is a public, collaborative art installation sourced from approximately 6,500 artists around Bozeman. Each participating artist painted an 8”x8” wooden block with their own interpretation of compassion. To accompany their art, each artist wrote an artist statement defining compassion or explaining how their block relates to compassion.

We created a mobile app as a companion to the installation. The primary use of the app is to let visitors look up a block's artist statement on their phone using the unique ID number assigned to each block. Additional app functionality includes favoriting blocks, personal viewing history, and usage statistics.

We also performed a comparative analysis of image descriptors for the blocks. We were interested in the idea of using a picture of a block to search for that block's artist statement. We have an image of each block that is used as a thumbnail when users look up a block. To evaluate which image descriptors are most effective at matching the blocks, we took secondary photos of a subset of the blocks and tested whether the descriptors could pair the thumbnail of a block with our photo of the same block.


Qualifications

See attached resumes.


Amelia Getty
Bozeman, MT 59718 ⋄ 406-580-3154 ⋄ [email protected] ⋄ linkedin.com/in/ameliagetty

Computer Science senior with diverse experience poised to transition to software development or engineering. Organized and dependable. Aptitude to learn quickly and work independently and with a team.

PROGRAMMING EXPERIENCE

Java ⋄ PHP (Laravel) ⋄ C ⋄ Python ⋄ C++ ⋄ SQL
Linux Systems Admin ⋄ Databases ⋄ Graphics ⋄ Networks ⋄ Security (2019)

EDUCATION

♢ Computer Science. Montana State University, 2017 - 2019

♢ BA Modern Languages and Literatures, German (+ Pre-vet). Montana State University, 2007 - 2013

♢ Electrical Engineering. Universität Stuttgart, semester abroad 2010

PROFESSIONAL PROFILE

Leadership ⋄ Web Development ⋄ Clerical ⋄ Collections ⋄ Records Maintenance ⋄ Customer Service ⋄ Maintaining Inventory ⋄ Stocking ⋄ Filing ⋄ Data Interpretation

♢ Founder, DevOps; artfight.net, 08/2015 - present

Programmed a web app with a small group of coders and managed the remote Linux server using the LAMP stack and Laravel. Led a small team of volunteer moderators.

♢ Office Specialist; Michaels Arts and Crafts, 08/2013 - 12/2015

Promoted from replenishment associate on the recommendation of my peers. Maintained store sales, HR, and payroll records. Prepared the daily store bank deposit. Maintained inventory of store-use items and supplies. Facilitated inbound direct freight shipments and paperwork, and provided superior customer service.

♢ Replenishment Associate; Michaels Arts and Crafts, 08/2012 - 08/2013

Unloaded trucks. Organized shelves and product. Set up advertising signs and cashiered.

♢ BOREALIS Intern; Montana Space Grant Consortium, Montana State University, Summer 2008

Accepted as an MSGC intern from a pool of applicants with high recommendations. Launched and recovered high altitude balloons. Designed and maintained on-board experiments and data collection devices. Interpreted collected data.

COMMUNITY INVOLVEMENT

♢ Heart of the Valley Animal Shelter, Bozeman, MT (25 hrs, 2014-2015)

♢ Camp Husky Project Spay/Neuter Clinic, Butte, MT (2008)

♢ Bioneers Conference, Bozeman, MT (2008)

♢ Physics Tutoring, MSU, Bozeman, MT (2008)

LANGUAGES

♢ English

♢ German

KYLE MELTON
200 Gallatin Hall – Room 412A
Bozeman, MT 59715
760-914-2476
[email protected]
linkedin.com/in/kyle-melton/
.com/Mammothskier/

EDUCATION

Montana State University, Bozeman, MT, August 2015 – May 2019
– B.S. in Computer Science and Computer Engineering
– GPA: 3.14
– Courses in Networks, Software Engineering, Logic Design, and Linux Systems
– Proficient in Java, Python, C, SQL, HTML, CSS, and JavaScript

WORK EXPERIENCE

Software Intern, Blackmore Sensors and Analytics, Bozeman, MT, June 2018 – Present
– Worked within multidisciplinary team to complete product development goals
– Developed software within an agile development model
– Used development tools such as Git and Google Test

RELEVANT PROJECTS

Bridger Solar Team, June 2018 – Present
– Designed communication system to remotely monitor solar car
– Led team of software and computer engineers to complete weekly objectives
– Developed code to gather GPS, IMU, and battery protection system data

Bridger Robotics Team, December 2016 – Present
– Programmed mining robot that competed in the NASA Mining Competition
– Developed software to display telemetry data
– Integrated known software patterns into existing code base

Giants Minecraft Plugin, February 2014 – November 2016
– Created open source project to add functionality to base game
– Provided technical support and continuous updates based on user demand
– Downloaded by 28,000 users

LEADERSHIP

Founding President, oSTEM at Montana State University, December 2017 – Present
– Created MSU student organization to support LGBT students
– Led an officer team to complete club objectives
– Awarded Lavender Leader Award in May 2018

Mitchell Black
201 South 11th Ave Apt 23
Bozeman, MT 59715
406-491-4194
[email protected]

Education

Montana State University, Bozeman, MT
Bachelor of Science, Computer Science and Mathematics, May 2019, GPA: 3.94

Relevant Experience

Senior Project, The Compassion Project, Sep 2018 - Present
I am part of a team of three students that built a mobile app for the local art project The Compassion Project. The Compassion Project had thousands of community members paint a wooden block on the theme of compassion and write an artist statement on how their painting relates to compassion. Our app allows visitors to the exhibit to look up artists’ statements for each block, as well as to favorite blocks and access their viewing history.

Summer Intern, Los Alamos National Laboratory, May - Aug 2018
I interned with the Filesystems Team in the High Performance Computing division. My project was to write scripts to gather data from the Lustre filesystems on LANL's computing clusters. I also made dashboards in Splunk, a data visualization tool, to allow cluster administrators to monitor the current state of the filesystem and alert them to problems.

USP-Funded Undergraduate Research, Montana State University, Jan - May 2018
I conducted research on the Minimum Road Trips problem, an NP-hard graph problem. I primarily researched related problems in graph flow and flow decomposition.

Other Recent Work Experience

Student Custodian, Montana State University, Jan 2019 - Present

Student Data Entry Employee, Montana State University Alumni Foundation, Sep 2015 - May 2018

Honors and Awards

MSU Mathematics Department Outstanding Scholar Award, Spring 2017 and 2018
COMAP Mathematical Modeling Competition: Honorable Mention, 2018 and 2019
Montana Mathematical Modeling Competition: Finalist, Presentation Portion, Oct 2017
Upsilon Pi Epsilon Computer Science Honors Society, Spring 2018
Pi Mu Epsilon Mathematics Honors Society, Spring 2017
Montana University System Honors Scholarship, 2014

Background

The Compassion Project is a program designed to bring the Gallatin Valley together by studying compassion. Each participant in the program receives an 8x8 inch wooden block that they can use to express their interpretation of compassion. Once completed, the wooden blocks are displayed at several sites around the Bozeman community. There are 6,000 painted blocks, and each of these has an accompanying artist statement. Because space is limited and the blocks are plentiful, an app is needed to provide visitors with each block's artist statement and installation site.

The Compassion Project app can be viewed as an automated tour guide, in that it gives visitors to the exhibit the ability to get information on each piece independently. Prior to smartphones, many museums used cassettes or other audio players as automated tour guides, with numbered tracks that let museum-goers get information on individual exhibits by playing the corresponding track. Nowadays, many museums have developed smartphone apps that serve as automated tour guides. These apps use a variety of technologies to provide users with descriptions of artworks. A notable example is “My Visit to the Louvre,” an application that provides users with audio guides and written descriptions of pieces in the museum and suggests exhibits to the user. This app provides a lookup feature where museum-goers can look up works by an ID number to get a description of the piece [1]. This app is most similar to the Compassion Project app, but other museums have taken different approaches. One such approach is SFMOMA's app “SFMOMA Audio,” which uses the GPS location of museum-goers to determine where they are in the museum and to provide an audio description of the artwork they are standing in front of [2]. Another approach comes from a paper out of Stanford, which explores using image recognition to let users take a picture of a painting and receive a description of the painting in the picture [3].

We chose a simpler approach as a matter of triage. It would be only marginally more convenient for the user to take a picture of a block to get a description than to search by number, but it would be significantly more difficult to implement the former approach than the latter. Because of this, we felt we could improve our app much more by implementing a simple search strategy and focusing on other components of the app than by implementing a difficult search strategy.


We were still interested in the problem of allowing visitors to The Compassion Project installation to look up the artist statement for a block simply by taking a picture of the art rather than searching by number, even if this feature did not make the final version of the app. To do such a lookup, a way of identifying a specific block from a picture is needed. [3] explored the problem of reverse image search specifically in the setting of art galleries. They used eigen-image feature vectors; however, this requires resizing our images to squares, whereas keypoint descriptors take input images of any proportions. [4] suggests a feature vector that uses the local color distribution of the image. This method is not size dependent and is fairly simple; however, the paint on the blocks is fairly glossy and produces glare in photos. We implemented our lookup using keypoint descriptors. OpenCV, an open source computer vision library, has many built-in keypoint descriptors [5]. We performed a comparative analysis of these descriptors.

This Software Factory project is sponsored by Dr. Kayte Kaminski of the College of Education, Health and Human Development, who is the current executive director of The Compassion Project. We successfully deployed a fully working app on both the Google Play and Apple App stores in time for the exhibition of the Compassion Project on April 15th, 2019.

Work Schedule

Responsibilities

Most of the work for this project was coding the app. All of us were about equally equipped to handle this, so we deferred assigning responsibility for portions of the app until those parts of development started. Portions of the app were assigned to individuals as needed; since we used an Agile development cycle, it was simple enough to assign individual responsibilities on the fly. However, there were certain areas that some of us were better equipped to handle than others. We assign those responsibilities now.

● Mitchell Black: Image recognition, iOS deployment
● Kyle Melton: Scripts for uploading data to the database, Android deployment
● Amelia Getty: UI, Database design


Milestones

29 November 2018: Final proposal due, design finished
15 March 2019: App accepted to the Apple App Store
29 March 2019: App accepted to the Google Play Store
15 April 2019: Exhibition of the Compassion Project, app in use

Lifecycle

We used an Agile lifecycle approach. The goals of Agile development are to deliver working software as quickly as possible and to adapt to change. The first goal was especially important for us because of the hard April 15th deadline for the app to be complete. The iterative approach fit the size of our team, the complexity of the app, and the short timeline. We also left some time for solving problems that came up later during development, based on feedback from our beta testers and the director of The Compassion Project. With this in mind, Agile development made the most sense for this project.

We used two-week sprints for our development process. Each sprint was dedicated to a certain element of our app; for instance, our first sprint was dedicated to AWS setup. A Gantt chart with a rough outline of our work schedule is included below. Because of the early deadline for the app, our sprints prior to February 1st were focused on app development. After we submitted to the app stores, our focus shifted somewhat to the image recognition portion of our project; however, more time than anticipated was needed to debug the app and submit updated versions to the app stores.


Proposal

Functional and non-functional requirements

Below we list the functional requirements for our app.

● Search for a block by block number
● Fetch block thumbnail from Amazon S3
● Fetch block information from DynamoDB
● Record previous searches and provide these to the user
● Favorite a block and provide the user their favorited blocks
● Record favorite statistics in DynamoDB
● Record view statistics in DynamoDB
● Provide an About screen with information about The Compassion Project
● Provide a Sponsors screen with a list of Compassion Project sponsors

Below is a list of non-functional requirements.

● Easy to use
● Clean and clear user interface design
● Low cost for cloud services and other fees
● Lightweight


A significant non-functional requirement is being able to afford the services that the app relies on. We examine this requirement now. These services are the Apple annual developer fee, the Google Play Store listing fee, and the AWS service fees. Fortunately, these fees are not particularly large and are well within the budget of the Compassion Project, and we were able to get two of the three services for free. Below are the costs of the project.

Service                          Final Cost
AWS service fee                  Free with education credits
Apple App Store listing fee      Free for nonprofits
Google Play Store listing fee    $25 one-time fee
Total                            $25

Performance requirements

The Compassion Project app is decidedly not computationally taxing. The biggest performance bottleneck for our app is downloading a thumbnail of the block from an AWS server to be displayed alongside the artist's statement. The thumbnail photos were taken on an iPhone, which takes photos at a higher resolution than needed for our application. To speed up load times, we resized all the images to 500x500 pixels. This significantly decreased wait time for the Search screen. The History and Favorites screens load many thumbnails at once, which also creates long wait times. The image displayed on these screens is much smaller than the image displayed on the Search screen, so we created a second copy of each image at 250x250 pixels to be used there. This approach significantly improved wait times and UX.
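The resizing itself was a one-time preprocessing step. Below is a minimal sketch of what such a batch resize could look like, assuming Python with the Pillow library; the directory names and JPEG quality setting are illustrative assumptions, not our exact script.

# Hypothetical batch-resize sketch (directory names and quality setting are assumptions).
from pathlib import Path
from PIL import Image

SRC = Path("originals")                      # full-resolution iPhone photos
SIZES = {"thumbs_500": (500, 500),           # Search screen thumbnails
         "thumbs_250": (250, 250)}           # History and Favorites thumbnails

for out_name, size in SIZES.items():
    out_dir = Path(out_name)
    out_dir.mkdir(exist_ok=True)
    for photo in SRC.glob("*.jpg"):
        img = Image.open(photo)
        img.resize(size, Image.LANCZOS).save(out_dir / photo.name, quality=85)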

Interface requirements

The main features of our app are the ability to search for blocks, view these blocks, favorite blocks, view favorited blocks, and view search history. Our app consists of five stacks and six screens that allow the user to perform these and other tasks. How these components fit together is explained in the Architectural Design and Methodology sections; this section focuses on how the user interacts with each screen.


The main screen is the Search screen, which allows the user to search for a block. The Search screen also features prominent buttons for quick navigation to the About and Sponsors screens. When the user searches for a block, they are directed to the Block screen. The Block screen displays the artist statement and information about the artist to the user. The user can also favorite the block on the Block screen. The blocks favorited by a user are accessible on the Favorites screen.

The user can move to screens other than the Search and Block screens by using the drawer menu. The About and Sponsors screens provide information about The Compassion Project and its sponsors, respectively. The History and Favorites screens provide similar functionality: both display a list of blocks to the user, previously viewed and favorited blocks respectively. Each block is represented by a row with a picture of the block, the block number, and information about the block. The blocks are sorted chronologically by when they were last accessed. The user can touch a row representing a block to be directed to the Block screen for that block.

Below are screenshots of the various screens.


Architectural design


Development Standards, Tools Used, etc.

Our guiding principle for choosing tools was as follows: when two options seem equivalent, choose one immediately instead of going into depth comparing the two. We did this because we felt that choosing one tool and having more time to get familiar with it would yield better results than the marginal benefit of choosing one similar tool over another. The app was written in React Native. React Native allows us to write a single JavaScript project that can be compiled into native binaries for iPhone and Android, which is the primary reason we used it. We chose React Native over similar frameworks like Flutter because React Native has existed since 2015 and development resources are extensive, while Flutter, as of October 2018, had not yet been fully released and tutorials were limited. The backend of our app runs on Amazon Web Services. AWS was chosen because it offers cloud storage (which stores the block thumbnails) and database services (which store the artists' statements and other text information).

AWS provides a JavaScript library called AWS Amplify that we use to communicate with AWS services from React Native. While other cloud services might also meet our functional and cost requirements, we deferred to our guiding principle: we wanted as much time as possible to get acquainted with our cloud service. The use case diagram in the Architectural Design section provides a sample interaction with the database. When the user searches for a block, the thumbnail is pulled from AWS S3 and the text information is pulled from AWS DynamoDB. The number of views is also updated in DynamoDB. Likewise, the block number is recorded with React Native's persistent storage component, AsyncStorage; this information is used to generate the History screen. The process of favoriting a block is similar to the process of viewing a block in terms of data transfer.
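Before the app can fetch anything, the block records must be loaded into DynamoDB; this is the role of the data upload scripts mentioned under Responsibilities. Below is a hedged sketch of what such a seeding step could look like, assuming Python with boto3, a table named Blocks, and a CSV export of the block data; the actual table name, attribute names, and file format are assumptions and may differ from what we used.

# Hedged sketch of a DynamoDB seeding script (table and attribute names are assumptions).
import csv
import boto3

table = boto3.resource("dynamodb").Table("Blocks")   # assumed table name

with open("blocks.csv", newline="") as f:            # assumed export of the block data
    for row in csv.DictReader(f):
        table.put_item(Item={
            "blockNumber": int(row["number"]),       # assumed partition key
            "artistStatement": row["statement"],
            "artistName": row["artist"],
            "views": 0,                              # counters later updated by the app
            "favorites": 0,
        })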

We used OpenCV for the image recognition portion of our project. OpenCV is a free and open-source computer vision library that is cross platform, commonly used, and has many tools that could be useful for the image matching portion of this project.


Methodology

We define a few terms that are needed to understand the organization of our app. A stack is the coarsest grain of organization in our app, and each stack encapsulates a basic piece of functionality; for instance, the Search stack controls the user searching for a block. A screen represents what is shown to the user at a single moment. A stack consists of a set of (not necessarily disjoint) screens that can be navigated between while the user is in that stack. The Search stack consists of the Search and Block screens. At any moment, the user is in one stack and on one screen; this combination represents the current state of the app.

Our app is organized around the drawer menu. The drawer menu consists of a list of stacks, presented as menu items. The user can move between stacks using the drawer menu but can only be in one stack at a time. Likewise, the user can only be on one screen at a time. While in a certain stack, the user can move between the screens belonging to that stack. Originally, our app was designed using an explicit State pattern; that is, we differentiated between the current state of the app and the current screen of the app. However, the ReactNavigation library merges these and provides a simple API to switch between stacks and screens. Thus, we effectively designed our app with the State pattern in mind but represented the state of the app as the current stack and screen.

A complete list of stacks and the screens belonging to each stack is given in the UML diagram in the Architectural Design section. The sequence diagram in the Architectural Design section also illustrates the difference between stacks and screens. A user is initially brought to the Search stack and the Search screen. When the user performs a search, they remain in the Search stack but are brought to the Block screen. However, when the user wants to view their viewing history, they must navigate out of the Search stack entirely and are brought to the History stack and the History screen.

We now describe a few of these state transitions in more detail. If the user is in the Search stack and on the Search screen, they are presented with a text box to enter a block number. When they enter a valid block number, the navigate method of the ReactNavigation library is called and they are redirected to the Block screen. The entered block number is passed as an argument to this method, and an instance of the Block component is created with that block number. The instantiation of a Block component makes a request to the AWS backend to fetch the block thumbnail and the block information. Additionally, the block number is logged using React Native's AsyncStorage API in a list of viewed blocks to be used when generating the History screen.

If the user is in the History stack and on the History screen, they are shown a list of previously viewed blocks. The list of viewed blocks is logged using React Native's AsyncStorage API, and a subset of this list is displayed to the user as instances of the BlockListView component. The BlockListView component functions the same as the Block component, except that its render method displays the block as a horizontal stripe instead of taking up the entire screen. The BlockListView component is also touchable; when the user touches a BlockListView instance, ReactNavigation is called and they are redirected to the Block screen with an instance of the corresponding Block component.

The Favorites stack functions almost exactly the same as the History stack, except that the list of blocks displayed to the user contains the blocks the user has favorited. The Sponsors and About stacks each contain only one screen, so we won't describe them except to say that each contains clickable buttons that redirect the user to web pages outside the app.

We used keypoint-based matching to perform reverse image search over our block thumbnails. A keypoint is simply a pixel with "interesting" local structure. What makes a pixel "interesting" varies across keypoint descriptors and can be quite complicated; as such, we used these keypoint descriptors as black boxes. Keypoint-based search works by finding keypoints in the user's picture of a block, computing descriptors of these keypoints, and comparing them to descriptors of keypoints in our thumbnails. The more keypoints two images have in common, the more confident we can be that the pictures are of the same block. To determine whether two keypoints are the same, we use the distance between their descriptors. The distance between two keypoints is simply a positive number; the smaller this number, the more confident we can be of the keypoints' similarity. For a keypoint in our search image, we find the two closest keypoints in a query image. If the closest keypoint is significantly closer than the second closest, we conclude with high probability that the keypoints depict the same feature.
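Below is a minimal sketch of this matching step, assuming Python with OpenCV, AKAZE as the descriptor, and a ratio threshold of 0.75; the file paths and threshold are illustrative assumptions rather than our exact parameters.

# Sketch of ratio-test keypoint matching between a visitor photo and a thumbnail.
import cv2

detector = cv2.AKAZE_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)    # AKAZE produces binary descriptors

def good_matches(photo_path, thumb_path, ratio=0.75):
    """Count keypoint matches that pass the ratio test between two images."""
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    thumb = cv2.imread(thumb_path, cv2.IMREAD_GRAYSCALE)
    _, photo_desc = detector.detectAndCompute(photo, None)
    _, thumb_desc = detector.detectAndCompute(thumb, None)
    good = 0
    for pair in matcher.knnMatch(photo_desc, thumb_desc, k=2):
        # Keep a match only when the best candidate is much closer than the runner-up.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good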


OpenCV implements several keypoint descriptors. We performed a comparative analysis of these descriptors to find which, if any, would be effective for searching among the blocks. The keypoint descriptors we tested were KAZE, AKAZE, ORB, and BRISK [5].

Whenever a user searches for a block in the app, a thumbnail of the block is displayed. We used these thumbnails as the pictures to compare against. We took secondary pictures of 93 of the blocks in the data set. For each of these test images, using the method above, we found the closest image in our set of thumbnails. We considered the result a successful match if the program paired the photo with the thumbnail of the same block.
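Under the same assumptions as the previous sketch, the evaluation loop could look roughly like the following; it assumes the photos and thumbnails are named by block number and that the good_matches() helper defined above is in scope.

# Hypothetical evaluation loop: match each secondary photo against every thumbnail.
from pathlib import Path

photos = sorted(Path("secondary_photos").glob("*.jpg"))   # assumed directory of the 93 test photos
thumbs = sorted(Path("thumbs_500").glob("*.jpg"))          # assumed directory of block thumbnails

correct = 0
for photo in photos:
    best = max(thumbs, key=lambda t: good_matches(str(photo), str(t)))
    if best.stem == photo.stem:        # same block number means a successful match
        correct += 1

print("Success rate: {:.1%}".format(correct / len(photos)))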

Results

The Compassion Project app was accepted to the Apple App Store on March 15th and the Google Play Store on March 26th. The Compassion Project opened on April 15th with a celebration at the Emerson Center for the Arts and Culture, which produced a spike in downloads on both stores. As of April 23rd, there have been over 100 downloads of the app. Below are charts of downloads on both app stores.


We tested the ability of the KAZE, AKAZE, ORB, and BRISK feature descriptors to search for a picture of a block amongst a set of pictures of blocks. Below are the results of this experiment.

Keypoint Descriptor    Average Time to Compute    Average Time to Search    Success Rate
KAZE                   2.6 sec                    20 sec                    54.8%
AKAZE                  0.5 sec                    12.7 sec                  73.1%
BRISK                  0.1 sec                    7.7 sec                   63.4%
ORB                    0.1 sec                    0.7 sec                   9%

AKAZE had the highest success rate, but it still failed over a quarter of the time. Furthermore, even assuming that the descriptors of the thumbnails are computed in advance, it still took about 13 seconds on a desktop computer to search for a single image among only a subset of all thumbnails; the full set of thumbnails is dozens of times larger than our 93-block test subset, so a full search would take far longer. Users are not willing to wait that long for the results of a search and would consider the feature essentially unusable. We thus concluded that, while both the success rate and search time could be somewhat optimized, using keypoint descriptors to perform image search is infeasible on our dataset with our methods and resources.


One of the faults of the keypoint method is that the physical boundaries of the block are detected and matched to an edge in the search image, rather than detecting features of the art itself. Below is an example of this failure. These two blocks were matched using the BRISK descriptor. The lines between the two pictures show which keypoints were determined to be good matches. Note that the majority of keypoints in the larger image were matched to a single keypoint in the smaller image.


References

[1] “The Louvre App: My Visit to the Louvre.” Louvre Museum, Paris, 23 June 2016, www.louvre.fr/en/louvre-app.

[2] Chun, Rene. “The SFMOMA's New App Will Forever Change How You Enjoy Museums.” Wired, Conde Nast, 3 June 2017, www.wired.com/2016/05/sfmoma-audio-tour-app/.

[3] Gire, Vincent, and Sharareh Noorbaloochi. Painting Recognition Using Camera-Phone Images. Stanford University, May 2007, web.stanford.edu/class/ee368/Project_07/reports/ee368group02.pdf.

[4] “Complete Guide to Building an Image Search Engine with Python and OpenCV.” PyImageSearch, 1 Dec. 2014, www.pyimagesearch.com/2014/12/01/complete-guide-building-image-search-engine-python-opencv.

[5] OpenCV 2.4 Documentation, docs.opencv.org/2.4/modules/nonfree/doc/feature_detection.

[6] Kaminski, Katherine. “The Compassion Project.” Montana State University: The Compassionate Project, 2018, www.montana.edu/thecompassionproject/.

Code Appendix

/* SCREENS */

/* Search Screen */
import React from 'react';
import {
  Image, StyleSheet, Text, TextInput, View, StatusBar, Platform, Alert
} from 'react-native';
import { Button } from 'react-native-elements';
import styles from '../constants/style.js';

export default class SearchScreen extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      number: '',
    };
  }

  /*
   * Null method to pass as a callback.
   * This callback rerenders the History and Favorites screens,
   * but does nothing for the Search screen.
   */
  onGoBack = () => {
  }

  /* Ensure the number entered is non-empty before passing it to the Block screen */
  checkEntry() {
    if (this.state.number != '') {
      var enteredNumber = parseInt(this.state.number); /* Cast the entered text to an int */
      this.setState({ number: '' });
      this.props.navigation.navigate('Block', {
        number: enteredNumber,
        onGoBack: this.onGoBack.bind(this)
      });
    } else {
      Alert.alert("Please enter a number");
      this.setState({ number: '' });
    }
  }

  /* Redirect the user to the Sponsors screen when they press the Sponsors button */
  onSponsors() {
    this.props.navigation.navigate('Sponsors');
  }

  /* Redirect the user to the About screen when they press the About button */
  onAbout() {
    this.props.navigation.navigate('About');
  }

  render() {
    return (
      {/* TCP MT Logo */}
      {/* text input and purple boundary */}
      Enter the number of the block you want to search for.
      {/* about button */}