The Compassion Project
Mitchell Black, Kyle Melton, and Amelia Getty 25 April 2019
Abstract
The Compassion Project is a public, collaborative art installation sourced from approximately 6,500 artists around Bozeman. Each participating artist painted an 8”x8” wooden block with their own interpretation of compassion. To accompany their art, each artist wrote an artist statement defining compassion or explaining how their block relates to compassion.
We created a mobile app as a companion to the installation. The app's primary use is to let visitors look up a block's artist statement on their phone by the unique ID number assigned to each block. Additional functionality includes favoriting blocks, a personal viewing history, and usage statistics.
We also performed a comparative analysis of image descriptors for the blocks. We were interested in the idea of using a picture of a block to search for that block's artist statement. We have an image of each block, used as a thumbnail when users look up a block. To evaluate which image descriptors are most effective at distinguishing the blocks, we took secondary photos of a subset of the blocks and tested whether each descriptor could pair our photo of a block with that block's thumbnail.
Qualifications
See attached resumes.
Amelia Getty
Bozeman, MT 59718 ⋄ 406-580-3154 ⋄ [email protected] ⋄ linkedin.com/in/ameliagetty
Computer Science senior with diverse experience poised to transition to software development or engineering. Organized and dependable. Aptitude to learn quickly and work independently and with a team.
PROGRAMMING EXPERIENCE
Java ⋄ PHP (Laravel) ⋄ C ⋄ Python ⋄ C++ ⋄ SQL
Linux Systems Admin ⋄ Databases ⋄ Graphics ⋄ Networks ⋄ Security (2019)
EDUCATION
♢ Computer Science. Montana State University, 2017 - 2019
♢ BA Modern Languages and Literatures, German. (+ Pre-vet) Montana State University: 2007 - 2013
♢ Electrical Engineering. Universität Stuttgart, semester abroad 2010
PROFESSIONAL PROFILE
Leadership ⋄ Web Development ⋄ Clerical ⋄ Collections ⋄ Records Maintenance ⋄ Customer Service ⋄ Maintaining Inventory ⋄ Stocking ⋄ Filing ⋄ Data Interpretation
♢ Founder, DevOps; artfight.net 08/2015 - present
Programmed a web app with a small group of coders and managed the remote Linux server using the LAMP stack and Laravel. Led a small team of volunteer moderators.
♢ Office Specialist; Michaels Arts and Crafts 08/2013 - 12/2015
Promoted from replenishment associate on the recommendation of my peers. Maintained store sales, HR, and payroll records. Prepared daily store bank deposit. Maintained inventory of store use items and supplies. Facilitated inbound direct freight shipments and paperwork and provided superior customer service.
♢ Replenishment Associate; Michaels Arts and Crafts 08/2012 - 08/2013
Unloaded truck. Organized shelves and product. Set up advertising signs, cashiered.
♢ BOREALIS Intern; Montana Space Grant Consortium, Montana State University Summer 2008
Accepted as an MSGC intern from a pool of applicants with high recommendations. Launched and recovered high altitude balloons. Designed and maintained on-board experiments and data collection devices. Interpreted collected data.
COMMUNITY INVOLVEMENT
♢ Heart of the Valley Animal Shelter, Bozeman, MT (25 hrs, 2014-2015)
♢ Camp Husky Project Spay/Neuter Clinic, Butte, MT (2008)
♢ Bioneers Conference, Bozeman, MT (2008)
♢ Physics Tutoring, MSU, Bozeman, MT (2008)
LANGUAGES
♢ English
♢ German
KYLE MELTON
200 Gallatin Hall – Room 412A, Bozeman, MT 59715
760-914-2476 | [email protected] | linkedin.com/in/kyle-melton/ | github.com/Mammothskier/
EDUCATION
Montana State University, Bozeman, MT (August 2015 – May 2019)
– B.S. in Computer Science and Computer Engineering – GPA: 3.14
– Courses in Networks, Software Engineering, Logic Design, and Linux Systems
– Proficient in Java, Python, C, SQL, HTML, CSS, and JavaScript
WORK EXPERIENCE
Software Intern, Blackmore Sensors and Analytics, Bozeman, MT (June 2018 – Present)
– Worked within multidisciplinary team to complete product development goals
– Developed software within an agile development model
– Used development tools such as Git and Google Test
RELEVANT PROJECTS
Bridger Solar Team (June 2018 – Present)
– Designed communication system to remotely monitor solar car
– Led team of software and computer engineers to complete weekly objectives
– Developed code to gather GPS, IMU, and battery protection system data
Bridger Robotics Team (December 2016 – Present)
– Programmed mining robot that competed in the NASA Mining Competition
– Developed user interface to display telemetry data
– Integrated known software patterns into existing code base
Giants Minecraft Plugin (February 2014 – November 2016)
– Created open source project to add functionality to base game
– Provided technical support and continuous updates based on user demand
– Downloaded by 28,000 users
LEADERSHIP
Founding President, oSTEM at Montana State University (December 2017 – Present)
– Created MSU Student Organization to support LGBT students
– Led an officer team to complete club objectives
– Awarded Lavender Leader Award in May 2018

Mitchell Black
201 South 11th Ave Apt 23, Bozeman, MT 59715
406-491-4194 | [email protected]
Education
Montana State University, Bozeman, MT
Bachelor of Science, Computer Science and Mathematics, May 2019, GPA: 3.94
Relevant Experience
Senior Project, The Compassion Project (Sep 2018 – Present)
I am part of a team of three students that built a mobile app for the local art project The Compassion Project. The Compassion Project had thousands of community members paint a wooden block on the theme of compassion and write an artist statement on how their painting relates to compassion. Our app allows visitors to the exhibit to look up the artist statement for each block, as well as to favorite blocks and access their viewing history.

Summer Intern, Los Alamos National Laboratory (May – Aug 2018)
I interned with the Filesystems Team in the High Performance Computing division. My project was to write scripts to gather data from the Lustre filesystems on LANL's computing clusters. I also made dashboards in Splunk, a data visualization tool, to allow cluster administrators to monitor the current state of the filesystems and alert them to problems.

USP Funded Undergraduate Research, Montana State University (Jan – May 2018)
I conducted research on the Minimum Road Trips problem, an NP-hard graph problem. I primarily researched related graph problems in graph flow and flow decomposition.
Other Recent Work Experience
Student Custodian, Montana State University (Jan 2019 – Present)
Student Data Entry Employee, Montana State University Alumni Foundation (Sep 2015 – May 2018)
Honors and Awards
MSU Mathematics Department Outstanding Scholar Award (Spring 2017, 2018)
COMAP Mathematical Modeling Competition: Honorable Mention (2018, 2019)
Montana Mathematical Modeling Competition: Finalist, Presentation Portion (Oct 2017)
Upsilon Pi Epsilon Computer Science Honor Society (Spring 2018)
Pi Mu Epsilon Mathematics Honor Society (Spring 2017)
Montana University System Honors Scholarship (2014)
Background
The Compassion Project is a program designed to bring the Gallatin Valley together through the study of compassion. Each participant in the program receives an 8x8 inch wooden block that they use to express their interpretation of compassion. Once completed, the wooden blocks are displayed at several sites around the Bozeman community. There are 6,000 painted blocks, and each of these has an accompanying artist statement. With limited display space and so many blocks, an app is needed to provide visitors with each block's artist statement and installation site.
The Compassion Project app can be viewed as an automated tour guide, in that it gives visitors to the exhibit the ability to independently get information on each piece. Before smartphones, many museums used cassettes or other audio players as automated tour guides, with numbered tracks that let museum-goers get information on an individual exhibit by playing the corresponding track. Today, many museums have developed smartphone apps that serve as automated tour guides, using a variety of technologies to provide users with descriptions of artworks. A notable example is "My Visit to the Louvre," an application that provides users with audio guides and written descriptions of pieces in the museum and suggests exhibits to the user. It includes a lookup feature where museum-goers can search for a work by ID number to get its description [1]; this app is the most similar to the Compassion Project app. Other museums have taken different approaches. SFMOMA's app "SFMOMA Audio" uses the GPS location of museum-goers to determine where they are in the museum and provides an audio description of the artwork they are standing in front of [2]. A paper out of Stanford explores using image recognition to let users take a picture of a painting and receive a description of that painting in return [3].

We chose a simpler approach as a matter of triage. Taking a picture of a block to get its description would be only marginally more convenient for the user than searching by number, but it would be significantly more difficult to implement. We therefore felt we could improve our app far more by implementing a simple search strategy and focusing on other components than by implementing a difficult one.
We were still interested in the problem of allowing visitors to The Compassion Project installation to look up the artist statement for a block simply by taking a picture of the art rather than searching by number, even if this did not make the final version of the app. Such a lookup requires a way of identifying a specific block within a set of pictures. [3] explored the problem of reverse image search specifically in the setting of art galleries, using eigen-image feature vectors; however, this approach requires resizing our images to squares, whereas keypoint descriptors accept input images of any proportions. [4] suggests a feature vector based on the local color distribution of the image. This method is not size dependent and is fairly simple; however, the paint on the blocks is fairly glossy and produces glare in photos. We implemented our lookup using keypoint descriptors. OpenCV, an open source computer vision library, has many built-in keypoint descriptors [5]. We performed a comparative analysis of these descriptors.
This Software Factory project is sponsored by Dr. Kayte Kaminski from the College of Education, Health and Human Development, the current executive director of The Compassion Project. We successfully deployed a fully working app on both the Google Play and Apple App stores in time for the exhibition of The Compassion Project on April 15th, 2019.
Work Schedule
Responsibilities
Most of the work for this project was coding the app itself. All of us were about equally equipped to handle this, so we deferred assigning responsibility for portions of the app until development of those parts began. Portions of the app were assigned to individuals as needed; since we used an Agile development cycle, it was simple to assign individual responsibilities on the fly. However, certain areas suited some of us better than others. We assigned those responsibilities as follows.
● Mitchell Black: Image recognition, iOS deployment
● Kyle Melton: Scripts for uploading data to the database, Android deployment
● Amelia Getty: UI, database design
Milestones
29 November 2018: Final proposal due, design finished
15 March 2019: App accepted to the Apple app store
29 March 2019: App accepted to the Android app store
15 April 2019: Exhibition of Compassion Project, app in use
Lifecycle
We used an Agile lifecycle approach. The goals of Agile development are to deliver working software as quickly as possible and to adapt to change. The first of these was especially important for us because of the hard deadline of April 15th for the app to be complete. The iterative approach also fit the size of our team, the complexity of the app, and the short timeline, and it left time for solving problems that came up later in development or surfaced in feedback from our beta testers and the director of The Compassion Project. With this in mind, Agile development made the most sense for this project. We used two-week sprints, each dedicated to a particular element of our app; for instance, our first sprint was dedicated to AWS setup. A Gantt chart with a rough outline of our work schedule is included below. Because of the early deadline for the app, our sprints prior to February 1st focused on app development. After we submitted to the app stores, our focus shifted to the image recognition portion of our project; however, more time than anticipated was needed to debug the app and submit updated versions to the app stores.
Proposal
Functional and non-functional requirements
Below we list the functional requirements for our app.
● Search for a block by block number
● Fetch block thumbnail from Amazon S3
● Fetch block information from DynamoDB
● Record previous searches and provide these to the user
● Favorite a block and provide the user their favorited blocks
● Record favorite statistics in DynamoDB
● Record view statistics in DynamoDB
● Provide an About screen with information about The Compassion Project
● Provide a Sponsors screen with a list of Compassion Project sponsors
Below is a list of non-functional requirements.
● Easy to use
● Clean and clear user interface design
● Low cost for cloud services and other fees
● Lightweight
A significant non-functional requirement is being able to afford the services the app relies on: the Apple annual developer fee, the Google Play Store listing fee, and the AWS service fees. Fortunately, these fees are not particularly large and are well within the budget of the Compassion Project, and we were able to get two of the three services for free. The cost of the project is broken down below.
Service | Final Cost
AWS Service Fee | Free with education credits
Apple App Store listing fee | Free for nonprofits
Android App Store listing fee | $25 one-time fee
Total | $25
Performance requirements
The Compassion Project app is decidedly not computationally taxing. The biggest performance bottleneck is downloading a thumbnail of the block from an AWS server to be displayed alongside the artist's statement. The thumbnail photos were taken on an iPhone, which takes photos at a higher resolution than our application needs. To speed up load times, we resized all the images to 500x500 pixels, which significantly decreased the wait time on the Search screen. The History and Favorites screens load many thumbnails at once, which creates long wait times as well. The images displayed on those screens are much smaller than the image on the Search screen, so we created a second, 250x250 pixel copy of each image for use on those screens. This approach significantly improved wait times and the user experience.
Interface requirements
The main features of our app are the ability to search for blocks, view these blocks, favorite these blocks, view your favorited blocks, and view your search history. Our app consists of five stacks and six screens that allow the user to perform these and other tasks. How these components fit together is explained in the Architectural Design and Methodology sections; this section focuses on how the user interacts with each screen.
The main screen is the Search screen, which allows the user to search for a block and features prominent buttons for quick navigation to the About and Sponsors screens. When the user searches for a block, they are directed to the Block screen, which displays the artist statement and information about the artist. The user can also favorite the block on the Block screen; the blocks favorited by a user are accessible on the Favorites screen.
The user can move to screens other than the Search and Block screens by using the drawer menu. The Sponsors and About screens provide information about The Compassion Project and its sponsors, respectively. The History and Favorites screens provide analogous functionality: each displays a list of blocks to the user, previously viewed and favorited blocks respectively. Each block is represented by a row with a picture of the block, the block number, and information about the block, and the rows are sorted chronologically by when each block was last accessed. The user can touch a row representing a block and be directed to the Block screen for that block.
Below are screenshots of the various screens.
Architectural design
[Architectural design diagrams appear here: the use case diagram, the UML diagram of stacks and screens, and the sequence diagram referenced in the Methodology section.]
Development Standards, Tools Used, etc.
Our guiding principle for choosing tools was as follows: when two options seem equivalent, choose one immediately instead of comparing the two in depth. We felt that choosing one tool and having more time to get familiar with it would yield better results than the marginal benefit of picking one similar tool over another. The app was written in React Native, which lets us write a single JavaScript project that compiles into native binaries for iPhone and Android; this is the primary reason we used it. We chose React Native over similar frameworks like Flutter because React Native has existed since 2015 and development resources are extensive, whereas Flutter, as of October 2018, had not been fully released and tutorials were limited. The backend of our app runs on Amazon Web Services. AWS was chosen because it provides cloud storage (which stores the block thumbnails) and database services (which store artists' statements and other text information).
AWS provides a React Native API called AWS Amplify that we use to communicate with AWS servers. While other cloud services may also meet our functional and cost requirements, we deferred to our guiding principle: we wanted as much time as possible to get acquainted with our cloud service. The use case diagram in the Architectural Design section provides a sample interaction with the database. When the user searches for a block, the thumbnail is pulled from AWS S3 and the text information is pulled from AWS DynamoDB. The number of views is also updated in AWS DynamoDB. Likewise, the block number is recorded with React Native's persistent storage component AsyncStorage; this information is used to generate the History screen. The process of liking a block is similar to the process of viewing a block in terms of data transfer.
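As a rough illustration of this data flow, the sketch below replaces the S3, DynamoDB, and AsyncStorage calls with plain in-memory stand-ins (all names here are hypothetical, not the app's real identifiers); the real app performs these steps through AWS Amplify's Storage and API modules.

```javascript
// Hypothetical in-memory stand-ins for the AWS backends.
const mockS3 = { 'blocks/42-500.jpg': '<500x500 thumbnail bytes>' };
const mockDynamo = { 42: { artist: 'Jane Doe', statement: 'Compassion is...', views: 7 } };

// Storage.get(key) analogue: fetch the thumbnail for a block number.
function fetchThumbnail(blockNumber) {
  return mockS3[`blocks/${blockNumber}-500.jpg`];
}

// API.graphql(...) analogue: fetch the text info and bump the view count,
// mirroring how view statistics are recorded in DynamoDB.
function fetchBlockInfo(blockNumber) {
  const record = mockDynamo[blockNumber];
  record.views += 1;
  return record;
}

// AsyncStorage analogue: log the search locally for the History screen.
const localHistory = [];
function logView(blockNumber) {
  localHistory.push({ number: blockNumber, time: Date.now() });
}

// Searching for a block pulls the image and text, and logs the view.
function searchBlock(blockNumber) {
  logView(blockNumber);
  return { thumbnail: fetchThumbnail(blockNumber), info: fetchBlockInfo(blockNumber) };
}
```

The same three steps happen on a real search; only the transport (Amplify over the network, AsyncStorage on the device) differs.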
We used OpenCV for the image recognition portion of our project. OpenCV is a free and open-source computer vision library that is cross platform, commonly used, and has many tools that could be useful for the image matching portion of this project.
Methodology
We define a few terms needed to understand the organization of our app. A stack is the coarsest grain of organization in our app, and each stack encapsulates a basic piece of functionality; for instance, the Search stack controls searching for a block. A screen represents what might be shown to the user at a single moment. A stack consists of a set of (not necessarily disjoint) screens that the user can navigate between while in that stack; the Search stack consists of the Search and Block screens. At any moment, the user is in one stack and one screen, and this combination represents the current state of the app.
Our app is organized around the drawer menu, which consists of a list of stacks shown as menu items. The user can move between stacks using the drawer menu but can only be in one stack at a time; likewise, the user can only be on one screen at a time. While in a given stack, the user can move between the screens belonging to that stack. Originally, our app was designed using an explicit State pattern; that is, we differentiated between the current state of the app and the current screen. However, the ReactNavigation library merges these and provides a simple API to switch between stacks and screens. Thus, we effectively designed our app with the State pattern in mind but represented the state of the app as the current stack and screen.
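Stripped of ReactNavigation, this state model reduces to a current (stack, screen) pair. The sketch below is a minimal stand-alone version of that idea; the stack-to-screen groupings shown are illustrative, not a complete map of the app.

```javascript
// Illustrative stack-to-screen map: each stack lists the screens a user
// can reach without leaving that stack (screens may appear in several stacks).
const STACKS = {
  Search: ['Search', 'Block'],
  History: ['History', 'Block'],
  Favorites: ['Favorites', 'Block'],
  Sponsors: ['Sponsors'],
  About: ['About'],
};

class AppState {
  constructor() {
    this.stack = 'Search';   // users start in the Search stack...
    this.screen = 'Search';  // ...on the Search screen
  }
  // Drawer menu: switch stacks, landing on that stack's first screen.
  openStack(stack) {
    if (!STACKS[stack]) throw new Error(`unknown stack ${stack}`);
    this.stack = stack;
    this.screen = STACKS[stack][0];
  }
  // In-stack navigation: only screens belonging to the current stack.
  navigate(screen) {
    if (!STACKS[this.stack].includes(screen)) {
      throw new Error(`${screen} is not in the ${this.stack} stack`);
    }
    this.screen = screen;
  }
}
```

ReactNavigation provides exactly this separation for free: drawer items switch stacks, while `navigate` moves between a stack's screens.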
A complete list of stacks and the screens belonging to each is given in the UML diagram in the Architectural Design section; the sequence diagram there also illustrates the difference between stacks and screens. A user is initially brought to the Search stack and the Search screen. When the user performs a search, they remain in the Search stack but are brought to the Block screen. However, when the user wants to view their viewing history, they must leave the Search stack entirely and be brought to the History stack and the History screen.
We now describe a few of these state transitions in more detail. If the user is in the Search Stack and the Search screen, they are presented with a text box to enter a block number. When they enter a valid block number, the navigate method of the ReactNavigation library is called and they are redirected to the Block screen. The entered block number is passed as an argument to
this method, and an instance of the Block component is created with the block number. The instantiation of a Block component makes a request to the AWS backend to fetch the block thumbnail and the block information. Additionally, the block number is logged in a list of viewed blocks using React Native's AsyncStorage API, to be used when generating the History screen.
If the user is in the History stack and the History screen, they are shown a list of previously viewed blocks. The list of viewed blocks is logged using React Native's AsyncStorage API, and a subset of this list is displayed to the user as instances of the BlockListView component. The BlockListView component functions the same as the Block component, except that its render method displays the block as a horizontal stripe instead of taking up the entire screen. BlockListView instances are also touchable; when the user touches one, ReactNavigation is called and they are redirected to the Block screen with an instance of the corresponding Block component.
The Favorites stack functions almost exactly the same as the History stack, except that the list of blocks displayed are those the user has favorited. The Sponsors and About stacks each contain only one screen, so we won't describe them except to say that each contains clickable buttons that redirect the user to web pages outside of the app.
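The history and favorites logic can be sketched without React Native at all. Below, a plain in-memory array stands in for AsyncStorage (a simplifying assumption, not the real API), and the paging mirrors the ITEMS-per-page approach used in the code appendix.

```javascript
const ITEMS = 8; // blocks shown per page, as in the app

// Stand-in for AsyncStorage: a keyed in-memory store.
const storage = { History: [] };

// Log a viewed block with a timestamp, as the Block screen does.
function logView(number, time) {
  storage.History.push({ number, time });
}

// Sort most-recent-first and return one page of blocks, which is how
// the History and Favorites screens load more rows as the user scrolls.
function getPage(page) {
  const items = [...storage.History].sort((a, b) => b.time - a.time);
  return items.slice(page * ITEMS, (page + 1) * ITEMS);
}
```

The Favorites screen is identical in shape; only the storage key and the unlike behavior differ.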
We used keypoint-based matching to perform reverse image search over our block thumbnails. A keypoint is simply a pixel with "interesting" local structure; what makes a pixel "interesting" varies across keypoint descriptors and can be quite complicated, so we treated the descriptors as black boxes. Keypoint-based search works by finding keypoints in the user's picture of a block, computing descriptors of these keypoints, and comparing them to descriptors of keypoints in our pictures of the blocks. The more keypoints two pictures have in common, the more confident we can be that they show the same block. To decide whether two keypoints match, we use the distance between their descriptors: a positive number where smaller values indicate greater similarity. For each keypoint in the search image, we find the two closest keypoints in a query image; if the closest is significantly closer than the second closest, we conclude with high probability that the keypoints depict the same object.
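This nearest-versus-second-nearest check is a standard ratio test. A minimal stand-alone sketch over plain descriptor vectors might look like the following; the 0.75 threshold is a conventional choice, not a value taken from our experiments.

```javascript
// Euclidean distance between two descriptor vectors.
function distance(a, b) {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
}

// Ratio test: accept a match only if the nearest candidate descriptor is
// significantly closer than the second nearest, i.e. the match is unambiguous.
function isGoodMatch(query, candidates, ratio = 0.75) {
  const dists = candidates.map(c => distance(query, c)).sort((a, b) => a - b);
  return dists[0] < ratio * dists[1];
}
```

OpenCV's matchers apply the same idea over binary or floating-point descriptors; the per-descriptor distance function (e.g. Hamming for ORB/BRISK) is what varies.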
OpenCV is an open source computer vision library, implementing several keypoint descriptors. We performed a comparative analysis of these descriptors to find which, if any, of them would be effective in searching among the blocks. The keypoint descriptors we tested were: KAZE, AKAZE, ORB, and BRISK [5].
Whenever a user searches for a block in the app, a thumbnail of the block is displayed; we used these thumbnails as the images to be searched against. We took secondary pictures of 93 of the blocks in the data set. For each of our test images, we found the closest image in the set of thumbnails according to the above method, and counted the trial as a success if the program matched the photo to the correct thumbnail.
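The evaluation itself reduces to a nearest-neighbor search plus a success count. The toy harness below uses a made-up descriptor-matching score and hypothetical data shapes; it only illustrates the bookkeeping, not the real OpenCV matching.

```javascript
// Toy match score: the number of descriptors the two sets share exactly.
// (The real score counts keypoint matches that survive the ratio test.)
function matchScore(photoDescriptors, thumbDescriptors) {
  return photoDescriptors.filter(d =>
    thumbDescriptors.some(t => t.every((x, i) => x === d[i]))).length;
}

// For each test photo, pick the thumbnail with the highest score and
// count the trial as a success if it is the photo's true block.
function successRate(photos, thumbnails) {
  let successes = 0;
  for (const photo of photos) {
    let best = null, bestScore = -1;
    for (const [id, desc] of Object.entries(thumbnails)) {
      const score = matchScore(photo.descriptors, desc);
      if (score > bestScore) { bestScore = score; best = id; }
    }
    if (best === photo.trueId) successes++;
  }
  return successes / photos.length;
}
```

The reported success rates are exactly this fraction, computed per descriptor over the 93 secondary photos.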
Results
The Compassion Project app was accepted to the Apple app store on March 15th and the Google Play store on March 26th. The Compassion Project opened on April 15th with a celebration at the Emerson Center for the Arts and Culture, which saw a spike in downloads on both stores. As of April 23rd, there have been over 100 downloads of the app. Below are charts of downloads on both app stores.
We tested the ability of the KAZE, AKAZE, ORB, and BRISK feature descriptors to search for a picture of a block amongst a set of pictures of blocks. Below are the results of this experiment.
Keypoint Descriptor | Average Time to Compute | Average Time to Search | Success Rate
KAZE | 2.6 sec | 20 sec | 54.8%
AKAZE | 0.5 sec | 12.7 sec | 73.1%
BRISK | 0.1 sec | 7.7 sec | 63.4%
ORB | 0.1 sec | 0.7 sec | 9%
AKAZE had the highest success rate but still failed over a quarter of the time. Furthermore, even assuming the thumbnail descriptors are computed in advance, it still took about 13 seconds on a desktop computer to search for a single image among only a subset of all thumbnails. Users are unlikely to wait that long for a search result and would consider the feature effectively unusable. We thus concluded that, while both the success rate and search time could be somewhat optimized, using keypoint descriptors to perform image search is infeasible on our dataset with our methods and resources.
One fault of the keypoint method is that the physical boundary of the block is detected and matched to an edge in the search image, rather than features of the art itself. Below is an example of this failure: these two blocks were matched using the BRISK descriptor, and the lines between the two pictures show which keypoints were determined to be good matches. Note that the majority of keypoints in the larger image were matched to a single keypoint in the smaller image.
References
[1] “The Louvre App: My Visit to the Louvre.” Louvre Museum, Paris, 23 June 2016, www.louvre.fr/en/louvre-app.
[2] Chun, Rene. “The SFMOMA’s New App Will Forever Change How You Enjoy Museums.” Wired, Conde Nast, 3 June 2017, www.wired.com/2016/05/sfmoma-audio-tour-app/.
[3] Gire, Vincent, and Sharareh Noorbaloochi. Painting Recognition Using Camera-Phone Images. Stanford University, May 2007, web.stanford.edu/class/ee368/Project_07/reports/ee368group02.pdf.
[4] Rosebrock, Adrian. “The Complete Guide to Building an Image Search Engine with Python and OpenCV.” PyImageSearch, 1 Dec. 2014, www.pyimagesearch.com/2014/12/01/complete-guide-building-image-search-engine-python-opencv.
[5] “Feature Detection and Description.” OpenCV 2.4 Documentation, docs.opencv.org/2.4/modules/nonfree/doc/feature_detection.html.
[6] Kaminski, Katherine. “The Compassion Project.” Montana State University, 2018, www.montana.edu/thecompassionproject/.
Code Appendix
/* SCREENS */
/* Search Screen */
import React from 'react';
import {
  Image, StyleSheet, Text, TextInput, View, StatusBar, Platform, Alert
} from 'react-native';
import { Button } from 'react-native-elements';
import styles from '../constants/style.js';

export default class SearchScreen extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      number: '',
    };
  }
  /*
   * Null method to pass as callback. This callback rerenders the History
   * and Favorites screens, but does nothing for the Search screen.
   */
  onGoBack = () => {
}
  /* Ensure the number entered is non-empty before passing to BlockScreen. */
  checkEntry() {
    if (this.state.number != '') {
      var enteredNumber = parseInt(this.state.number); /* cast the entered text to an int */
      this.setState({ number: '' });
      this.props.navigation.navigate('Block', {
        number: enteredNumber,
        onGoBack: this.onGoBack.bind(this)
      })
    } else {
      Alert.alert("Please enter a number")
      this.setState({ number: '' })
    }
  }
  /* Redirect user to the Sponsors screen when they click the Sponsors button. */
  onSponsors() {
    this.props.navigation.navigate('Sponsors')
  }

  /* Redirect user to the About screen when they click the About button. */
  onAbout() {
    this.props.navigation.navigate('About')
  }
render() { return (
/* BlockScreen */
import React from 'react';
import { Platform, View, StatusBar } from 'react-native';
import BlockComp from '../components/Block';
import { Icon } from 'react-native-elements';
import styles from '../constants/style.js';

const navColor = '#664ea0';

export default class BlockScreen extends React.Component {
  /* create back button */
  static navigationOptions = ({ navigation }) => {
    return {
      headerLeft: (
searchAgain = () => { this.props.navigation.goBack() }
render() { return (
/* HistoryScreen */
import React from 'react';
import {
  View, AsyncStorage, FlatList, TouchableHighlight, StatusBar
} from 'react-native';
import BlockListComp from '../components/BlockListView';
import styles from '../constants/style.js';

const ITEMS = 8;

export default class HistoryScreen extends React.Component {
  constructor(props) {
    super(props);
    this.state = { keysLoaded: false }
  }

  componentDidMount() {
    this.getViewedBlocks();
  }
  /* Get the list of viewed blocks from AsyncStorage. */
  getViewedBlocks = async () => {
    try {
      const stringKeys = await AsyncStorage.getItem('History');
      const keys = JSON.parse(stringKeys);
      const items = Object.values(keys).sort(
        (a, b) => parseInt(b.time) - parseInt(a.time));
      const viewableItems = items.slice(0, ITEMS);
      this.setState({
        items: items,
        keysLoaded: true,
        viewableItems: viewableItems,
        page: 1,
      });
    } catch (error) {
      console.log(error);
    }
  }
  /* Load more blocks when the user scrolls past a certain threshold. */
  loadMoreBlocks() {
    const { page } = this.state;
    const start = page * ITEMS;
    /* slice's end index is exclusive, so a full page runs from start
       up to (page + 1) * ITEMS */
    const end = (page + 1) * ITEMS;

    const newItems = this.state.items.slice(start, end);

    this.setState({
      viewableItems: [...this.state.viewableItems, ...newItems],
      page: page + 1
    })
  }

  /* Empty callback for BlockListComp */
  emptyFunction() { }
  /* Callback for the Block screen: reloads viewed blocks to bring the
     most recent block to the top. */
  onGoBack = () => {
    this.getViewedBlocks()
  }

  render() {
    return (
/* FavoritesScreen */
import React from 'react';
import {
  View, AsyncStorage, FlatList, TouchableHighlight, StatusBar
} from 'react-native';
import BlockListComp from '../components/BlockListView';
import styles from '../constants/style.js';

const ITEMS = 8;

export default class FavoritesScreen extends React.Component {
  constructor(props) {
    super(props);
    this.state = { keysLoaded: false }
  }

  componentDidMount() {
    this.getLikedBlocks();
  }
  /* Get the list of liked blocks from AsyncStorage. */
  getLikedBlocks = async () => {
    try {
      const stringKeys = await AsyncStorage.getItem('Liked');
      const keys = JSON.parse(stringKeys);
      const items = Object.values(keys).sort(
        (a, b) => parseInt(b.time) - parseInt(a.time));
      const viewableItems = items.slice(0, ITEMS);
      this.setState({
        items: items,
        keysLoaded: true,
        viewableItems: viewableItems,
        page: 1,
      });
    } catch (error) {
      console.log(error);
    }
  }
  /* Load more blocks when the user scrolls past a certain threshold. */
  loadMoreBlocks() {
    const { page } = this.state;
    const start = page * ITEMS;
    /* slice's end index is exclusive, so a full page runs from start
       up to (page + 1) * ITEMS */
    const end = (page + 1) * ITEMS;

    const newItems = this.state.items.slice(start, end);

    this.setState({
      viewableItems: [...this.state.viewableItems, ...newItems],
      page: page + 1
    })
  }
/* Empty callback for BlockListComp */ emptyFunction() { }
  /* Callback for the Block screen: reloads liked blocks in case the
     user unlikes a block. */
  onGoBack = () => {
    this.getLikedBlocks()
  }

  render() {
    return (
/* AboutScreen */
import React from 'react';
import { Image, Text, TouchableOpacity, View, StatusBar, Linking, ScrollView } from 'react-native';
import styles from '../constants/style.js';
import { Icon } from 'react-native-elements';

export default class AboutScreen extends React.Component {
  constructor(props) {
    super(props);
  }

  render() {
    return (
Linking.openURL('https://docs.google.com/document/d/e/2PACX-1vQz9-C1m8APNXjvPX1M9TR449I-1Y0LF-GmsAdOiL2-tG3bAtXddtkeIcI689CbRa6BxT52F9_-CsgX/pub')}>
/* SponsorsScreen */
import React from 'react';
import { Image, ScrollView, Text, View, Linking, TouchableWithoutFeedback, StatusBar } from 'react-native';
import { Card } from 'react-native-elements';
import styles from '../constants/style.js';

class InText extends React.Component {
  render() {
    return (
Linking.openURL('http://www.montana.edu/thecompassionproject/images/sponsor-logos/Logo_ EHHD.png')}>
{/*Southern Poverty Law Center*/}
{/*Software Factory*/}
{/*Office of the President*/}
{/*College of Arts & Architecture*/}
{/*Alumni Foundation*/}
{/*Office of Student Engagement*/}
{/*Element*/}
{/*Kenyon Noble*/}
{/*Continental Cabinetry*/}
{/*Fork and Spoon Kitchen*/}
{/*Fork and Spoon Kitchen*/}
{/*Pecha Kucha Night*/}
{/*Dances of Universal Peace*/}
{/*ASMSU*/}
{/*Honors College*/}
{/*Norton Ranch Homes*/}
); } } export default class SponsorsScreen extends React.Component { render() { return (
/* COMPONENTS */
/* Block */
import React from 'react';
import { Image, StyleSheet, View, Text, ActivityIndicator, Platform, Alert, AsyncStorage, Dimensions, ScrollView, TouchableWithoutFeedback } from 'react-native';
import { Icon } from 'react-native-elements';

/* Amplify Imports. RELATIVE PATHS */
import * as queries from '../src/graphql/queries'
import * as mutations from '../src/graphql/mutations'
import Amplify, { Storage } from 'aws-amplify';
import API, { graphqlOperation } from '@aws-amplify/api';
import aws_exports from '../aws-exports';
Amplify.configure(aws_exports);

/* Sizing Constants */
const blockSide = Dimensions.get('window').width;
const leftMargin = 15;
const topMargin = 5;
const textSize = 16;

export default class BlockComp extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      imageLoaded: false,
      textLoaded: false,
      imageError: false,
      textError: false,
      src: null,
      blockLiked: false,
      viewRecorded: false,
    }
  }
/* Fetch block info from database */
async fetchBlock() {
  const input = { id: this.props.number };
  const blockInfo = await API.graphql(graphqlOperation(queries.getBlock, input));
  if (blockInfo.data.getBlock == null) {
    this.onTextError();
  } else {
    this.setState({ block: blockInfo });
    this.onBlockLoad();
  }
}
/* Increment number of views in the database */
async incrementViews() {
  var blockInfo = this.state.block.data.getBlock;
  var viewsAsInt = parseInt(blockInfo.views);
  viewsAsInt = viewsAsInt + 1;
  blockInfo.views = viewsAsInt.toString();
  await API.graphql(graphqlOperation(mutations.updateBlock, { input: blockInfo }));
}

/* Increment number of likes in the database */
async incrementLikes() {
  var blockInfo = this.state.block.data.getBlock;
  var likesAsInt = parseInt(blockInfo.likes);
  likesAsInt = likesAsInt + 1;
  blockInfo.likes = likesAsInt.toString();
  await API.graphql(graphqlOperation(mutations.updateBlock, { input: blockInfo }));
}

/* Decrement number of likes in the database */
async decrementLikes() {
  var blockInfo = this.state.block.data.getBlock;
  var likesAsInt = parseInt(blockInfo.likes);
  likesAsInt = likesAsInt - 1;
  blockInfo.likes = likesAsInt.toString();
  await API.graphql(graphqlOperation(mutations.updateBlock, { input: blockInfo }));
}
/* Get url for image in cloud */
getImageSource() {
  var imageName = this.props.number + '-lg.jpg';
  Storage.get(imageName).then(url => {
    this.setState({ src: { uri: url } });
  });
}
/* Check whether or not the user has liked this block in AsyncStorage */
async checkForLike() {
  /* Try to get Liked from AsyncStorage. If the user has not liked anything,
     getItem will throw an exception. This shouldn't be a problem, though,
     as blockLiked is set to false by default */
  try {
    var liked = await AsyncStorage.getItem('Liked');
    const items = JSON.parse(liked);
    const keys = Object.keys(items);
    if (keys.includes(this.props.number.toString())) {
      this.setState({ blockLiked: true });
    }
  } catch (error) {
    console.log(error);
  }
}
/* fetch data from AWS and check for like when the component mounts */
componentDidMount() {
  this.getImageSource();
  this.fetchBlock();
  this.checkForLike();
}

/* change state when the image has loaded */
onImageLoad = () => {
  this.setState({ imageLoaded: true, imageError: false }, this.onChangeState.bind(this))
}

/* change state when the statement has loaded */
onBlockLoad() {
  this.setState({ textLoaded: true, textError: false }, this.onChangeState.bind(this))
}
/* method called when user clicks the heart button and the block isn't liked */
async onLike() {
  Alert.alert("Favorited!", "You can view your Favorited blocks in the Favorites tab");
  this.setState({ blockLiked: true });
  var numAsString = this.props.number.toString();
  var time = Date.now();
  try {
    let blockInfo = {
      [numAsString]: {
        key: numAsString,
        time: time,
      }
    }
    await AsyncStorage.mergeItem('Liked', JSON.stringify(blockInfo));
  } catch (error) {
    console.log(error);
  }
  if (!this.state.textError) {
    this.incrementLikes();
  }
}
/* method called when user clicks the heart button and the block is liked */
async onDislike() {
  Alert.alert("Unfavorited");
  this.setState({ blockLiked: false });
  var numAsString = this.props.number.toString();
  try {
    var liked = await AsyncStorage.getItem('Liked');
    var likedObject = JSON.parse(liked);
    delete likedObject[numAsString];
    await AsyncStorage.setItem('Liked', JSON.stringify(likedObject));
  } catch (error) {
    console.log(error);
  }
  if (!this.state.textError) {
    this.decrementLikes();
  }
}
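The Liked record is a map keyed by block number: `onLike` merges a new entry in, and `onDislike` deletes the key and writes the map back. The bookkeeping can be modeled with plain dictionaries, shown here in Python; `merge_item` and `remove_item` are hypothetical stand-ins for `AsyncStorage.mergeItem` and the delete-then-`setItem` pattern above:

```python
# Model of the Liked-map semantics using plain dicts.
def merge_item(store, entry):
    """Shallow-merge an entry into the store, like AsyncStorage.mergeItem."""
    store.update(entry)

def remove_item(store, key):
    """Drop a key from the store, like the delete + setItem in onDislike."""
    store.pop(key, None)

liked = {}
merge_item(liked, {"42": {"key": "42", "time": 1000}})  # user likes block 42
merge_item(liked, {"7": {"key": "7", "time": 2000}})    # user likes block 7
remove_item(liked, "42")                                 # user unlikes block 42
assert list(liked.keys()) == ["7"]
```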
/*
 * callback for each time imageLoaded, textLoaded, textError, or imageError is changed.
 * if the block is loaded successfully, log this in AsyncStorage.
 * otherwise, call onError to return to the previous screen
 */
async onChangeState() {
  if (this.state.imageLoaded && this.state.textLoaded && !this.state.viewRecorded
      && !(this.state.textError && this.state.imageError)) {
    var numAsString = this.props.number.toString();
    var time = Date.now();
    try {
      let blockInfo = {
        [numAsString]: {
          key: numAsString,
          time: time
        }
      }
      await AsyncStorage.mergeItem('History', JSON.stringify(blockInfo));
    } catch (error) {
      console.log(error);
    }
    if (!this.state.textError) {
      this.incrementViews();
    }
    this.setState({ viewRecorded: true })
  }
  if (this.state.textError && this.state.imageError) {
    this.onError()
  }
}
/* run when there is an image error. if there is not also a text error, we display default images */
onImageError = () => {
  this.setState({ imageLoaded: true, imageError: true }, this.onChangeState.bind(this))
}

/* run when there is a text error. if there is not also an image error, we display a default "No artist statement available" */
onTextError() {
  this.setState({ textLoaded: true, textError: true }, this.onChangeState.bind(this))
}
/*
 * run when there is both an image and a text error.
 * this displays an error message and returns the user to the previous screen
 */
onError = () => {
  Alert.alert("Oops, something went wrong. Try searching for a different number.")
  this.props.callback()
}
/* change the status of like and call the appropriate function */
toggleLike() {
  if (this.state.blockLiked) {
    this.onDislike()
  } else {
    this.onLike()
  }
}
/* double tap handler. toggles like when two taps on the picture occur sufficiently close together */
lastTap = null
delay = 3000
imageTap = () => {
  const now = Date.now();
  if (this.lastTap && (now - this.lastTap) < this.delay) {
    this.toggleLike();
  } else {
    this.lastTap = now;
  }
}
render() { return (
{/*show the logo and an error message when the image can't be found*/} {(this.state.imageError && !this.state.textError && this.state.textLoaded) &&
{ (this.state.textLoaded && this.state.imageLoaded && !this.state.blockLiked) &&
{(this.state.textLoaded && this.state.imageLoaded && this.state.textError) &&
/* BlockListView */
import React from 'react';
import { Image, StyleSheet, View, Text, ActivityIndicator, Alert, Platform } from 'react-native';

/* Amplify Imports. RELATIVE PATHS */
import * as queries from '../src/graphql/queries'
import Amplify, { Storage } from 'aws-amplify';
import API, { graphqlOperation } from '@aws-amplify/api';
import aws_exports from '../aws-exports';
Amplify.configure(aws_exports);

const margin = 15;

export default class BlockListComp extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      imageLoaded: false,
      textLoaded: false,
      imageError: false,
      textError: false,
      src: null,
    }
  }
/* get block info from dynamodb */
async fetchBlock() {
  const input = { id: this.props.number };
  const blockInfo = await API.graphql(graphqlOperation(queries.getBlock, input));
  if (blockInfo.data.getBlock == null) {
    this.onTextError();
  } else {
    this.setState({ block: blockInfo });
    this.onBlockLoad();
  }
}

/* get image source from s3 */
getImageSource() {
  var imageName = this.props.number + '-sm.jpg';
  Storage.get(imageName).then(url => {
    this.setState({ src: { uri: url } });
  });
}

componentDidMount() {
  this.getImageSource();
  this.fetchBlock();
}
/* set image loaded to true */
onImageLoad = () => {
  this.setState(() => ({ imageLoaded: true }))
}

/* set text loaded to true */
onBlockLoad = () => {
  this.setState(() => ({ textLoaded: true }))
}
/* run on image error */
onImageError = () => {
  this.setState({ imageLoaded: true, imageError: true })
}

/* run on text error */
onTextError() {
  this.setState({ textLoaded: true, textError: true })
}
/* run on text and image error */
onError = () => {
  Alert.alert("Oops, something went wrong. Try searching for a different number")
  this.props.callback()
}

render() {
  return (
/* NAVIGATOR */
import React from 'react';
import { View, Platform } from 'react-native';
import { createStackNavigator, createDrawerNavigator } from 'react-navigation';
import { Icon } from 'react-native-elements';
import AboutScreen from '../screens/AboutScreen';
import SearchScreen from '../screens/SearchScreen';
import BlockScreen from '../screens/BlockScreen';
import SponsorsScreen from '../screens/SponsorsScreen';
import FavoritesScreen from '../screens/FavoritesScreen';
import HistoryScreen from '../screens/HistoryScreen';
import Colors from '../constants/Colors';

const menuMargin = 18;
const iconSize = 35;

const SponsorsStack = createStackNavigator(
  {
    Sponsors: SponsorsScreen,
  },
  {
    initialRouteName: "Sponsors",
    navigationOptions: ({ navigation }) => ({
      drawerLabel: 'Sponsors',
      title: 'Sponsors',
      headerTitleStyle: {
        ...Platform.select({
          ios: { fontFamily: 'Arial' },
          android: { fontFamily: 'Roboto' },
        }),
        color: "white",
      },
      headerStyle: { backgroundColor: Colors.tintColor },
      headerLeft: (
// Force the drawer navigation to have the right options
SponsorsStack.navigationOptions = {
  drawerLabel: 'Sponsors',
  drawerIcon: ({ tintColor }) => (
AboutStack.navigationOptions = { drawerLabel: 'About', drawerIcon: ({tintColor}) => (
SearchStack.navigationOptions = { drawerLabel: 'Search', drawerIcon: ({tintColor}) => (
FavoritesStack.navigationOptions = { drawerLabel: 'Favorites', drawerIcon: ({tintColor}) => (
HistoryStack.navigationOptions = { drawerLabel: 'History', drawerIcon: ({tintColor}) => (
# IMAGE ANALYSIS
# EXTRACTOR.py
import numpy as np
import cv2
import pickle
import os
import random
def extract_features(image_path, alg):
    image = cv2.imread(image_path)
    try:
        kps, dsc = alg.detectAndCompute(image, None)
    except cv2.error as e:
        print('Error: ', e)
        return None
    return dsc

def batch_extractor(images_path, alg, pickled_db_path):
    # pull images in image path
    files = [os.path.join(images_path, p) for p in sorted(os.listdir(images_path))]
    result = {}
    for f in files:
        print('Extracting features from image %s' % f)
        name = f.split('/')[-1].lower()
        result[name] = extract_features(f, alg)
    # saving all our feature vectors in a pickled file
    with open(pickled_db_path, 'wb') as fp:
        pickle.dump(result, fp)

def main():
    # paths to the thumbnails and the test images
    thumbnail_path = './Pictures'
    test_path = './TestImages'

    # pickled feature vectors are written per descriptor below,
    # following the naming convention {pictures, test}<ALG>.pck

    # all the keypoint descriptors
    kaze = cv2.KAZE_create()
    orb = cv2.ORB_create()
    akaze = cv2.AKAZE_create()
    brisk = cv2.BRISK_create()
    batch_extractor(images_path=thumbnail_path, alg=kaze, pickled_db_path='picturesKAZE.pck')
    batch_extractor(images_path=test_path, alg=kaze, pickled_db_path='testKAZE.pck')

    batch_extractor(images_path=thumbnail_path, alg=akaze, pickled_db_path='picturesAKAZE.pck')
    batch_extractor(images_path=test_path, alg=akaze, pickled_db_path='testAKAZE.pck')

    batch_extractor(images_path=thumbnail_path, alg=orb, pickled_db_path='picturesORB.pck')
    batch_extractor(images_path=test_path, alg=orb, pickled_db_path='testORB.pck')

    batch_extractor(images_path=thumbnail_path, alg=brisk, pickled_db_path='picturesBRISK.pck')
    batch_extractor(images_path=test_path, alg=brisk, pickled_db_path='testBRISK.pck')

main()
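`batch_extractor` pickles a dict mapping each lowercase file name to its descriptor matrix (or `None` when extraction failed). A small round-trip sketch of that on-disk format, using an in-memory buffer and fake descriptor data rather than real OpenCV output:

```python
import io
import pickle
import numpy as np

# Fake feature-vector dict in the shape batch_extractor writes:
# {lowercase file name: descriptor matrix or None}
result = {
    "0001.jpg": np.zeros((5, 64), dtype=np.float32),  # 5 keypoints, 64-dim descriptors
    "0002.jpg": None,                                  # extraction failed for this image
}

buf = io.BytesIO()
pickle.dump(result, buf)     # what batch_extractor does with open(..., 'wb')
buf.seek(0)
restored = pickle.load(buf)  # what batch_compare does with open(..., 'rb')

assert restored["0001.jpg"].shape == (5, 64)
assert restored["0002.jpg"] is None
```

The `None` entries matter downstream: the matcher must skip them, which is why `batch_compare` guards on `thumb_des is None`.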
# FEATURE VECTOR MATCHER.py
import numpy as np
import cv2 as cv
import os
import pickle
import scipy.spatial
import random
from timeit import default_timer as timer

'''
Search for all the images in test_pck_path in pictures_pck_path.
file is the output file.
hamming selects Hamming distance when True, Euclidean distance otherwise.
'''
def batch_compare(test_pck_path, pictures_pck_path, file, hamming=False):
    f = open(file, 'w')
    print("unpickling test")
    with open(test_pck_path, 'rb') as fp:
        data = pickle.load(fp)
    test_names = []
    test_matrix = []
    for k, v in data.items():
        test_names.append(k)
        test_matrix.append(v)
    test_matrix = np.array(test_matrix)
    test_names = np.array(test_names)
    print("done unpickling test")

    print("unpickling thumbnails")
    with open(pictures_pck_path, 'rb') as fp:
        data = pickle.load(fp)
    thumb_names = []
    thumb_matrix = []
    for k, v in data.items():
        thumb_names.append(k)
        thumb_matrix.append(v)
    thumb_matrix = np.array(thumb_matrix)
    thumb_names = np.array(thumb_names)
    print("done unpickling thumbnails")
    if hamming:
        FLANN_INDEX_LSH = 6
        index_params = dict(algorithm=FLANN_INDEX_LSH, table_number=6, key_size=12, multi_probe_level=1)
    else:
        FLANN_INDEX_KDTREE = 1
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)

    # flann implements approximate nearest neighbor search
    flann = cv.FlannBasedMatcher(index_params, search_params)

    j = 0
    for test_des in test_matrix:
        most_kps_in_common = -1
        best_picture = None
        i = 0
        for thumb_des in thumb_matrix:
            good = []
            if thumb_des is not None and test_des is not None:
                matches = flann.knnMatch(test_des, thumb_des, k=2)
                for match in matches:
                    if len(match) > 1:
                        m, n = match[0], match[1]
                        if m.distance < 0.75 * n.distance:
                            good.append([m])
            if len(good) > most_kps_in_common:
                most_kps_in_common = len(good)
                best_picture = thumb_names[i]
            i = i + 1
        # k=1 marks a successful match; this also allows easy counting of successful runs
        if test_names[j] == best_picture:
            k = 1
        else:
            k = 0
        # print(test_names[j] + "," + best_picture + ", " + str(k))
        f.write(test_names[j] + "," + best_picture + ", " + str(k) + "\n")
        j = j + 1
    f.close()

if __name__ == '__main__':
    start_time = timer()
    batch_compare('testKAZE.pck', 'picturesKAZE.pck', 'KAZEoutput.csv', hamming=False)
    end_time = timer()
    time = end_time - start_time
    print("time to run kaze: " + str(time))
    start_time = timer()
    batch_compare('testORB.pck', 'picturesORB.pck', 'ORBoutput.csv', hamming=True)
    end_time = timer()
    time = end_time - start_time
    print("time to run orb: " + str(time))

    start_time = timer()
    batch_compare('testAKAZE.pck', 'picturesAKAZE.pck', 'AKAZEoutput.csv', hamming=True)
    end_time = timer()
    time = end_time - start_time
    print("time to run akaze: " + str(time))

    start_time = timer()
    batch_compare('testBRISK.pck', 'picturesBRISK.pck', 'BRISKoutput.csv', hamming=True)
    end_time = timer()
    time = end_time - start_time
    print("time to run brisk: " + str(time))
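The `m.distance < 0.75 * n.distance` filter inside `batch_compare` is Lowe's ratio test: a match is kept only when the nearest descriptor is clearly closer than the second nearest. A minimal pure-Python sketch of that filter, with `ratio_test` as a hypothetical helper operating on (best distance, second-best distance) pairs:

```python
# Lowe's ratio test over (best, second_best) distance pairs.
def ratio_test(pairs, ratio=0.75):
    """Keep a pair only when the best match is decisively closer
    than the second-best (best < ratio * second_best)."""
    return [p for p in pairs if len(p) > 1 and p[0] < ratio * p[1]]

pairs = [
    (10.0, 40.0),   # unambiguous: 10 < 0.75 * 40, keep
    (30.0, 31.0),   # ambiguous: 30 >= 0.75 * 31, discard
    (5.0, 100.0),   # unambiguous, keep
]
good = ratio_test(pairs)
assert good == [(10.0, 40.0), (5.0, 100.0)]
```

Discarding ambiguous matches this way is what keeps `most_kps_in_common` a meaningful vote count across thumbnails.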
# DATA UPLOADERS
# CSV TO AWS CONVERTER.py
import csv
import threading
import boto3
from PIL import Image
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
import picture_checker as pc

# CSV columns:
# ['Board #', 'Artist Name', 'Grade', 'School / Workshop Location',
#  'Facilitator Name', 'Artist Statement', 'Contact Info', 'Notes']
class Art:
    def __init__(self, number, name="", grade="", artist_statement="", image_path="", location=""):
        self.number = number
        self.name = name
        self.grade = grade
        self.artist_statement = artist_statement
        self.image_path = image_path
        self.views = 0
        self.image = ""
        self.search_file_path(image_path)
        self.location = location
        if grade == "":
            self.grade = "Community Member"
    def set_artist_statement(self, artist_statement):
        self.artist_statement = artist_statement

    def set_image_path(self, image_path):
        self.image_path = image_path

    def get_number(self):
        return self.number

    def get_artist_statement(self):
        return self.artist_statement

    def get_image_path(self):
        return self.image_path

    def search_file_path(self, folder_path):
        try:
            self.image = open(folder_path + self.number + ".jpg", 'r')
        except FileNotFoundError:
            return 0
        return 1

    def all_fields_are_valid(self):
        if self.get_number() == "":
            return False
        if self.grade == "":
            return False
        if self.artist_statement == "":
            return False
        return True

    def exists_in_aws(self, table):
        try:
            response = table.get_item(Key={'blockID': int(self.get_number())})
            item = response['Item']
        except:
            return False
        return True

    def upload_to_aws(self, table):
        try:
            table.put_item(
                Item={
                    'id': int(self.get_number()),
                    'grade': self.grade,
                    'statement': self.artist_statement,
                    'location': self.location,
                    'likes': '0',
                    'views': '0'
                }
            )
            print(self.get_number(), "successfully uploaded")
        except:
            print("Error occurred while uploading to database", int(self.get_number()))
            print("blockID:" + self.get_number())
            print("grade:" + self.grade)
            print("statement:" + self.artist_statement)
    def __str__(self):
        return str(self.get_number() + self.grade + self.artist_statement + self.location + self.image_path)

    def __repr__(self):
        return self.__str__()
    def __eq__(self, other):
        return other.get_number() == self.get_number()

def load_csv(csv_path, image_path):
    art = {}
    with open(csv_path) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            if line_count > 1:
                number = row[0]
                if number == "" or number in art:
                    print("\nNumbering error found on line:", line_count, "Number:", number)
                    if number in art:
                        print("Duplicate blocks found for number", number)
                        print(row)
                    line_count += 1
                    continue
                a = Art(
                    number=row[0],
                    name=row[1].lstrip(' '),
                    grade=row[2].lstrip(' '),
                    artist_statement=row[5].lstrip(' '),
                    image_path=image_path,
                    location=row[-1]
                )
                print(a)
                art[number] = a
            line_count += 1
    return art

def cross_reference(artists, pictures):
    for block_id in artists.keys():
        if block_id not in pictures.keys():
            print("Block " + str(block_id) + " is logged in the spreadsheet but doesn't have a picture\n")
    for block_id in pictures.keys():
        if block_id not in artists.keys():
            try:
                print(int(block_id))
            except:
                continue
            print("Block " + str(block_id) + " has a picture but no entry in the spreadsheet\n")
def upload_to_aws(art):
    client = boto3.resource('dynamodb')
    table = client.Table('BlockTable')
    for k, v in art.items():
        print("Uploading: ", k)
        # pass the bound method and its argument so the upload runs on the worker thread
        threading.Thread(target=v.upload_to_aws, args=(table,)).start()

def main(csv_path="Complete List of Artists and Artist Statements 4.12.19.csv",
         image_path="MASTER INFORMATION/TCP MASTER PHOTOS/"):
    art_dictionary = load_csv(csv_path, image_path)
    pictures = pc.load_pictures(image_path)
    upload_to_aws(art_dictionary)

if __name__ == "__main__":
    main()
# PICTURE CHECKER.py
import argparse
from os import listdir
from os.path import isfile, splitext
import PIL
from PIL import Image
import re
import boto3
from timeit import default_timer as timer
from math import floor, ceil
import threading
from PIL import ExifTags

def print_duplicate_pictures(pictures):
    total_duplicates = 0
    for (block, file_list) in pictures.items():
        if len(file_list) > 1:
            print("Possible duplicate found: ")
            print("Block: " + block)
            print("Possible files: " + str(file_list))
            print("")
            total_duplicates += 1
    print("Total duplicates: " + str(total_duplicates))

def print_picture_list(pictures):
    total_blocks = 0
    for (block, file_list) in pictures.items():
        print(file_list[0])
        total_blocks += 1
    print("Total Blocks: " + str(total_blocks))

def list_duplicate_pictures(pictures):
    duplicates = {}
    for (block, file_list) in pictures.items():
        if len(file_list) > 1:
            duplicates[block] = file_list
    return duplicates

def list_misnamed_pictures(pictures):
    pictures_to_ignore = list_duplicate_pictures(pictures).keys()
    print(pictures_to_ignore)
    for (block, file_list) in pictures.items():
        filename, file_extension = splitext(file_list[0])
        if block in pictures_to_ignore:
            print("Ignoring block " + block + " because duplicate files exist")
            continue
        try:
            int(filename)
        except:
            print(file_list[0], " is not named correctly")
            continue

def list_png(pictures):
    png_list = []
    for (block, file_list) in pictures.items():
        filename, file_extension = splitext(file_list[0])
        if "png" in file_list[0].lower():
            print(file_list[0])
            png_list.append(file_list[0])
    return png_list

def rotate_image(picture):
    try:
        image = picture
        if hasattr(image, '_getexif'):  # only present in JPEGs
            for orientation in ExifTags.TAGS.keys():
                if ExifTags.TAGS[orientation] == 'Orientation':
                    break
            e = image._getexif()  # returns None if no EXIF data
            if e is not None:
                exif = dict(e.items())
                orientation = exif[orientation]
                if orientation == 3:
                    image = image.transpose(Image.ROTATE_180)
                elif orientation == 6:
                    image = image.transpose(Image.ROTATE_270)
                elif orientation == 8:
                    image = image.transpose(Image.ROTATE_90)
        return image
    except:
        return picture

def resize_picture(picture, input_folder, output_folder):
    img = Image.open(input_folder + picture)
    img = rotate_image(img)
    if int(img.size[0]) != int(img.size[1]):
        print(picture, "is not square:", img.size[0], img.size[1], " Attempting to crop")
        if img.size[0] > img.size[1]:
            delta_size = img.size[0] - img.size[1]
            img = img.crop((floor(delta_size / 2), 0, img.size[0] - ceil(delta_size / 2), img.size[1]))
        elif img.size[1] > img.size[0]:
            delta_size = img.size[1] - img.size[0]
            img = img.crop((0, floor(delta_size / 2), img.size[0], img.size[1] - ceil(delta_size / 2)))
        if int(img.size[0]) == int(img.size[1]):
            print("Image crop successful:", img.size[0], img.size[1])
        else:
            print("Image crop not successful:", img.size[0], img.size[1])
            return
    filename, file_extension = splitext(picture)
    if filename + "-sm.jpg" in listdir(output_folder):
        return None  # already resized
    else:
        small = img.resize((250, 250), PIL.Image.ANTIALIAS)
        large = img.resize((500, 500), PIL.Image.ANTIALIAS)
        small.save(output_folder + filename + '-sm.jpg')
        large.save(output_folder + filename + '-lg.jpg')
        return (output_folder + filename + '-sm.jpg', output_folder + filename + '-lg.jpg')

def resize_all_images(pictures, input_folder, output_folder):
    images_to_ignore = list_misnamed_pictures(pictures)
    resized_images = []
    for block, file_list in pictures.items():
        if images_to_ignore is None or block not in images_to_ignore:
            resized_image = resize_picture(file_list[0], input_folder, output_folder)
            if resized_image is not None:
                resized_images.append(resized_image)
    return resized_images

def load_pictures(input_folder):
    print("Looking in folder " + input_folder)
    pictures = {}
    for file in listdir(input_folder):
        if "insync" in file:
            continue
        if isfile(input_folder + file):
            filename, file_extension = splitext(file)
            groups = re.search(r"(\d\B\d+)[\s]?\.{0}", file)
            if not groups:
                groups = re.search(r"^(\d)\.{1}", file)
            if groups is None:
                continue
            block_id = groups.group(0).strip()
            # print("filename:" + filename + "\next: " + file_extension)
            if block_id not in pictures:
                pictures[block_id] = []
            pictures[block_id].append(file)
    return pictures

class AWSUploader(threading.Thread):
    def __init__(self, picture, block_id, bucket="block-thumbnails"):
        threading.Thread.__init__(self)
        self.picture = picture
        self.bucket = bucket
        self.block_id = block_id
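The EXIF branches in `rotate_image` map orientation tag values 3, 6, and 8 to fixed rotations before resizing. As a standalone sanity check of that mapping, here is a sketch with a hypothetical `rotation_degrees` helper (not part of the uploader itself):

```python
# Map EXIF Orientation tag values to counter-clockwise rotation degrees,
# mirroring the transpose calls in rotate_image. Unhandled values
# (including 1, "normal") rotate by 0.
def rotation_degrees(orientation):
    return {3: 180, 6: 270, 8: 90}.get(orientation, 0)

assert rotation_degrees(3) == 180  # upside down -> ROTATE_180
assert rotation_degrees(6) == 270  # rotated 90 CW -> ROTATE_270
assert rotation_degrees(8) == 90   # rotated 90 CCW -> ROTATE_90
assert rotation_degrees(1) == 0    # normal orientation, leave as-is
```

Applying the rotation before the square-crop step matters: cropping a sideways image would cut away the wrong edges.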
    def run(self):
        s3 = boto3.client('s3')
        print("s3.upload_file(" + self.picture[0], self.bucket, "public/" + str(self.block_id) + "-sm.jpg)")
        print("s3.upload_file(" + self.picture[1], self.bucket, "public/" + str(self.block_id) + "-lg.jpg)")
        s3.upload_file(self.picture[0], self.bucket, "public/" + str(self.block_id) + "-sm.jpg")
        s3.upload_file(self.picture[1], self.bucket, "public/" + str(self.block_id) + "-lg.jpg")

def upload_folder_to_aws(input_folder, output_folder):
    print("Uploading all images in folder: ", input_folder)
    pictures = load_pictures(input_folder)
    print("Resizing images")
    valid_pictures = resize_all_images(pictures, input_folder, output_folder)
    beginning_time = timer()
    threads = []
    for picture in valid_pictures:
        print(picture[0], picture[1])
        groups = re.search(r"(\d\B\d+)[\s]?\.{0}", picture[0])
        if groups is None:
            continue
        block_id = groups.group(0).strip()
        if "\(" in block_id:
            print("Skipping block with malformed id:", block_id)
            continue
        block_thread = AWSUploader(picture, block_id)
        threads.append(block_thread)
        # start every thread first, then join them all below so uploads run concurrently
        block_thread.start()
    for thread in threads:
        thread.join()
    ending_time = timer()
    print("Uploading pictures took: ", ending_time - beginning_time)

def main(input_folder, output_folder):
    pictures = load_pictures(input_folder)
    upload_folder_to_aws(input_folder, output_folder)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('-idir', '--idirectory', help='directory of input images', required=False, default="./")
    parser.add_argument('-odir', '--odirectory', help='directory of output images', required=False, default="./")
    args = parser.parse_args()
    main(args.idirectory, args.odirectory)
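`load_pictures` pulls a block number out of each file name with regular expressions before grouping files by block. A simplified, hypothetical sketch of that idea (the production patterns above differ, and `block_id` here is a stand-in helper):

```python
import re

# Extract the leading digit run from a picture file name, or None when the
# name does not start with a block number.
def block_id(filename):
    m = re.match(r"(\d+)", filename)
    return m.group(1) if m else None

assert block_id("1234.jpg") == "1234"
assert block_id("56-sm.jpg") == "56"   # resized variants keep their block number
assert block_id("notes.txt") is None   # non-block files are skipped
```

Grouping by the extracted id (rather than the full file name) is what lets the checker spot duplicates: two files that yield the same block id land in the same list.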