RBE WIKI for INDEPENDENT LEARNING Interactive Qualifying Project

Total pages: 16

File type: PDF; size: 1020 KB

RBE WIKI FOR INDEPENDENT LEARNING

Interactive Qualifying Project Report completed in partial fulfillment of the Bachelor of Science degree at Worcester Polytechnic Institute, Worcester, MA

Submitted to: Professor Michael Gennert (advisor)

Timon Butler
Jonathan Estabrook
Joseph Funk
James Kingsley
Ryan O'Meara

April 26, 2011

Advisor Signature

Abstract

This project, which performed research on web-based academic resources, revolved around the focus question "How does one utilize web-based communication media in order to facilitate the self-perpetuating exchange of knowledge in the academic engineering community?" It examined the social implications of self-sustaining web-based resources and found that any new web resource needs to fill a previously vacant niche in order to gather the user base required to be self-sustaining.

Acknowledgements

The members of the RBE Wiki for Independent Learning Interactive Qualifying Project would like to extend thanks to the students, teaching assistants, and professors of the Worcester Polytechnic Institute Robotics Program for their time, input, and feedback related to the Uki website. Thanks also to our roommates for dealing with the erratic schedules we adopted to complete this project, and to our advisor, Professor Michael Gennert, for advising this project and helping us turn a simple idea into something with the potential to greatly benefit the WPI community. So again, thank you for all of your help, from the Uki team.

Authorship

The RBE Wiki for Independent Learning Interactive Qualifying Project was carried out by five individuals: Timon Butler, Jonathan Estabrook, Joseph Funk, James Kingsley, and Ryan O'Meara. What the project accomplished was the result of the group's collective effort. Specific task leadership was assigned to particular members, as outlined below.

Primary User Audience Data Collection: Timon Butler
Back End Server Operations: Jonathan Estabrook, James Kingsley
Front End Site Operations: Jonathan Estabrook, Ryan O'Meara
Base Site Content: Joseph Funk, Ryan O'Meara
Final Project Report: Timon Butler, Jonathan Estabrook, Joseph Funk, James Kingsley, Ryan O'Meara

Contents

1 Introduction
2 Background
   2.1 Rationale for the Project
   2.2 Current State of Web Based Resources
      2.2.1 Article Based Resources
      2.2.2 Previous Research
      2.2.3 Previous Solutions
3 Methodology
   3.1 Focus Groups
   3.2 Survey
   3.3 Creation of Server
   3.4 Construction of the Site
   3.5 Addition of Base Content
   3.6 Tool Set
   3.7 Responsibility, Privacy and Security Issues
4 Procedure
   4.1 Focus Groups
   4.2 Surveys
   4.3 Research
   4.4 Creation of Server
   4.5 Construction of the Site
   4.6 Release to Test User Group
5 Results
   5.1 Surveys
   5.2 User Motivation
   5.3 Ease of Use vs Volume of Content
   5.4 Effectively Spreading the User Base
   5.5 Effective Sources for Content
   5.6 Difficulties with this Resource
   5.7 Advantages to this Resource
   5.8 Selection of Primary Supported Media
6 Conclusion
   6.0.1 Lessons Learned
   6.0.2 Final Remarks
7 Recommendations
   7.1 Notes to Consider
A How To Use Uki
   A.1 Local Settings
   A.2 Extensions
   A.3 Server/Database Operations
   A.4 Site Organization
      A.4.1 Portals
      A.4.2 Namespaces
   A.5 Article Formatting
   A.6 Analytics
B Focus Group Data
C Survey Data

List of Figures

2.1 Wikipedia Article Count Over Time [15]
2.2 Sample Layout of a Typical Wiki Page [15]
A.1 A typical portal on Uki
A.2 An article from Uki
A.3 Heatmap Overlay of Uki Main Page
A.4 Dashboard for Open Web Analytics

List of Tables

Chapter 1

Introduction

Information is what allows humanity, as a whole, to move forward from today to tomorrow. We are constantly building new knowledge using old knowledge as a starting point. With the advent of the Internet and new ways to store and retrieve information, knowledge can be retrieved faster and in higher volumes than ever before. The sheer volume of information, opinions, and discussions presented can easily surpass the amount a human can possibly process. This overwhelming amount of available information can lead to what is called "information overload" [23]. Without a way to intelligently access and view content, it is difficult for a user to find relevant information amid the sea of available data. Web resources must address this problem when they wish to provide information to their users, and the approach will differ based on the target audience the site is designed for.

However much information is available on the web, in printed media, and in classroom environments, it must be organized into manageable subject divisions in order to be useful. Either this must be done internally on a single site, or it must be done by searching for information more effectively, which is the approach taken by search engines such as Google [5]. However, even better search is not effective if a site is not organized in a useful way: searching can only infer so many related subjects from a single query. As a result, this project chose to focus on the organization of a single web-based resource that encompasses the needs of a specific community. Making such a resource effective entails either reorganizing a massive number of web resources, libraries, and classroom plans, or converting information into a common format and organizing it as it is converted. This, in turn, presents the problem of converting massive amounts of data into a common format. In the context of web-based resources, there are two common methods for this. The first is used by curated resources, such as the Encyclopedia Britannica [4], where a few approved people are entrusted to enter information that is written to a high literary standard. This is currently accepted as the safer option by the larger academic community. The second method is to let the information be compiled and refined by the very community that uses the resource. Perhaps the largest and best-known example of this approach is Wikipedia [15]. When this method works, the community regulates and corrects its own errors, and through this process constantly improves the quality and presentation of the resource's content. Each of these methods is an effective way to address the problem of information overload.

As a result of these problems, our group questioned how effectively existing information resources can be used for practical applications, particularly by engineers. Specifically, how does one utilize web-based communication media in order to facilitate the self-perpetuating exchange of knowledge in the academic engineering community? Our project set out to study web-based resources run by a user base interested in information on engineering topics. Questions the project sought to answer included which format is best suited to presenting information to such an audience, how to encourage a resource for this audience to become self-perpetuating, and the positives and negatives of such an approach.

Our main vehicle for this endeavour was a case study using a wiki web resource to store knowledge relating to the various engineering fields studied at Worcester Polytechnic Institute (WPI). The initial target for this resource was the Robotics Engineering Program, as it encompasses a wide engineering subject base. The resource was built on the MediaWiki platform [9], which is also the basis for websites such as Wikipedia. Our project created a platform called Uki, added a small amount of content, a tool set, and an organizational framework, and then released it to our target audience. Our group conducted surveys of the target audience after the resource had been released to gauge the site and the users' opinions on what qualities would be most useful for this type of resource.

Our group anticipated that the project would result in a small but growing web resource for the WPI Robotics Engineering Program, which would eventually expand to encompass an increasing number of the engineering disciplines studied at WPI and other universities. Further, our group expected to gather a large quantity of data on how the web resource was used during our study and on what format students of engineering disciplines find most useful when searching for information in a web-based resource. The importance of this project lies in its potential to be a model for similar resources, which would allow more web-based resources tailored to their audiences to be created. This could result in a more centralized knowledge
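As an illustration of the kind of instrumentation a study like this relies on when gauging community activity on a MediaWiki-based site such as Uki, the following is a minimal sketch (not taken from the report) that polls a wiki's recent-changes feed through the standard MediaWiki Action API. It assumes Python with the requests library, and the endpoint URL is a hypothetical placeholder rather than the real Uki address.

    # Minimal sketch: count recent edits on a MediaWiki site via its Action API.
    # The endpoint URL below is a hypothetical placeholder, not the real Uki address.
    import requests

    API_URL = "https://uki.example.edu/api.php"

    def recent_edit_count(limit=500):
        """Return how many recent changes the wiki's API reports (up to `limit`)."""
        params = {
            "action": "query",          # standard MediaWiki query module
            "list": "recentchanges",    # the wiki's recent-changes feed
            "rcprop": "title|timestamp|user",
            "rclimit": limit,
            "format": "json",
        }
        response = requests.get(API_URL, params=params, timeout=10)
        response.raise_for_status()
        return len(response.json()["query"]["recentchanges"])

    if __name__ == "__main__":
        print("Recent edits reported by the wiki:", recent_edit_count())

A count like this, sampled over time, is one simple way to observe whether a community resource is trending toward the self-perpetuating contribution pattern the project hoped for.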