Cloud DevOps Engineer
Clearance Required: Active Top Secret with Polygraph
Location: Emerson (Laurel, MD)
Anticipated Start Date: September 1, 2017
PCI is actively interviewing candidates for several exciting new positions supporting our government customer on our prime contract with the Fort Meade Agency's Research Directorate.
Scope/Description of Work: The Cloud DevOps team will build, tune, and manage servers and services that run on 4,000+ node operational clusters to support search and big data analytics for thousands of analysts. The DevOps team combines software development, networking, and large-scale systems engineering to build and run fault-tolerant software services and compute infrastructures. In this environment, a big data cluster is a highly connected operational platform built from thousands of commodity servers running Linux, Hadoop, and Apache Accumulo that are optimized and continually improved.
Responsibilities include:
- Development of Map/Reduce jobs processing >100 GB of data; troubleshooting and tuning multiple parallel processes operating within specified constraints and resources
- Development, deployment, debugging, and tuning of applications atop a multi-tenant Accumulo environment, including managing performance trade-offs among Accumulo configuration options based on workload and specified performance requirements
- Developing and implementing network designs for compute cluster environments, including IP address schemes, routing, firewalls, load balancing, Ethernet channel bonding, and troubleshooting of network hardware and software issues
- Developing, debugging, and tuning data-driven applications, and deployment of large-scale data-driven applications
- (For senior positions) Extensive configuration management and use of system provisioning tools within Linux environments
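For candidates unfamiliar with the paradigm, the Map/Reduce pattern referenced above can be sketched in plain Java (using streams rather than the Hadoop API, so it runs standalone; the class name WordCountSketch is illustrative only):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountSketch {

    // Map phase: split each line into (word, 1) pairs.
    // Shuffle + reduce phase: group by word and sum the counts.
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+"))) // map
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));   // reduce
    }

    public static void main(String[] args) {
        List<String> lines = List.of("big data analytics",
                                     "big clusters run big data");
        System.out.println(wordCount(lines).get("big")); // prints 3
    }
}
```

In a real Hadoop job the map and reduce steps are separate classes distributed across the cluster, with the framework handling the shuffle between them; the data flow, however, is the same.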
Qualifications (multiple levels of expertise are available):

Level 2 (5 yrs experience w/ degree):
- Within the last eight (8) years, a minimum of five (5) years of experience deploying, managing, and troubleshooting system issues with one or more of the following: Apache Hadoop, Cloudera Distribution of Hadoop (CDH), and/or Hortonworks Data Platform (HDP)
- Within the last eight (8) years, a minimum of five (5) years of experience developing applications for and working with Apache Accumulo
- Within the last five (5) years, a minimum of two (2) years of experience developing software within a Linux environment
- Within the last five (5) years, a minimum of two (2) years of experience programming with Java and deploying applications developed with Java
- Demonstrated understanding of distributed data structures and probabilistic data structures, including the development and implementation of algorithms within and for the same

Level 3 additional requirements (8 yrs w/ degree):
- Within the last five (5) years, a minimum of three (3) years of experience using some combination of the strace, dstat, perf, perf top, vmstat, and/or netstat Linux tools for debugging and performance analysis in a distributed computing environment
- Demonstrated experience with common Java deployment and configuration for the JVM (Java Virtual Machine), including long-running JVM processes and the tuning and troubleshooting of issues for the same
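As context for the Level 3 JVM requirement: the heap and garbage-collection metrics examined when tuning long-running JVM processes can be read in-process through the standard java.lang.management MXBeans, complementing OS-level tools like vmstat and perf. A minimal sketch (the class name JvmHealthCheck is illustrative only):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmHealthCheck {

    // Current heap usage of this JVM, in bytes.
    public static long heapUsedBytes() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    // Total completed GC cycles across all collectors in this JVM.
    public static long totalGcCount() {
        long count = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) count += c; // -1 means the collector does not report counts
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.printf("heap used: %d bytes, GC cycles so far: %d%n",
                heapUsedBytes(), totalGcCount());
    }
}
```

Sampling these values over time (or exporting them via JMX) is a common first step in diagnosing memory pressure in long-running services before reaching for heavier tooling.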