Using Analytics to Guide Improvement during an Agile–DevOps Transformation
FOCUS: ACTIONABLE ANALYTICS

Barry Snyder, Fannie Mae
Bill Curtis, CAST

// Fannie Mae IT has transformed from a waterfall organization to a lean culture enabled by agile methods and DevOps. The article discusses how analytics were used, along with challenges in selecting measures and implementing analytics, in an agile–DevOps transformation. //

Software analytics [1] have proven increasingly useful in industrial case studies for guiding the adoption and evaluating the benefits of advanced software techniques [2], [3]. In 2015, Fannie Mae, a provider of liquidity for mortgages in the US housing market, undertook a major transformation from its traditional waterfall approach to a culture built on agile methods and DevOps practices [4]. The transformation was motivated by Fannie Mae's need to adjust its IT capabilities to meet the requirements and pace of a rapidly evolving mortgage market. In 2012, Fannie Mae had only 10 teams using agile-like techniques. Releases took nine to 18 months, provisioning of environments took two to four months, and quality was not consistently measured.

Fannie Mae's IT department was a complex, technically polyglot ecosystem composed of 461 applications, several hundred utilities, and almost 18,000 open source components. As part of reducing this complexity, Fannie Mae initiated two projects to implement agile methods and DevOps as enterprise capabilities. Executive management required periodic empirical evaluation of progress in quality, productivity, and delivery speed to monitor the effectiveness of the agile–DevOps deployment.

Implementing the Analytics Platform

Fannie Mae started by leveraging what already existed and filled the gaps with new solutions. While this got the transformation moving forward, it revealed that a highly customized solution can become an impediment. For example, the production release platform was built out of three point-solution products with significant custom scripting to unify them. While it conformed to Fannie Mae's tightly governed environment, it impeded transformation because a major update to the platform would take more than nine months when team needs were measured in weeks.

In comparison, the CAST Application Intelligence Platform (AIP), adopted for application analysis and measurement, required no customization and was implemented out of the box. This platform was deployed in 2014 as an enterprise platform providing technology-independent structural-quality metrics that enabled consistent analysis and comparison across Fannie Mae's application portfolio. Structural quality represents how well the system is architected and the code is engineered. Since manual structural-quality analysis of entire applications was infeasible, an automated platform provided a common analytics solution across the enterprise. The only customization was an in-house framework that automated the AIP onboarding process, a necessity for providing a self-service capability integrated into the DevOps toolchain.

AIP analyzes an entire multilayer, multilanguage application system, from the user interface to the database structure [5]. To evaluate architecture, structural-quality analytics are produced during integration rather than during unit testing, and they replace some of the information previously provided by formal inspections. An abstract structural representation of the system is generated from this analysis and evaluated against more than 1,200 rules of good architectural and coding practice. Violations of these rules are weighted and aggregated into measures that highlight the operational and cost risks of the application system in five areas: robustness, security, performance efficiency, changeability, and transferability (or comprehensibility). These five measures are aggregated into a sixth, the total quality index (TQI), which provides a summary structural-quality score.
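To make that aggregation concrete, the following minimal sketch rolls weighted rule violations up into per-factor scores and a summary index. The rule names, weights, size normalization, and 1-to-4 scoring scale are hypothetical illustrations of the general approach, not CAST AIP's proprietary model.

from dataclasses import dataclass

# Hypothetical stand-ins for scan output: each violation carries the
# rule it broke, the health factor that rule belongs to, and a weight
# reflecting the rule's criticality.
@dataclass
class Violation:
    rule_id: str
    health_factor: str
    weight: float

HEALTH_FACTORS = [
    "robustness", "security", "performance_efficiency",
    "changeability", "transferability",
]

def factor_scores(violations, opportunities):
    """Aggregate weighted violations into a score per health factor.

    opportunities[factor] approximates how many times that factor's
    rules could have been violated, normalizing for application size.
    """
    scores = {}
    for factor in HEALTH_FACTORS:
        weighted = sum(v.weight for v in violations
                       if v.health_factor == factor)
        density = weighted / max(opportunities[factor], 1)
        # Map violation density onto a bounded 1.0 (worst) to 4.0 (best) scale.
        scores[factor] = max(1.0, 4.0 - 3.0 * min(density, 1.0))
    return scores

def total_quality_index(scores):
    """Summarize the five factor scores into one number (a plain average
    here; a real index would weight the factors differently)."""
    return sum(scores.values()) / len(scores)

violations = [
    Violation("avoid-sql-inside-loop", "performance_efficiency", 9.0),
    Violation("unvalidated-user-input", "security", 8.0),
    Violation("empty-catch-block", "robustness", 5.0),
]
opportunities = {factor: 100 for factor in HEALTH_FACTORS}
scores = factor_scores(violations, opportunities)
print(scores)
print("TQI:", round(total_quality_index(scores), 2))

The essential property the sketch illustrates is normalization: violation weights are divided by the opportunities to violate each factor's rules, so scores stay comparable across applications of very different sizes and technologies, which is what portfolio-wide comparison requires.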
Originally, analytics were delivered by a central team, so development teams were constrained in using measures by how quickly they were received. This bottleneck led to the development of a self-service interface enabling teams to choose the frequency of scanning their applications and to consume analytics at their own pace. This interface increased platform usage to 481% of its prior level, from 340 scans during the year before the self-service interface to 1,636 scans during the year after deployment.

Analyzing Productivity

Measuring productivity improvement was a critical challenge in monitoring the transformation. Several measures (story points, dollar spend, lines of code (LOC), code review defects, code coverage, etc.) were evaluated and eliminated because they did not provide equivalent comparisons that were independent of technology and comparable across applications. For analyzing productivity, Fannie Mae chose to measure functional size rather than LOC. The Consortium for IT Software Quality (CISQ) recently produced a standard, through the Object Management Group, for Automated Function Points (AFPs) [6] that was based on counting guidelines published by the International Function Point Users Group (IFPUG) [7]. Since AFPs were generated automatically by AIP when it analyzed the structural attributes of a code base, they gained rapid acceptance among early-adopter teams.

AFPs provided an outcome-based evaluation of productivity: the amount of functionality delivered per unit of effort in sprints. After aggregating results across all projects, the enterprise could rapidly adjust activities to address progress shortcomings and support transformation accelerators. In turn, teams could rapidly pinpoint which process and technology improvements drove the greatest impact.

For example, one of three squads working on elements of the same product line had adopted behavior-driven development techniques using Selenium for testing. Analyzing data on effort, size, and development context combined from different sources indicated that the squad's productivity rose by 4.9% over three sprints due to a shortened customer feedback cycle. This led the remaining squads to adopt Selenium.
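As a concrete illustration of the outcome-based measure, the sketch below computes functionality delivered per unit of effort from per-sprint scan deltas and compares sprints. The field names, effort figures, and AFP counts are hypothetical (chosen to echo the 4.9% rise reported above), and real functional size comes from AIP's automated CISQ counting rather than this simplified sum.

# Hypothetical per-sprint data: AFPs added, modified, and deleted come
# from consecutive platform scans; effort hours come from agile tooling.
sprints = [
    ({"added": 20, "modified": 10, "deleted": 2}, 520),
    ({"added": 21, "modified": 10, "deleted": 1}, 515),
    ({"added": 21, "modified": 10, "deleted": 2}, 511),
]

def afps_delivered(scan_delta):
    """Functional size delivered in a sprint: AFPs added, modified,
    or deleted between consecutive scans."""
    return scan_delta["added"] + scan_delta["modified"] + scan_delta["deleted"]

def productivity(scan_delta, effort_hours):
    """Outcome-based productivity: functionality per unit of effort."""
    return afps_delivered(scan_delta) / effort_hours

rates = [productivity(delta, hours) for delta, hours in sprints]
change = 100.0 * (rates[-1] - rates[0]) / rates[0]
print("AFP per hour, by sprint:", [round(r, 4) for r in rates])
print(f"productivity change over three sprints: {change:+.1f}%")

Rolled up across squads and joined with context data, such as which squads had adopted which testing techniques, comparisons like this are what let the enterprise see which practice changes drove the greatest impact.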
Analyzing Structural Quality

Structural-quality measures were joined with defect ratio, dollar spend, release cycle time, build count, and other measures to analyze the impact of agile–DevOps transformation practices. These analyses provided equivalent comparisons regardless of language, analyzed the complexities of multilayer transactions, and could be combined with other measures to calculate ratios such as spend per AFP and defect ratio per AFP.

Application teams scanned builds one or more times during a sprint to detect structural-quality flaws and produce the analytic measures. Scan outputs provided early detection of structural flaws, enabling teams to address the most critical issues before release. Prior to analyzing structural quality, teams would occasionally discover these flaws in testing, but most often after an application was released into production. Most teams began scanning at a minimum of once per sprint (every two weeks). Those scanning several times a week, or even daily, could fix critical defects within a day or two rather than waiting until the next sprint. These measures, along with detailed information about the location and severity of defects, were provided to application teams via engineering dashboards.

Application teams used these measures to iteratively improve quality in the five structural-quality dimensions while monitoring overall quality via the TQI score. Structural-quality analytics enabled teams to look for code-quality issues presenting short-term risk while simultaneously addressing long-term architectural limitations and costly maintenance early in construction. Fannie Mae intended to improve the structural quality of all applications to a “green” state, roughly a 70th-percentile score on the TQI. At the start of the journey in 2015, most applications were “yellow” and “red” (scores […]).

However, the structural-quality output did not align with other enterprise metrics. Aligning the metrics at the enterprise level and across the DevOps tools was a challenge, as the tools were in silos. As the transformation evolved over the first two years, each platform built one-off solutions to collate its data with external enterprise metrics, which resulted in metric silos. To eliminate the silos, Fannie Mae constructed an enterprise data mart to align measures and analytics from all enterprise tools covering financials, enterprise program management, software development (DevOps, test automation, test data management, agile methods, code quality, etc.), and operations. The collation of data […] productivity and quality efficiency as trailing indicators of transformation success to demonstrate return on investment.

Analyzing Transformation Improvement

Figure 1 displays how Fannie Mae measured and monitored productivity in functionality delivered (AFPs added, modified, or deleted) and quality (critical defects per AFP) as the transformation was deployed across applications. Critical defects were those rated from 7 to 9 on a 9-point severity scale. During the first quarter of 2015, data aggregated across applications indicated that the transformation began delivering substantially more functionality without […]
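The data-mart collation and the Figure 1 indicators described above can be pictured with a toy example: per-tool metric extracts are loaded into one store and joined on shared application and period keys, after which trailing indicators such as spend per AFP and critical defects per AFP fall out of a single query. The schema, table and application names, and figures below are hypothetical, and a real enterprise data mart would be fed by ETL jobs from each tool rather than inline inserts.

import sqlite3

# One in-memory store standing in for the enterprise data mart; the
# scans table mimics extracts from the code-quality platform and the
# finance table mimics extracts from financial tooling.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE scans   (app TEXT, quarter TEXT,
                          afps_delivered INTEGER, critical_defects INTEGER);
    CREATE TABLE finance (app TEXT, quarter TEXT, dollar_spend REAL);
""")
conn.executemany("INSERT INTO scans VALUES (?, ?, ?, ?)", [
    ("loan-servicing", "2015Q1", 120, 18),
    ("loan-servicing", "2015Q2", 190, 14),
])
conn.executemany("INSERT INTO finance VALUES (?, ?, ?)", [
    ("loan-servicing", "2015Q1", 300000.0),
    ("loan-servicing", "2015Q2", 310000.0),
])

# Joining the extracts on shared keys yields the trailing indicators:
# functionality delivered, spend per AFP, and critical defects per AFP.
rows = conn.execute("""
    SELECT s.app, s.quarter, s.afps_delivered,
           f.dollar_spend / s.afps_delivered AS spend_per_afp,
           CAST(s.critical_defects AS REAL) / s.afps_delivered AS defects_per_afp
    FROM scans s
    JOIN finance f ON f.app = s.app AND f.quarter = s.quarter
    ORDER BY s.quarter
""").fetchall()

for app, quarter, afps, spend, defects in rows:
    print(f"{app} {quarter}: {afps} AFPs delivered, "
          f"${spend:,.0f} per AFP, {defects:.2f} critical defects per AFP")

In this toy data, the second quarter delivers more functionality at a lower cost and defect density per AFP, which is the kind of quarter-over-quarter trend Figure 1 tracks across the real application portfolio.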