CIC Guide: Realization

Enterprise DevOps realities and a path towards Continuous Delivery

A Creative Intellect Consulting Guide Report

This guide offers direction for improving collaboration between IT development and operations teams, and a path towards Continuous Delivery. Continuous Delivery is synonymous with an ability to increase release rates, providing IT organizations with the capacity and flexibility to react to the changing demands of their business clients. One such demand is support for driving competitive advantage. Our direction is determined by exploring the collaborations between IT development and IT operations being employed by a cross-section of organizations across the market landscape. We also investigate their understanding of, and capacity for, Continuous Delivery within the context of development and operational interactions. Ultimately, it is a guide that looks to identify the various applications and deployment models for Continuous Delivery, with a focus on exploring the different points of “trust” at which Continuous Delivery can occur effectively.

This guide also profiles the support delivered to improving Enterprise development and operations relations (DevOps) by Serena Software’s Release Management and Automation portfolio. In particular, the guide looks at the vendor’s goals for enabling organizations to better support Continuous Delivery for the most appropriate application targets.

Bola Rotibi, Research Director, Creative Intellect Consulting

Ian Murphy, Principal Analyst, Creative Intellect Consulting

April 2013

Creative Intellect Consulting is an analyst research, advisory and consulting firm focused on software development, delivery and lifecycle management across the Software and IT spectrum along with their impact on, and alignment with, business. Read more about our services and reports at www.creativeintellectuk.com


Table of Contents

Executive Summary

Ten guide points to establishing an environment geared for Continuous Delivery (CD)

The Guide in Detail: Defining Continuous Delivery

Realizing the cadence for Continuous Delivery

Continuous Delivery attributes in focus

Continuous Delivery Application

Enterprise DevOps: A focus for IT agility and adaptability

Serena’s Continuous Delivery and DevOps Foundations


Executive Summary

Businesses want something different, something that can enable them to capture new markets. They want IT to deliver competitive advantage, not once a year, but on a continuous rolling basis.

The success of companies such as Amazon, Apple, Salesforce, Google and Facebook shows that this can be done. Software can be delivered quickly and reliably, leading to massive growth in the customer base and increased revenues.

There is, of course, a significant difference between many Enterprise customers and the organizations listed above. For many IT departments, delivering software quickly and reliably is not easily achievable, given the processes in use. Enterprise customers have complex, hybrid infrastructure environments and platforms (mainframes, minicomputers, distributed computing platforms) and multiple endpoint devices. Their infrastructure contains components that may be decades old, but which are still running critical applications. The software they run can also be old, meaning critical systems can be hard to replace. Finally, the processes that enterprises use will generally have evolved over time to accommodate multiple generations of IT.

The aim of this report is to determine real world approaches for development and operations collaborations, and the application environments and organizational maturity where Continuous Delivery can be applied effectively. It also looks to help establish the mapping between existing ITIL/ITSM transformation strategies and the transition to Continuous Delivery and Cloud provisioning.

In doing so, we aim to establish a definition for Continuous Delivery and the attributes for Enterprise DevOps that reflect the reality of what organizations are looking to achieve – namely, to improve the speed and frequency of deploying quality software and to become a more adaptable and agile IT organization.

Input from “On the ground” experiences

As part of the research for this report, Creative Intellect Consulting Ltd (www.creativeintellectuk.com) interviewed a wide range of organizations. Respondents came mainly from large and small enterprises, as well as release management and continuous development and deployment consultants dealing with the following markets: US government departments, global financial institutions, European tax and revenue offices, automotive, retail and healthcare.

The goal of each interview was to understand: existing operational and development processes; the level of focus on DevOps concerns; strategies for moving towards Continuous Delivery; the levels of self-service support; considerations for Cloud services and future needs. We sought to understand what was happening in practice, the technology and the process gaps and what was needed to meet the changing demands of the business.

Rather than presenting the interviews in detail, this report looks at the patterns and approaches that are common (or, where relevant, different).

A need for DevOps

For many organizations out in the marketplace, the workflows within IT development and operations teams are generally well established. There are relatively solid operational processes in place, particularly for workflows such as release management, that target a single platform technology (Java, .NET, Mainframes). Unfortunately, collaboration and communication between these two important stakeholder teams are not always well coordinated or orchestrated. In some extreme cases they are not taking place at all, with development teams metaphorically throwing their delivered code or application modifications “over the wall” to the operations team.


Welcomed acknowledgement for a “DevOps” focus

Within the organizations interviewed, there is great appetite for improving operational processes as well as the collaboration and alignment between the development and operations teams. Many of those interviewed had attained reasonable levels of maturity in their IT organizational processes, although not all processes were consistently applied throughout. There was keen awareness of the need to have in place important operational processes, as defined by the ITIL and ITSM libraries, and best practices to help support throughput gains made from agile development procedures. However, while ITIL and ITSM provided the foundations for many improvement strategies, the level of consistent support and implementation was patchy.

A vision for Continuous Delivery...

Continuous Delivery is a growing focus area for both IT development and operations organizations, particularly with respect to supporting service automation and business agility through the fast deployment of desired services.

The number of releases varies per organization, but there is no doubt that many would like it to be on an upward trajectory. Of those interviewed, many felt that “Continuous Delivery” was an initiative that aligned well with the ability to release more frequently, no matter who executes the release management process. In short, it is about smoothing the path to production deployment.

Some of those interviewed see potential in mobile apps: not business critical, but an opportunity for the development team to deploy directly to an internal App Store environment hosted via the Cloud.

...but a lack of clarity in its application

The capacity for Continuous Delivery is not always well understood, especially in the context of development and operational throughput. Many organizations are looking to understand how the ITIL/ITSM model, which they have begun to use to transform and improve their operational processes, maps to support both Continuous Delivery and Cloud provisioning.

The drive for Continuous Delivery presents a mandate for DevOps

One of the insights that a drive for Continuous Delivery firmly exposes is that it is about how you manage the process between IT development and operations teams to create a smoother bridge and progression path to deployment. It is not about trying to create a third “DevOps” department made up of the other two. Organizations need to move away from the mindset of appointing a DevOps manager, a DevOps this and a DevOps that. Once lead roles within both teams work on managing handover and smoothing the process, rather than treating it as a series of stop-start points, organizations will find that DevOps becomes a given. Ultimately, DevOps is about process integration with the right level of collaboration and communication occurring between IT development and operations. It is that simple.

Tooling direction required

Many organizations do not necessarily know which specific tools are necessary to support Continuous Delivery. This is not altogether surprising when both IT development and operations teams already have considerable arsenals of tooling platforms in place to perform a mix of management and process functions (e.g. help desk, problem resolution, configuration management, change management, defect tracking etc.). A question many ask is: how should the various tool chains already in place interact and align to support Continuous Delivery or Deployment? More importantly, what should be the makeup of the various tool chains in operation to deliver strong Continuous Delivery or Deployment support?


Vendors like Serena, with its focus on orchestration of workflows, especially for release management and automation, and IBM, with its orchestration for Cloud deployment and workflows, have been smart in their directions for supporting Continuous Delivery.

Improvements in core systems required

Enterprises have been deploying distributed applications for over two decades. In the last decade, two things have exponentially increased the complexity. The first is the increase in the reuse of components across multiple applications. The second has been the introduction of virtualization, meaning that the physical location of distributed application components is hard to track.

The primary tools used inside organizations to maintain a record of hardware and software are Asset Management and the Configuration Management Database (CMDB). In practice, however, many applications, and particularly distributed system components, are not properly recorded. There are concerns that many CMDBs have not caught up with the wide use of virtualization, so the creation of multiple instances of an application may go completely unrecorded, leading to patching errors.

Another key concern is that as we move towards a world where everything is software defined, the CMDB, in its current guise, has no way of distinguishing between hardware and software changes. For example, change a firewall and most CMDBs will see that as a hardware change. Open a firewall port to allow specific traffic through as part of a software upgrade and it will not be recorded by the CMDB, because it is not seen as a change to the firewall per se.
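To illustrate the gap, here is a minimal Python sketch of a change record that treats the layer being changed as a first-class attribute. The record structure and field names are our own illustrative assumptions, not the schema of any particular CMDB product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    """A CMDB-style change record that notes which layer was touched."""
    config_item: str   # the configuration item the change applies to
    layer: str         # "hardware" or "software-defined"
    description: str
    changed_by: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

records = [
    # What a hardware-centric CMDB typically records:
    ChangeRecord("fw-edge-01", "hardware", "Replaced firewall appliance", "ops.jsmith"),
    # What it typically misses: a port opened as part of a software upgrade
    # is equally a change to the environment's effective configuration.
    ChangeRecord("fw-edge-01", "software-defined",
                 "Opened TCP 8443 for application upgrade", "deploy.pipeline"),
]

def changes_for(ci: str) -> list[ChangeRecord]:
    """Impact analysis should see both layers for a configuration item."""
    return [r for r in records if r.config_item == ci]

for rec in changes_for("fw-edge-01"):
    print(rec.layer, "-", rec.description)
```

Until software-defined changes are captured as entries of this kind, impact analysis will only ever see half of the picture.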

This is a gap that we believe vendors urgently need to address, especially as software defined networks and environments in general look to be the next big datacentre change.

A lack of best practices is hindering a move to Continuous Delivery

There is very little in the IT world that is really new. The vast majority of 'new' technologies, techniques, processes and tools are really just improved versions of things that have gone before. Yet, despite the money spent on IT solutions and systems, it was evident from our interviews that too many organizations were struggling to find best practice advice.

Interviewees pointed at vendors as not doing enough to show how to bring tools together, how to create more stable processes and how to create effective Continuous Delivery (CD) processes. This was a surprise because, not only are there plenty of examples of CD across the industry, but the vast majority of large enterprises already have working models.

For example, deployment of patches and new software to endpoint devices is an automated process: if an endpoint device is detected as not having been used for a period of time it will receive an update, and it will then be restarted to ensure that all patches are properly installed. Any device that is not patched may find itself unable to connect to certain network resources until this is resolved.
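The logic of that endpoint process reduces to a few automatable checks. The following sketch is illustrative only; the idle threshold and patch levels are invented for the example:

```python
from dataclasses import dataclass

REQUIRED_PATCH_LEVEL = 42    # assumed compliance baseline for this sketch
IDLE_THRESHOLD_MINUTES = 30  # inactivity window before an update is applied

@dataclass
class Endpoint:
    name: str
    idle_minutes: int
    patch_level: int

def maintenance_cycle(device: Endpoint) -> str:
    """One pass of the automated endpoint update process described above."""
    if device.patch_level >= REQUIRED_PATCH_LEVEL:
        return "compliant"
    if device.idle_minutes >= IDLE_THRESHOLD_MINUTES:
        device.patch_level = REQUIRED_PATCH_LEVEL  # stand-in for patch + restart
        return "patched and restarted"
    # Unpatched and in use: deny access to certain resources until resolved.
    return "quarantined from network resources"

fleet = [Endpoint("laptop-01", idle_minutes=45, patch_level=40),
         Endpoint("laptop-02", idle_minutes=5, patch_level=40),
         Endpoint("desktop-03", idle_minutes=0, patch_level=42)]

for device in fleet:
    print(device.name, "->", maintenance_cycle(device))
```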

Scaling that process up to the delivery of software to production and the way change management is used inside the datacentre is entirely achievable. However, there was a distinct view that experimentation with how this should be done should be based on vendor examples rather than organizations working it out themselves.

The path to Continuous Delivery and improved DevOps requires commitment

There are many challenges facing organizations in managing their operational processes that make the move towards CD and the implementation of a strategy for Enterprise DevOps difficult. The issues range from rising complexity (application and infrastructure) and lack of sufficient automation for predictable and repeatable processes, to poor process quality, poor discipline and management/organizational inertia. So even though knowledge of ITIL and ITSM is high, the lack of sufficient support for important operational processes and best practices will hinder progress. It is not just best practices as defined by the framework libraries, but proven best practices from vendors that are lacking for many of those interviewed. Without sufficient proof of success, enterprise IT will feel they are taking a risky leap.

Ultimately, an organization’s capacity to support CD and/or deployment is more than an approach for collectively addressing the people, process, technology and tooling perspectives. Nor is it just about the IT organization being able to drive both the tactical and the strategic agenda – an outcome that is likely to occur anyway, since IT will be in a better position as a result of its capacity to deliver quickly and more frequently. What it is about is whether all parts of the business can embrace the reality of what CD truly means, the commitment and support required, and the cultural and organizational mindset changes needed. Those who have experienced implementing Agile development practices, SOA and other process initiatives will be well versed in the challenges this presents.


Ten guide points to establishing an environment geared for Continuous Delivery (CD)

1. A commitment to Business Agility must be comprehensive: Commitment to change across the entire company is one of the most important foundations for creating a valid DevOps strategy that can support CD. That commitment is not just limited to how parts of IT work together, but extends to the whole business and IT relationship. If you cannot do this, then make sure the subset of the business is well-defined and committed through end-to-end Application Lifecycle Management, so that the entire process follows CD.

2. Assessment, and in particular “risk”, must be part of the decision process: Risk is inherent in everything a business does. Therefore any change to the way a business operates should be risk assessed to balance the risk and reward from increased automation. Understanding the risk profile will help to identify those applications and services most appropriate for CD execution.

Before any Continuous Delivery process can be undertaken, it is important that organizations fully understand how their existing development and operations processes work and what it is that they are looking to improve/change/achieve from a continuous process. This is similar to the Business Process Reengineering (BPR) approach that many organizations will be familiar with. The difference is that in this case, the emphasis is not on reengineering a business process, but on how processes are created and implemented across the development and operations teams.
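By way of illustration, risk can be folded into the decision process with even a crude scoring model. The sketch below uses invented one-to-five scales and a weighting of our own choosing; a real assessment would be considerably richer:

```python
from dataclasses import dataclass

@dataclass
class AppProfile:
    name: str
    business_criticality: int  # 1 (low) .. 5 (financially/safety critical)
    process_maturity: int      # 1 (ad hoc) .. 5 (audited and repeatable)
    automation_level: int      # 1 (manual) .. 5 (fully automated)

def cd_readiness(app: AppProfile) -> str:
    """Crude triage: mature, automated, lower-risk applications come first."""
    score = app.process_maturity + app.automation_level - app.business_criticality
    if score >= 6:
        return "good early CD candidate"
    if score >= 3:
        return "candidate once process/automation gaps are closed"
    return "keep staged, manually gated releases for now"

portfolio = [AppProfile("internal mobile app", 2, 4, 4),
             AppProfile("tax calculation engine", 5, 4, 2)]

for app in portfolio:
    print(app.name, "->", cd_readiness(app))
```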

3. Process underpins a strategy for Continuous Delivery: “Without process it is impossible to achieve a high-quality deliverable” was the collective view of all those interviewed. A solid, process-driven foundation is key to the ability to create an environment that can support continuous integration and delivery, thereby raising the bar for more frequent quality releases.

4. A strategy for Continuous Delivery is easier if you have the foundation of ITIL and/or ITSM in place: One of the things we found from interviewing a number of organizations was that any CD strategy must be built on the foundations of ITIL and ITSM, particularly on the operational side. It is not just about service management, but about how one improves all the interconnecting processes, how one gets a level of automation support and then improves that support. The most important element is traceability; in particular, capturing all the information necessary to roll back to a previous state in the event of an error.

Those organizations that had already put ITIL and ITSM strategies in place found it much easier to define their approach to DevOps. Key to this is Change Management and service evaluation: this essential review ensures that delivered changes do not increase operational complexity without the operations team’s buy-in. These libraries of best practice were also key to establishing a Release Management process able to deal with the complex and hybrid environments that many of the organizations operated in.

5. Raise the bar for change and asset management: The underlying complexity of modern IT environments is a key reason why a mature Change Management process is needed. Change Management and an established Configuration Management Database (CMDB) are required to provide Release Management with an understanding of the complexity of the infrastructure and to assess the impact of releases.

For some of those interviewed, Change Management was seen as a key process to drive the entire workflow and task allocation process. Requests for new features, the allocation of developer time, the move from one staged environment to another, approval in the workflow – none of these could occur without a corresponding Change Management request. Some organizations also commented that moving all inter-related changes together through the environments as a single unit ensures a quality test.

6. Get a firm handle on the complexity profile within the IT organization: Complexity is often underestimated, presenting many challenges for those looking to improve the release of software. Establishing the complexity profile within the IT organization, or for a particular application system or service, will help identify the best approach for addressing CD goals.

7. Identify any build and integration bottleneck and consider the wider implications: Few projects are based around the work of a single developer. Most are either done as part of a team or as part of a much more complex distributed development plan. In order to freeze code, execute a build process and then do integration testing, there has to be a point at which a particular branch of code is frozen in time.

The time taken to replicate workspaces to a local copy of the SCM repository and then start the build process can be contained, especially for developers on a single site. It can take a lot longer in a distributed development environment with multiple repositories needing to be synchronised. Any late submissions could result in significant cost and time repercussions. It is therefore essential that the implications of how repositories are managed are considered as part of any move towards CD.

8. Rethink the Release Process: Evaluate current release processes for application services in play or in the pipeline, and establish whether any can be executed in a modular way rather than as a monolithic deployment package.

Many organizations would like to replicate the modular approach to deploying new application features and updates taken by the likes of Facebook and Google, who are able to release quickly. However, they are still stuck with monolithic deployment packages which they find hard to break down. IT teams therefore need to work out how to modularize the release process so that they are only delivering what needs to be delivered, e.g. a module or a function, rather than the whole application or service package.
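The gain is easy to see if the release set is computed from component versions. In the sketch below (module names and versions are invented), only the modules that actually changed need to move through the pipeline:

```python
# Component versions currently in production vs. the new release candidate.
production = {"billing": "2.3.1", "catalog": "1.9.0", "checkout": "4.0.2"}
candidate  = {"billing": "2.3.1", "catalog": "1.10.0", "checkout": "4.0.3"}

def release_set(prod: dict[str, str], new: dict[str, str]) -> dict[str, str]:
    """Only modules whose versions differ need to be tested and deployed."""
    return {module: version for module, version in new.items()
            if prod.get(module) != version}

print(release_set(production, candidate))
# {'catalog': '1.10.0', 'checkout': '4.0.3'} - a far smaller test and
# deployment surface than redeploying the whole application package.
```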

9. Testing needs to be comprehensive and continually updated: Testing software has to be an integral and continuous process. There is already a tapestry of testing strategies in place: unit tests are run as code is checked into repositories to ensure it is fit for use and functionally correct. Integration and QA testing using use cases proves that an application meets the requirements and is stable. Pre-production testing ensures that the code works against the versions of key components inside the production environment. The problem, according to some of those interviewed, is that this testing just isn't aggressive enough. Rather than test software to make sure it meets the use case, the testing should do everything possible to break the software.

This is harder than it sounds. Tests are created based on expected outcomes, and it is hard to build a test for an unknown. The theory is that, given a series of inputs, there is always a single expected outcome. Too often, the use cases do not include real-life inputs, such as a distributed process being down (destructive testing), someone entering a value out of range, or inputs arriving in an unusual sequence that causes the code to produce inconsistent outcomes.
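A short sketch makes the contrast concrete. The function under test is hypothetical; the destructive cases probe out-of-range values and input ordering rather than simply confirming the use case:

```python
import itertools

def apply_discount(total: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(total * (1 - percent / 100), 2)

# A typical use-case test confirms the expected outcome:
assert apply_discount(100.0, 10) == 90.0

# Destructive tests try to break the code instead:
for bad in (-5, 150, float("nan")):
    try:
        apply_discount(100.0, bad)
        raise AssertionError(f"accepted out-of-range input {bad!r}")
    except ValueError:
        pass  # correctly rejected

# The same inputs in any sequence must always produce the same outcomes:
cases = [(100.0, 10), (50.0, 20)]
outcomes = {tuple(sorted(apply_discount(t, p) for t, p in perm))
            for perm in itertools.permutations(cases)}
assert len(outcomes) == 1, "input order changed the outcome"
print("all destructive checks passed")
```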

10. Automation and Governance matter: Good automation, as well as a strong governance model, will be crucial for CD progression and to ensure that forensic investigation, analysis and rollback can be applied in the event of an error occurring. Those who are used to delivering and rolling back ERP applications and systems will be well versed in the governance model, traceability requirements and workflow dynamics needed to support CD.


The Guide in Detail: Defining Continuous Delivery

What do we mean by Continuous Delivery? Does CD mean delivering faster or just being able to continually deliver something, whether that something is an application feature, a configuration patch, or a new business process service? In reality the term “Continuous Delivery” is poorly explained within the industry as a whole.

If we deconstruct the term and look at what those organizations interviewed, and those in the wider market, are looking to achieve through a process of “Continuous Delivery”, we find that it applies very much to the pace and frequency of releasing software or other changes into production. What organizations want is to match the cadence of software delivery, application dynamism and innovation demonstrated by the likes of Amazon, Apple, Salesforce.com, Google and Facebook.

With this in mind we provide the following definition for “Continuous Delivery”:

Fundamentally, “Continuous Delivery” is about improving the cadence for releasing stable working software or other changes into production without increasing risks to the organization, thereby creating an opportunity for more frequent releases and the flexibility to meet rapidly changing needs and demands.

Specific to this definition are the attributes and factors that need to be in place to support, manage and govern the workflow for Continuous Delivery, so that it can be successfully executed. Figure 1 below provides a visual outline of the driving attributes underpinning CD support as well as the main inhibitors.

Establishing the transition from development to production

The process of moving code from development to production has always been contentious. The development team believe that once QA has approved code, operations should be able to deploy quickly. Operations, whose focus is on stability of the production environment, needs to be sure that there are no unexpected integration or behavioural issues before deploying code into production. The result is that in the best managed environments, code moves through several stages before it is finally deployed.

The staging profiles

Most of those interviewed described common practices and workflows for the handover and transition that happens for developed code or other application changes and their deployment into production. This generally sees the implementation of various staging or sandboxed environments that specifically support key phases of the transition process or workflow. Typically, there is a separate environment used by developers for code development or for making application modifications or configuration changes. Unit testing may also be carried out in this environment. There is likely to be a distinct and separate environment used by the testing team for general Quality Assurance and testing functions. There may also be separate environments for integration and regression testing, as well as one for acceptance testing before the final deployment into production.
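The staging profile described above amounts to a simple promotion pipeline. A minimal sketch, with stage names that are representative rather than prescriptive:

```python
# Representative stages; real pipelines vary per organization.
STAGES = ["development", "qa", "integration", "acceptance", "production"]

def promote(artifact: str, current: str) -> str:
    """Move an artifact one stage forward; production is the terminal stage."""
    position = STAGES.index(current)
    if position == len(STAGES) - 1:
        raise ValueError(f"{artifact} is already in production")
    nxt = STAGES[position + 1]
    print(f"promoting {artifact}: {current} -> {nxt}")
    return nxt

stage = "development"
while stage != "production":
    # In practice each promotion is gated by that stage's tests and approvals.
    stage = promote("payments-service-1.4.0", stage)
```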

Clarifying Continuous Delivery versus continuous deployment

Within many organizations, and generally within the marketplace, the terms “Continuous Delivery” and “Continuous Deployment” seem to be used interchangeably, with few having a clear handle on what, how and where they apply. This confusion between the terms deliver and deploy is mirrored in how people think of the process of releasing software.

When people use the term “release”, two meanings come to mind. One is deploying a software application or system (e.g. a database or an operating system) into production. The other form of release is moving code that has been reconciled through the release cycle. To many people release and deploy mean the same thing, but to others the release process is more about moving code through the various release staging cycles, with deployment being reserved for releasing a bundled package into production. The deployment of a solution may not actually be code that is deployed, but a patch, a configuration change, a fix, or even the deployment of a physical machine associated with that application service.


Ultimately, it does not matter which term is used – deliver, deploy or release – for moving application code and changes through the various transition stages or into production. It is the cadence of the progression, and the frequency with which it can be executed stably and confidently, that organizations are looking for. Improving the overall process will enable a level of productivity to be achieved that could potentially open up capacity for other options to be considered.

However, to clarify the terms more succinctly, it is our opinion that “Continuous Deployment” should be more about the movement (i.e. the deployment) of code and application changes between the transition stages. “Continuous Delivery”, on the other hand, should be used when that application with all its components is delivered into production.

An environment geared for Continuous Delivery and deployment

There will of course be various factors that will have an impact on an organization’s ability to release application code or modifications to a staging environment or into production more quickly and frequently. Understanding what is already in place and what needs to be in place will be crucial for improving the cadence and frequency of release across all staging environments before final deployment into production. Figure 1 provides the foundations that we believe are important.

Figure 1: Drivers and Inhibitors impacting Continuous Delivery and Deployment

Source: Creative Intellect Consulting

As the diagram depicts, all of the above factors contribute – positively and negatively – to supporting an environment for Continuous Delivery. The diagram also poses the question as to how CD can work in a “Change the Org/Run the Org” dichotomy that pervades most large institutions. Ultimately, for true success it needs end-to-end design and agile implementation.


In the following pages we address in more detail the specific attributes that can drive, support and help govern an environment geared for Continuous Delivery. We also outline the inhibitors in more detail.

Realizing the cadence for Continuous Delivery

It would be trite to suggest that all organizations can move at the same speed when it comes to the development and deployment of applications. For some organizations, risk is the most important factor in determining the time and strategy for releasing software. For others, it may be about a need to be first to market to gain a competitive edge or to provide new features to an ever demanding audience base. In some cases the ability to release more often can be hampered by the length of the supply chain. Short supply chains can generally be more highly automated and predictable, versus outsourced or complex third party integrations.

In Figure 2 below, we show how process quality and the level of automation support combine to indicate the pace of release, the potential for risk and the likelihood of stability. All are factors that we believe determine an environment or workflow geared for frequent and predictable deployment – criteria we believe are synonymous with the ideals of Continuous Delivery or deployment. It also highlights the prospects for adaptability and being able to readily address changing needs. Ultimately, the diagram below depicts, at a high level, the foundations for shaping a CD strategy and achieving stable and frequent deployment.

That said, our diagram is not intended to be one-size-fits-all for enterprise IT departments. An organization may actually have all four quadrants represented within its IT release processes as a result of different support levels for different application environments. Each application or system, development or operations process could be at a different point on this graph. Some will be so sensitive that they will stay to the left, while others will progress quickly to the right. The important thing is to build the right processes, understand the benefits and risks of automation and match all of those to business needs and goals. As one of those interviewed confirms: “We have to do what fits us – and if it does not add value – to stop doing it”.

Figure 2. A Quadrant of Release Strategies

Source: Creative Intellect Consulting


Explaining the axes:

Process quality is a combination of several elements. These include how well a process is followed and tracked for improvements, the depth and completeness of traceability audits and the ability to rollback a process in the event of an error or an exception.

Automation ranges from a purely manual process to a highly automated, machine driven process.

Achieving a high level of process quality and automation is very dependent upon the cultural and behavioural limits of the organization. High quality processes, high levels of automation and/or the combination of both require organizational Agility. At a time when budgets are tight, it might seem an unnecessary expenditure to try and reach these goals, but this cannot be done "on the cheap".

The quadrants in detail:

Release Dependables: Those companies in the top left of the box have highly evolved processes with detailed audit and traceability. However, their release management is predominantly manual with little to no automation support. Whilst the software deployed through these processes tends to be stable and reliable, the opportunities for increasing the frequency or rate of release are limited without greater automation support. From those that we interviewed, this scenario was acceptable for risk critical applications, particularly within organizations that were highly risk averse. Companies that we see in this space would typically be those delivering complex financial or safety critical application services.

The IT organization for a European tax and revenue body is unlikely to employ either a one-click Continuous Delivery progression from development into deployment for applications that must address complex tax laws, or high levels of automation with minimal human touch points.

In such cases, the risk of an error in the deployed solution could have adverse consequences with seriously detrimental financial implications.

Release Masters: Top right are the companies that have highly developed process quality and a sizeable amount of automation support. These companies will have invested significant time, effort and money into building an environment where risk is fully understood and is effectively balanced against the needs for continuous or frequent delivery. They stand to gain substantial benefits from their ability to meet the needs of the business without risking the collapse of the IT environment.

Achieving the top right position is an aspiration rather than a realistic achievement. We see various points here where different types of companies are currently positioned. For example, Application Platform as a Service (PaaS) providers will have mature, quality processes and will be good adopters of automation. Using self-service portals, users will be able to select software images that are then deployed automatically. Patches occur without users knowing about it and any need to revoke an image will also be automated with no user impact.

Another example is the internal IT department responsible for endpoint software and devices. They will have adopted automated processes to deliver tested and approved patches on a predetermined schedule, often weekly, to users. The processes that control and audit the delivery of patches will also be mature enough to track machines that have not connected to be patched with the ability to quarantine any computer until it is up to date.

Release Chancers: This is reserved for the bottom right quadrant. These companies are willing to proceed with little to no process quality in place, but are desperate to use the latest version of everything. As such, they have automated the entire deployment process without proper consideration for audit, traceability, risk assessment or problem resolution. It is highly likely that software could be sent direct from the development team to the production environment with little to no form of staged environment used. This could apply to situations where the business unit develops a local productivity solution using platforms that are not sanctioned for the production environment (e.g. an Access database), but which have a link into production systems for data or transaction output. It can also apply to an application distribution environment (e.g. an unregulated App Store) which may have highly automated deployment facilities, but where there are no testing or quality processes between development and the application being released.

Release Failures: The worst position to be in and a quadrant that needs little explanation, aside from the fact that the processes employed are likely to be ad hoc, poorly governed and predominantly manual. A level of stability and repeatability will be difficult to achieve, making CD progression into production simply impossible.

Continuous Delivery attributes in focus:

Process Quality considerations – foundations for Continuous Delivery

Process quality is a complex measurement of how a company validates, approves, applies and regulates its processes. Good process quality, however, is more than measuring how or whether a process executes correctly; it is also about addressing the completeness of the process through the traceability of actions, tasks and events. Identifying change, along with the ability to record the state of an environment, is vital not only for any problem resolution process; it also enables changes to be replicated and systems to be rolled back in the event of a problem.

Having strong process quality in place is a goal to aim for; a state where any process is as complete and as understood as possible. While improving process quality is important, it is crucial that one does not become so focused on the actual process that the surrounding elements of the process are ignored. Achieving process quality is a continuous approach, not a one-off review of what a process does.

Process quality in the context of releasing software into production is a combination of several elements – namely having in place strong process foundations. These include the following:

• Support for best practice operational processes, such as those identified by the ITIL/ITSM frameworks. This means having in place good policies, workflows and tooling support for handling release management and automation, change management processes, asset and configuration management, issue and defect tracking, help desk support and the like.

• For development it means support for continuous build and integration servers and tooling technologies for the different application platforms (e.g. Java, Mainframe, .NET) that might be in play. This means that developed code is routinely checked into a source code repository and built to see if it passes defined test and deployment requirements. It is also about appropriate support for test automation and other test-driven strategies.

• From an overall development and delivery perspective it means having in place methodologies that support collaborative engagement between operations and development teams from the outset, along with other stakeholders of the software service or product.

The above suggests an ideal environment for disciplined software development, delivery and operations that rarely exists in its entirety in the marketplace. Continuous Integration and Build systems and processes are relatively well established and widely applied within enterprise IT departments. However, many IT organizations continue to struggle to obtain consistency in the processes they use across the variety of software services they deliver and the systems they support. None of this detracts from the point that good governance and process quality is an ongoing goal to strive for.

Process Governance: Reinforcing quality through traceability completeness, replication and rollback

As we highlight above, traceability, replication and rollback support are important factors for process quality.


Replication: A significant challenge for all those interviewed, and an area to which problems with applications deployed into production can often be traced, is the differences between the development, test and production environments. Keeping development, test and production environments in sync can be difficult, especially where multiple concurrent environments are being run to allow development teams to work uninhibited. A major bottleneck can come from the mismatch between the “state” of the systems used within the different staging environments in the progression to deployment. The result is a case for more stringent control, to ensure that changes and updates are applied to all staging platforms and so reduce the risk of failure or instability once an application package is deployed into production. Among those interviewed, a variety of strategies were in place to tackle the mismatch, with varying degrees of success.

One organization interviewed used a pointer system to ensure the fidelity of the application package to be tested and deployed. When a component has been tested and moved to the next staging environment in the chain, a pointer is left to where the relevant code can be found. This continues all the way to production. It eliminates the need to keep synchronizing code into multiple locations and ensures that every environment is able to see the most current and active version of any piece of software. It allows more time and resources to be spent maintaining close alignment between the production environment and the preceding staging ones.
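In effect this is promotion by reference rather than by copy. A minimal sketch of the idea, with invented artifact names and content addresses:

```python
# A single artifact store holds each built version exactly once.
artifact_store = {
    "payments-1.4.0": "sha256:ab12...",  # content addresses are illustrative
    "payments-1.4.1": "sha256:cd34...",
}

# Each environment holds only a pointer to the version it has signed off.
pointers = {"development": "payments-1.4.1",
            "qa": "payments-1.4.0",
            "production": "payments-1.4.0"}

def promote_pointer(env_from: str, env_to: str) -> None:
    """Promotion moves the pointer; every stage sees the same stored bytes."""
    pointers[env_to] = pointers[env_from]

promote_pointer("development", "qa")  # QA now points at payments-1.4.1
print(pointers)
```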

In fact, the ability to capture delta changes of state in the production environment and replicate them quickly back to all the different staged or sandboxed environments used by the development and testing teams was highly regarded. Even though it was not always implemented well in practice, it was found to be crucial in ensuring confidence in the stability of applications once deployed into production.

Traceability completeness: Being able to undo a change was just as important as making a change, according to several of those interviewed.

In 2012, three European financial institutions (Royal Bank of Scotland (RBS), NatWest and Ulster Bank) experienced highly public failures during the application of a software update. It later transpired that some data files were corrupted as part of the update process. Because there was insufficient traceability of all actions taken and of all touch points, rolling back the update did not replace the corrupted transaction files. It took months to actually track down and resolve the problem.

Good traceability and audit would have identified every touch point made during the update process. This would have identified the potentially corrupted files which, as part of the rollback, would then have been replaced with copies taken before the update was applied.

Traceability completeness is a significant process quality attribute, and a major requirement if greater automation is desired. The more automated a process, the more critical it is that there is a full audit trail of what has been changed, when it was changed and by whom, and who authorised that change. Even in a heavily automated system, there is still a need for authorisation processes as checks and balances.
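At its simplest, such an audit trail is an append-only record of what changed, when, by whom and under whose authority. A minimal sketch, with hypothetical field names and identifiers:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def record_change(item: str, action: str, actor: str, authorised_by: str) -> None:
    """Append-only entry: what was touched, when, by whom, and who approved."""
    AUDIT_LOG.append({
        "item": item,
        "action": action,
        "actor": actor,
        "authorised_by": authorised_by,
        "at": datetime.now(timezone.utc).isoformat(),
    })

record_change("txn-archive.dat", "rewritten by update step 7",
              actor="release-pipeline", authorised_by="change-request CAB-1042")

# A complete trail lets a rollback enumerate every touch point it must undo.
print(json.dumps(AUDIT_LOG, indent=2))
```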

Rollback: Being able to rollback a process when something goes wrong provides a deep level of trust in the way an organization operates. In the case of deployment, it alleviates concerns that change is being applied too quickly to the IT environment.

Continuous Delivery inhibitors

Complexity: For many IT organizations the ability to deliver fast enough for the business is hampered, not so much by an inability to deliver the necessary application code, but by the complexity of many application services and the infrastructure environments that are typically in place.

Those organizations interviewed in our study all revealed infrastructures sporting a mix of hybrid platforms ranging from mainframe through to distributed computing environments supporting multiple programming models (Java, .NET) and multiple application systems. The same was true for the mix of tools used to provide continuous integration, build and test support (e.g. Maven, Cruise Control, Ant, CoolGen etc.).

There is a common challenge for organizations with myriad release processes and tools for different platforms and a multitude of staging environments. An application service or a business process may be a combination of components based on different platform technologies. How then does one align and manage the different workflows to deliver a stable and reliable release across multiple participating technologies and platform systems? How can one raise the speed and frequency of deployment in such a complex and hybrid environment, thereby enabling support for CD progression?

There are also challenges with the availability of resources, especially as many projects are executed in parallel.

Ultimately, complexity makes approaching CD a challenge. The complexity of hybrid environments and application services, of processes and workflows and the mix of tooling platforms in place, all create inhibitors to the ideals promoting CD.

Culture and mindset: Another significant inhibitor is the mindset and practice for releasing applications or services. Too many organizations are still executing processes where they deliver everything as a monolithic solution or project made up of complex deployment packages, rather than as a simple feature or module that can be quickly tested and then released – the direction in which the industry in general is evolving. Tooling has also evolved to keep in step with this change.

Getting organizations in industries such as healthcare and insurance to understand that they need to break up their complex packages into smaller modules, so that they can execute their regression and functional tests more quickly, can be a hard concept to get across. Few want to decommission applications with weighty release processes, especially if the application or service delivers significantly large revenue returns despite its maintenance costs. It is a technical debt issue that few have the appetite to pay off.

Commitment with the right mindset is vital. It is more than having the tools, people and processes in place. It is whether an organization is able to embrace what is required to shift to supporting CD execution.

Risk: A crucial tolerance factor

It doesn’t matter whether you decide to automate directly from the development space, or whether you have a distinct stop, validate and approve step before progressing: risk assessment needs to be an important focus. All of those interviewed felt that risk was not always prioritised enough within organizations. Fundamentally, it is not just about how one gets an application solution or service from development to the end user in one process. It needs to be more about identifying the application systems or services for which a CD strategy would be the most appropriate execution process. Understanding the risk profile and tolerance, along with a strong knowledge of the underlying processes and workflows involved, will be vital for identifying where and how a CD approach can be applied.

Ignoring risk raises the technical debt charge

As well as the associated business impact of poor risk assessment, risk and weak processes add significantly to the IT workload in the form of technical debt. This causes operations to spend inordinate amounts of time trying to keep systems running, managing disaster recovery processes and fielding calls from users. Development is then bombarded with the need to carry out urgent fixes which impact on the delivery schedules agreed with the business. When people question the effectiveness of IT for the business, it is all too often due to excessive technical debt weighing on IT processes.

Finding the right approach to “Automation”

Not every process can be automated and for some system environments or application services not every process should be automated.

Automation is a term that is regularly used and abused in IT, but one that is very simple to define. Automating processes is about improving the cadence of delivery. Automation within IT operations might relate to automated deployment of patches, a report system or the testing of software. Inside any large organization, at least two out of these three examples will be happening on a regular basis.


Getting automation right requires a simple binary approach to a task. Did it succeed: yes or no? If yes, move forward; if no, stop and find out why. It really is that simple, but often looks anything but.

One of the challenges of automation is that processes are rarely designed with a binary response. This means that in order to automate a process, there is a need to understand it fully. You need to know what to automate before it can be automated. This means analysing what a process does and reducing it to a series of binary steps that can be combined into a larger process.

Any process can go wrong; therefore, it is important that any automated process can be rolled back. This is well understood in transactional terms, but can often be difficult to achieve when workflows are automated. What makes this particularly challenging is that, out in the marketplace, many automated processes are complex. Instead of consisting of several stages, each of which is a separate process in its own right, an attempt is made to automate everything in a single process.
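A minimal sketch of the alternative: each step reports plain success or failure and carries its own compensating action, so a failure part-way through is unwound rather than left half-applied. The step names are illustrative:

```python
from typing import Callable

# Each step: (name, do-action returning success/failure, compensating undo).
Step = tuple[str, Callable[[], bool], Callable[[], None]]

def run_pipeline(steps: list[Step]) -> bool:
    """Run binary steps in order; on failure, roll back completed steps."""
    completed: list[Step] = []
    for name, do, undo in steps:
        print(f"running: {name}")
        if do():
            completed.append((name, do, undo))
        else:
            print(f"FAILED: {name} - rolling back")
            for done_name, _, done_undo in reversed(completed):
                print(f"undoing: {done_name}")
                done_undo()
            return False
    return True

steps: list[Step] = [
    ("snapshot database",   lambda: True,  lambda: None),
    ("apply schema change", lambda: True,  lambda: print("restore snapshot")),
    ("deploy new build",    lambda: False, lambda: print("redeploy old build")),
]
run_pipeline(steps)  # the failed third step triggers an orderly unwind
```

Real steps would wrap deployment tooling; the shape of the control flow is the point.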

Risk aversion: When manual trumps automated

A number of risk-averse organizations interviewed saw automating some of their processes for continuous deployment or delivery as being too risky. They rightly worried that a highly automated and complex workflow could lead to a runaway process. This in turn could cause significant damage to the business should it fail and release software that is not fit for production. As a result, there is a tendency to slow everything down and require manual intervention at key transition stages. However, by breaking processes down into binary steps, and proving that they are predictable and reliable, it is possible to introduce automation at different points in the software delivery process. This allows organizations to balance risk and automation by inserting manual checkpoints where needed and to create ever greater levels of automation as trust is established.
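A manual checkpoint fits the same binary model: it is simply a step whose outcome comes from a named approver rather than a script. A minimal sketch:

```python
def manual_checkpoint(description: str) -> bool:
    """A deliberate pause: a human approver answers yes or no."""
    answer = input(f"APPROVE? {description} [y/N]: ")
    return answer.strip().lower() == "y"

# Treated as just another binary step, the gate slots into an otherwise
# automated flow, and can be removed once trust in the surrounding steps
# has been established.
if manual_checkpoint("promote build 1.4.1 to production"):
    print("promoting...")
else:
    print("stopped at manual gate")
```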

It is fair to say that most organizations could probably improve the levels of automation for applications, processes and workflows that they are confident in.

Continuous Delivery Application

Self-service deployment to the Cloud by developers is not for all…nor is it fully ready for now

Self-service deployment to the cloud will depend on the types of applications suitable for the concept of CD progression. Application complexity, the application’s bandwidth requirements (internet or network latency may not be something the application can tolerate) and the risk profile of the application will all have a bearing on its suitability for developer-led deployment. The notion of CD support therefore needs to be thought through carefully.

Realistically, it is unlikely that there will be a straight-through deployment or release from the development workspace. Even in the case of a Facebook-like application, with an organizational structure not hampered by legacy technology or a limiting conservative culture, there is likely to be staged progression for releasing software into the production environment.

All of those interviewed concurred that there will always be a staging process where specific roles or individuals are responsible for the release progression, even for processes that are highly automated. There is a host of reasons for this, the main one being the need to maintain a separation of roles for security and audit purposes. Another is that some applications are too risk sensitive to trust to a fully automated, one-click approach from development to deployment. Both cases expose some of the challenges to increasing the tempo and frequency of releasing software. Therefore, an environment geared toward CD may not be right for every single application or workflow.


Continuous Delivery may not be for every application, but Platform as a Service (PaaS) offers a workable support model

There may be some applications that could be deployed with less interaction from the operations team.

Organizations considering putting in place an internally managed App Store framework similar to Apple’s App Store platform could experience an element of CD that could potentially see applications and updates released directly from the development workspace. This can only happen because the applications developed are built on top of a tightly controlled and managed application platform supporting the App Store, as is the case with Apple’s iOS platform. This is a classic Application Platform model, not unlike Salesforce.com’s application services (e.g. CRM) underpinned by the Force.com Platform as a Service model and its AppExchange App Store.

Whilst Platform-as-a-Service solutions support an opportunity for CD, they will still require a series of sandbox environments for development, testing (unit, UAT, regression and any other form of automated testing capability), validation and a release management process. All of this could be carried out within the development team so long as the infrastructure of the PaaS is operationally maintained.

In this model some of the existing release and operational management functions will still be required, but the dynamics of release will change. The PaaS model offers a workable model for CD-based applications and services. In this form it is already being practised within organizations that have employed Salesforce.com services and other PaaS-style offerings. Mobile apps that are not business critical may offer another opportunity for developers to deploy directly to an internal app store.

Put Cloud in the mix, but ensure consistency

Many organizations are adopting cloud computing, albeit in a limited way. In this case, the way applications are managed and deployed needs to be understood by both operations and development. Cloud also presents another challenge, especially if it is being used to host test environments. Developers have become adept at creating cloud instances of the applications in production in order to test their code. If those cloud environments do not mirror production, this can cause real problems. For example, are all the applications in the cloud environment at the same patch level, and do they have the same functional capabilities, as production?

Staging alignment: Setting the stage for automation and release frequency

It is not just about providing developers and QA teams with a production quality environment. For organizations that want to improve the cadence of software delivery through automated test to production processes, any mismatch between test and production environment will lead to failures in production. These failures will immediately highlight where poor choices have been made in automation.

A solution to this is to have the operations team define the test environments, potentially offered as a self-service mechanism, and then be responsible for keeping them synchronised with the production environment. An alternative is to make the operations team fully responsible for creating and managing the test environments. However this is resolved, it is vital that both test and production are synchronised. Otherwise developers and QA teams can be working against old code which may contain deprecated features.
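An automated drift check between the two environments is a natural safeguard here. The sketch below compares component versions (names and versions are invented) and would be a point at which to block promotion:

```python
production = {"openssl": "1.0.2k", "app-server": "9.4.31", "payments-svc": "2.7.0"}
cloud_test = {"openssl": "1.0.2h", "app-server": "9.4.31", "payments-svc": "2.7.0"}

def drift(prod: dict[str, str], test: dict[str, str]) -> dict[str, tuple]:
    """Components whose versions differ between the two environments."""
    components = prod.keys() | test.keys()
    return {c: (prod.get(c), test.get(c))
            for c in components if prod.get(c) != test.get(c)}

mismatches = drift(production, cloud_test)
if mismatches:
    # A real pipeline would fail the promotion and flag the environments.
    print("environments out of sync:", mismatches)
```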

Make time for Audit and Rollback – the cornerstones for increased automation

Just as important to understanding whether a process has been executed as planned is the level of support given to tracking all possible actions, tasks and events associated with the process workflow. This means being able to trace all the touch points. In the case of the change management process, it means being able to identify changes made and by whom, as well as being able to trace which tests were executed.

The audit and rollback capabilities provide a foundation to prove that what is being done will not have harmful effects, and should an exception occur, it is possible to reverse the situation without damage to the business. The completeness of the audit trail will add to the maturity and completeness of any deployment process and will help to build trust. It also removes the 'black box' approach that all too often obfuscates the way applications are deployed. This trust is an essential element in proving to the business that being agile or being able to deploy more frequently does not mean unnecessary risk.

How the process is applied is just as important as the effectiveness of the process. Auditing the impact of a process, and having a means to review that audit (a step that is often overlooked), will quickly highlight any errors or concerns. This same process will provide the necessary information for reversal or rollback.

The rollback process itself needs a test process, particularly if other changes have been applied since the change being rolled back. This means that rollback also has to be built into the test grid.
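As an illustration of these ideas, the following Python sketch models an audit trail in which every change records who made it and which tests ran, and in which a rollback is itself an audited, tested change. The record fields and the run_tests hook are assumptions made for the example, not a description of any particular product.

```python
# Illustrative sketch: an audit trail where deploys and rollbacks are both
# recorded, and where rollback is tested like any other change.
import datetime

audit_log = []

def record(action: str, change_id: str, user: str, tests_run: list[str]):
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,          # "deploy" or "rollback"
        "change_id": change_id,    # which change, traceable to its author
        "user": user,
        "tests_run": tests_run,    # traceability: which tests covered this change
    })

def rollback(change_id: str, user: str, run_tests) -> None:
    """Reverse a change; the rollback is tested and audited like a deploy."""
    tests = run_tests(change_id)   # rollback is part of the test grid
    record("rollback", change_id, user, tests)

rollback("CHG-1042", "ops.admin", run_tests=lambda c: ["smoke", "regression-core"])
print(audit_log[-1])
```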

Recognize the driving criteria for release automation

Increasing the level of automation is possible when the workflow is predictable and well understood, because it then becomes apparent where automation can be effectively applied. The more stable and predictable the release process, the more of it, and of the associated interactions between development and operations teams, can be automated.

So if the goal is to improve the level of automation within the deployment process, thereby raising the cadence for release, stability, predictability and trust become driving criteria.

For a risk-critical application there is a limit to how far automation can be increased, since there is only so much cadence that such an application can support. Risk, and the confidence an organization has in its processes, therefore also help to determine the automation tolerance.

A case for a strong Tooling portfolio, but an even stronger one for interoperability

It is hard to see how process quality can be addressed, or the frequency of release and deployment improved, without the support of good tools in place. Indeed, many of those interviewed argued that they were hindered in their efforts to improve their processes by the lack of appropriate tooling support.

In practice, there are any number of well established and widely adopted tools that provide good support for tasks such as continuous integration across a variety of application platforms (e.g. CruiseControl, Maven, Ant). The value of good tooling support is not in question: crucially, it enables a level of automation to be achieved whilst providing governance and compliance controls, so that tasks happen in the right way at the right time and all interactions and events are tracked.

However, getting the right tool can be challenging. The tooling landscape is broad and varied, and finding the right tool support for the different IT teams can be a process of trial and error. An important consideration for those IT organizations faced with a mix of system environments and application models is the strength of a tool’s interoperability framework. That support determines how seamlessly data, information and processes can be exchanged between tools. Ultimately it will have an impact on the change management process and provide the capacity for more effective automation, communication and collaboration.

Interoperability becomes an important requirement for any organization or vendor thinking about CD.


Workflow orchestration and management: a vital and valuable support aid

It is not just about the tooling strategy for CD; it is also about workflow orchestration and management. All of the major vendors in the marketplace have been working hard on integrating and interoperating their tools with tools from other vendors. They do so because they understand that providing end-to-end support is too big a task for all but the very largest vendors with the broadest portfolios to manage and serve. And even then, few organizations have tied themselves to a single vendor; most, as we regularly highlight in this report, operate with a mix of tools, technology systems and processes. In the face of such heterogeneity, the obvious and important requirement is support for the end-to-end workflow and for orchestrating or aligning interacting processes towards a common goal and outcome. What is also critical, from an end user organization’s perspective, is the ability to tune the overall workflow to make it work for them.

Change the basis of testing – a role for “Comprehensive Testing”

What many of those interviewed feared is a lack of robustness in software, suggesting that many applications or services are not capable of dealing with the unexpected. It is very dangerous to believe that, in a well managed environment, unexpected situations should never arise. Reality is different: an unexpected and dirty shutdown of a computer that leaves files open, malware that corrupts data, or system failures that mean some distributed components cannot be accessed. If programs are not properly designed to react to exceptions, there is no way to know how they will behave; they need to react quickly and “gracefully shut down” when the “impossible” actually occurs. Software in production must be engineered to be robust, because it will be used for many years; even a 0.1% error rate, on systems performing 10,000 to 10 million transactions per day, means 10 to 10,000 errors will occur per day.

As the Configuration Manager of one of the financial organizations interviewed pointed out:

“Software is not the same as creating a widget. Software is much more like creating a factory that creates widgets. If there is a blemish on the widget, we really do not care too much, but with automation, that blemish will be repeated over and over; that is why software is so powerful and so dangerous. We would be wise to take lessons from the mistakes we made in manufacturing which led to our modern manufacturing practices.”

Hence, if we do not test effectively, any error will be propagated and amplified once automation is applied; a seemingly minor problem could eventually cause a major systems failure. One of the keys to more comprehensive testing is taking lessons from the operations teams. At present few, if any, organizations take their help desk logs and build tests from them. Yet such operational failures are real-world test conditions, which should be fed back to developers as new categories of tests to improve build quality, and into QA to widen the scope of existing tests. Some organizations questioned the effectiveness of automated tests written by the same developers who are building the code. This is especially questionable in organizations that use automated testing as their dominant testing method.

Getting that information from help desk to test is one of the possibilities that a DevOps process opens up. By increasing the sharing of information across the IT department, QA can take responsibility for creating a set of tests that is not only more aggressive, but also represents known and experienced failures, as the sketch below illustrates. To increase the comprehensiveness of testing, there is also a need for security teams to look at known problems and create tests for internal teams. This will harden security and increase the robustness of testing.
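The following Python sketch illustrates one way help desk incidents could be turned into candidate test cases. The incident records, categories and naming scheme are invented for the example; real data would come from whatever service desk tool an organization runs.

```python
# Hedged sketch: de-duplicate service desk incidents and emit one test-case
# stub per unique failure, so known failures widen the test suite.
incidents = [
    {"id": "INC-2201", "category": "timeout", "detail": "payment API timed out under load"},
    {"id": "INC-2202", "category": "data",    "detail": "report job failed on malformed CSV"},
    {"id": "INC-2201", "category": "timeout", "detail": "payment API timed out under load"},  # duplicate
]

def tests_from_incidents(incidents: list[dict]) -> list[str]:
    """Turn each unique incident into a test-case stub for developers and QA."""
    seen, stubs = set(), []
    for inc in incidents:
        if inc["id"] in seen:
            continue
        seen.add(inc["id"])
        name = inc["id"].lower().replace("-", "_")
        stubs.append(f"test_{inc['category']}_{name}: reproduce '{inc['detail']}'")
    return stubs

for stub in tests_from_incidents(incidents):
    print(stub)
```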

Backdoor agility: Not all Change needs to be driven by Code Changes

Well-designed services do not always need to be changed via code. Even if there are challenges getting significant agility improvements into the development or deployment processes, an IT organization should look at building in capabilities for responding quickly to the business through “back office” control data changes. The use of management interfaces to change service levels is one example of a non-code-based service change. There will be many other examples of “control data” currently mixed into code-level changes that could similarly be exposed in this way. This level of “back office” agility can be handled by other stakeholders, such as business units and non-operations and development teams. For some organizations this has been their secret weapon; for others it is an untapped capability.
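As a minimal illustration of the control-data idea, the Python sketch below reads service behaviour from data rather than code; the keys and values shown are hypothetical.

```python
# Sketch of "back office" agility: service behaviour driven by control data.
# Changing the data changes the service; no build or redeploy is required.
import json

CONTROL_DATA = json.loads("""
{
  "service_level": "gold",
  "max_concurrent_sessions": 500,
  "maintenance_banner": "off"
}
""")

def session_limit(control: dict) -> int:
    # A business unit can raise or lower this limit by editing control data
    # through a management interface, without touching application code.
    return control["max_concurrent_sessions"]

print(session_limit(CONTROL_DATA))  # 500 today; tomorrow's value is a data change
```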


Enterprise DevOps: A focus for IT agility and adaptability

Look deeply at parts of the enterprise and you will see major changes to processes and working practices.

The cadence of software development and delivery inside enterprise IT has improved significantly over recent years. Much of that has been due to better processes, adoption of methodologies such as Agile, improvements in tooling and the introduction of continuous and automated testing. Yet for many organizations, there has been little improvement in the speed and frequency of software released into production. This is because the changes in development have not truly been matched by changes in operations and how applications are deployed.

It would be wrong to blame this on an unwillingness of operations to make things happen faster. As in development, operations’ ability to manage software has increased. The issue, however, is one of focus. For operations, virtualization and cloud computing have increased the burden on their resources, often leaving departments overstretched. The need to keep on top of an ever increasing number of application patches from software partners, especially those that concern security, has also led to increased workloads. Securing the IT infrastructure and the massive increase in data have had their part to play as well. Bring Your Own Device (BYOD) has brought a major software challenge to IT departments, owing to the number of operating systems that now have to be supported.

Despite all this, there are bright spots. Many of the traditional IT silos are on the way out. In some organizations, IT operations staff are increasingly active participants in Agile development teams. At the same time, there is growing evidence that Help Desk and system management services are feeding information on operational failures to test teams and developers, in order to improve quality and lower downtime. Tools in particular have become better at addressing the gaps in communication between development and operations. Many now offer a level of interoperability and integration support that allows relevant and associated data to be displayed and modified wherever it is appropriate in the development and operations workflow, regardless of the tool employed.

This move from basic collaboration to active engagement between development, QA and the operations teams is critical to the establishment of Enterprise DevOps. It provides a foundation on which the organization can begin to identify processes that can be automated and to implement new approaches, such as CD.

The need for a two-way flow between Dev and Ops

Part of the information operations provides to the development and QA teams is what actually happens to applications in production. Historically, developers would have seen this information as unimportant: if an application passes all of its functional tests, how it is deployed is not part of the development mandate.

If the speed of development is to be matched by the speed of deployment, developers must understand the environments in which their applications run. For example, distributed applications that rely on multiple components installed across multiple machines need to take account of issues such as latency and security boundaries. Operations, likewise, needs information from developers, such as what dependencies an application has. This enables them to fine-tune the automated virtualisation engines so that components are not moved to inaccessible locations.
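One lightweight way to pass such information across the divide is a dependency manifest. The Python sketch below shows what such a manifest might look like, and how operations tooling could derive placement constraints from it; the component names and constraints are invented for illustration.

```python
# Illustrative dependency manifest a development team might hand to
# operations, so placement engines know which components must stay reachable.
MANIFEST = {
    "application": "order-processing",
    "components": {
        "web-tier":      {"depends_on": ["order-service"], "max_latency_ms": 50},
        "order-service": {"depends_on": ["orders-db"], "security_zone": "internal"},
        "orders-db":     {"depends_on": [], "security_zone": "restricted"},
    },
}

def placement_constraints(manifest: dict) -> list[str]:
    """List the pairs that must remain mutually reachable when components move."""
    rules = []
    for name, spec in manifest["components"].items():
        for dep in spec["depends_on"]:
            rules.append(f"{name} -> {dep}: keep reachable within security boundaries")
    return rules

for rule in placement_constraints(MANIFEST):
    print(rule)
```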

DevOps: Delivering the underlying mechanics for ALM governance

When Application Lifecycle Management (ALM) was introduced, it was seen as a mechanism to create more stable software. The idea that software in development and software in production were two different worlds was finally being addressed. At the same time, ALM made no distinction between classes of software or the platforms on which they were deployed. This was a critical decision because it said all software, not just some software, is important.


The three big things that came out of ALM were a governance platform, a wide category of multi-platform tools and the need for collaboration between development and operations. What ALM did not do was define the mechanics of how to use the tools effectively; instead, it rightly left this to the various development and deployment methodologies. Another failing of many ALM frameworks is that the focus tended to be solely on development and release. Although it promoted the collaboration between development and operations, it did not address how they would work together.

Operations has its own governance layer in the form of ITIL, which, like ALM, promotes the need for collaboration between development and operations in places. As a framework of best practices, ITIL is often used as a foundation on which organizations build their own processes and governance.

To bridge the gap between the two worlds and their governance models there is a need for something different: not another governance model, but an altogether more functional approach. Today, that approach is referred to as DevOps.

Defining DevOps for the enterprise

There is a lot of confusion about DevOps. One of the things we have been watching is the belief that DevOps is some sort of new department and new set of roles inside an organization. Unfortunately, this is the all too common perception of what DevOps requires. Some believe that it is about developers trying to take control of operations, while others see it as the opposite. In truth, it is neither.

DevOps is the removal of artificial barriers between operations and development teams. These barriers are often the result of inflexible processes and silos of ownership. That does not mean there is no need for process, or for people to own the various tasks in development and operations. DevOps is about finding a new working relationship that benefits the entire software process.

DevOps is nothing new, because when you look around the industry you will constantly see examples of it. In a small IT team there is often very little separation between development and operations. Developers are often very aware of the environment they are developing for and work closely with operations to identify any issues with the software. This will include performance, installation scripts and bugs that only appear when in production. Solutions tend to be worked through rather than engineered and then applied to existing builds of the software.

While many small IT organizations have some processes that control the interaction between the teams, most of the interaction is ad hoc and there is a great deal of collaboration. This can lead to challenges when it comes to getting fixes to customers: without proper control processes, bug fixes and updates can also be more ad hoc than controlled.

As an IT environment gets larger and more complex, it becomes increasingly difficult for both developers and operations to understand what each other does. This creates separation between the two teams and the process of delivering software to QA, and from QA to production, becomes more formalized. When problems with installation or failures in production occur, they are recorded and dealt with in accordance with newly laid down processes. There is no longer any ad hoc relationship between the two teams, and solutions are engineered rather than worked through as a team effort.

Overall, this tends to create a slower changing environment, but one where quality and process rule. Solutions can be more readily formalized and a proper update process for customers created.

Enterprise DevOps takes a middle ground between the two positions. It focuses on the flexibility of the small IT team environment, emphasising collaboration and a more dynamic software lifecycle. At the same time, it allows those enterprise level processes that deal with compliance and the need to deliver stable and reliable software to customers, to be in place.


Release Management (RM) - a bedrock process for DevOps

Increasing the cadence of software delivery means closing the gap between software development and deploying software into production. To make this happen requires the streamlining of processes in both development and operations and a common set of processes that both teams can work with.

An effective Release Management process that spans development and operations teams provides a foundation of processes to smooth the deployment of software. Release Management is also a good base on which to build an enterprise DevOps approach.

The Release Management process begins at the same time as development. It gathers information from developers as the software is built, to ensure that the operations team knows what is in the pipeline. Because the process tracks development, operations can see exactly where the software is at any time. Initial development, QA, returned to development, passed QA, into integration testing, passed integration, ready for deployment: these are all phases that software will go through.

The key is the visibility to both development and operations. This allows for any advance planning by operations to be undertaken. It also ensures that, should there be a decision to change the priority of any work, operations know that development priorities have altered. The result is that operations are not surprised by changes to the delivery schedule and the entire process runs smoothly.
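A simple way to give both teams that shared visibility is to treat the phases above as an explicit state machine that both sides read from the same record. The Python sketch below is illustrative only; the transition rules are a simplification invented for the example.

```python
# Sketch of release-phase tracking shared by development and operations.
# Phase names follow those listed above; the transition table is illustrative.
ALLOWED = {
    "initial_development":     {"qa"},
    "qa":                      {"passed_qa", "returned_to_development"},
    "returned_to_development": {"qa"},
    "passed_qa":               {"integration_testing"},
    "integration_testing":     {"passed_integration", "returned_to_development"},
    "passed_integration":      {"ready_for_deployment"},
}

def advance(release: dict, new_phase: str) -> dict:
    """Move a release to a new phase only along a permitted transition."""
    if new_phase not in ALLOWED.get(release["phase"], set()):
        raise ValueError(f"{release['id']}: cannot move {release['phase']} -> {new_phase}")
    release["phase"] = new_phase
    return release  # both teams read the same record, so neither is surprised

release = {"id": "REL-7.3", "phase": "qa"}
print(advance(release, "passed_qa"))
```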

In fairness, Release Management and Automation are widely regarded as vital processes, with appropriate tool support in place, for managing a smooth handover between Development and Operations. In fact, for many of those interviewed, having this in place was considered more easily attainable than a robust test automation strategy with the right tooling support.

Agile support for both development and operations

One of the key methodologies involved in making DevOps a reality is Agile. This is because Agile development teams focus on being multi-skilled and multi-disciplined. As well as developers, they should consist of stakeholders from all the departments that the software touches, such as business units, QA and operations.

Within many organizations IT operations is considered a shared resource, and the reality is that they are not always represented within an Agile Scrum team. Yet their value inside the scrum team is to bring operational reality to the software development team.

Agility also means something different to different stakeholders and their workflow processes. Agility on the development side is about involving the right roles, planning and ensuring the right focus for the fast delivery of working solutions that meet the customer’s requirements and expectations. While there is much to be gained from the organizational aspects of Agile scrum, which have cross-discipline appeal, the agility focus for the operations team works at a different level. Agility for operations comes down to the level of automation built into their processes: the more at ease they are with a high level of automation, the greater their agility. Ultimately, agility for operations is a more abstract concept that works differently from the way Agile scrum might work for the development team.

While it is possible to do DevOps without Agile, the make-up of Agile teams helps to create the collaborative environment that is needed. It also creates the means for all team participants to be aware of the bigger picture for the service or product being delivered. One of the positive outcomes of Agile, for many of those interviewed, was the rise in collaboration between the different IT teams and with the business. It not only resulted in improved communication and cross-organizational engagement, but also in better awareness of the business and the different workflows employed. Mindsets improved too, with individuals thinking ahead to the wider implications a particular direction might have, or looking at how to make value-based improvements.


A cautionary note

Despite clear benefits and proven results, Agile practices may not be to everyone’s taste. Even when sceptics are won over through hands-on experience and demonstrable gains, some individuals are still unable to make the necessary transition. This can be a challenge, especially if it hinders the interactions that need to occur between development and operations teams. Resolving the issue may require drastic and unpalatable intervention that could see team changes and the relocation of individuals into roles with no direct influence.


Serena’s Continuous Delivery and DevOps Foundations

Software has always been about competitive advantage to business, whether it is the accounting system, stock control database or software on a mobile device carried by a field sales team. Simply having the software, however, is no longer enough. Business units want software to reflect current trends. For too long, software updates have lagged months, or even more, behind the need.

One of the biggest problems in accelerating the delivery of new features and software is the speed of development. The arrival of Agile development has brought users and developers closer together. Operations teams are now increasingly getting involved in Agile teams and beginning to bring an understanding of the challenges of speeding up the cadence of software delivery. Although organizations are beginning to increase that cadence, they need the right tools, processes, organizational support and mindset to make it happen.

Serena’s strength in Continuous Delivery and deployment is the strength of its process support. While customers will have tools for the physical design, writing and testing of software (tools that Serena does not offer), it can provide the underlying foundations for a set of continuous processes. What Serena brings to customers is a proven process environment that is integrated with ITIL and ITSM, supports high levels of automation and can help operations teams match the accelerated development of software.

From ALM to CRM (Change and Release Management) with the backing of orchestration and interoperability

From Application Lifecycle Management (ALM) to Serena’s Change and Release Management (CRM), Serena has a long history of tools that provide process support for application development and delivery. There are three groups of tools.

1. IT Front Office suite: The tools here provide a portal between IT and the business. They enable the business to see the status of all applications and application requests, while IT can use the portal to post updates on work in progress and show the business the current state of IT services. It can also be used to help refine the current work schedule, by providing a voting mechanism on which changes are the most important and allowing IT to assess business requests.

2. Orchestrated Apps: This covers all the Serena tools for developers. Some, such as Requirements Manager and Development Manager, are multi-platform, including the mainframe. Agile Planner provides a collaborative space for managing Agile teams. The last, but most important, tool for improving the cadence of delivery is Serena Release Manager. This is an automated solution with its own workflow that integrates with an equivalent tool for operations.

3. Orchestrated Ops: There are two tools in this space, but quantity of tools should not be confused with capability of tools. Service Manager is an automated IT service delivery process linked to the Service Desk. It enables the service desk to see what is happening across the entire operational landscape. Serena Release Manager ensures a smooth, automated handover, from development to operations, offering benefits to both groups.

As well as these three tool suites, Serena has a range of other products. Some of these are function specific, such as Dimensions CM, which is a Software Configuration Management suite. Others are platform specific such as ChangeMan ZMF, which is a multiplatform Change Management and mainframe modernization solution. Recognising that customers have to support multiple platforms, and especially with the increased attention on the mainframe in recent years, Serena has focused on interoperability, orchestration and ease of use across its wider product portfolio.

By taking this approach, Serena has sought to reduce tool complexity for customers, but simplicity has to extend beyond the tools from a single vendor. To meet that need, Serena has its own Web Services solution that enables heterogeneous tool integration across vendors and, unlike many of its competitors, across platforms.


Making strong headway through Release Management

Serena Release Management consists of several elements, as outlined in Figure 3.

Figure 3: Serena’s Release Manager Portfolio

Source: Serena Software

A key element in Serena Release Manager is the Unified Release Calendar (URC). This provides a single place for all changes to be recorded across application development and ITSM. With its integration into Serena Service Manager, the URC acts as a foundation for orchestrating IT management and ensuring that all scheduling is properly coordinated.

Release Control provides a multi-platform workflow for Release Management

Release Control is a series of policy driven workflows. It ensures that each release goes through the appropriate phases, such as QA and integration, while capturing the sign-off approvals to enable it to progress to the next stage.

It provides a method to enforce processes, not just on a global basis, but at an individual release or release class basis. This is particularly powerful, because it means that a risk-based approach can be applied to each release and the workflow tuned accordingly. By default, the data captured is fed into an audit system to ensure compliance with both corporate standards and regulators.
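To illustrate the risk-based idea in general terms, the Python sketch below selects an approval workflow by release class. This is a generic illustration of the concept, not a description of Release Control’s actual implementation; the class names and gates are invented.

```python
# Generic sketch of policy-driven workflow selection by release class.
# Higher-risk classes carry more mandatory gates and sign-offs.
WORKFLOWS = {
    "standard":  ["qa", "integration", "ops_signoff"],
    "high_risk": ["qa", "integration", "security_review", "cab_approval", "ops_signoff"],
    "emergency": ["qa", "cab_approval"],
}

def gates_for(release_class: str) -> list[str]:
    """Return the ordered approval gates a release of this class must pass."""
    return WORKFLOWS[release_class]

for cls in WORKFLOWS:
    print(cls, "->", " -> ".join(gates_for(cls)))
```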

By providing a planning capability, Release Control ensures that any changes are notified to all those involved in the release. This ranges from business units to developers, QA to operations. It provides an easy to access portal whereby the current status of each release can be monitored, including the release schedule to ensure that there are no unplanned changes.

© Creative Intellect Consulting Ltd 2013 Page 26

Release Vault secures everything

Release Vault acts as the secure storage location for all data related to a release. This ranges from the approval signatures and audit data that come from Release Control, to the code to be deployed.

The latter is important because Release Vault becomes part of the multi-staging mechanism for code: once released from QA, code is moved from the local SCM to Release Vault. In doing this, Serena has ensured that there can be no untracked or unauthorised changes to code through any mechanism, and that Release Vault becomes the single source of trust.
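To illustrate the single-source-of-trust idea in general terms (a sketch of the concept only, not of Release Vault’s actual mechanism), the Python fragment below fingerprints an artifact as it is staged and verifies it before deployment; the names and structures are invented.

```python
# Conceptual sketch: stage an artifact into a vault with a fingerprint, so
# any later tampering is detectable before deployment.
import hashlib

vault = {}

def stage(release_id: str, artifact: bytes) -> str:
    """Store an artifact and record its fingerprint at the moment of staging."""
    digest = hashlib.sha256(artifact).hexdigest()
    vault[release_id] = {"artifact": artifact, "sha256": digest}
    return digest

def verify(release_id: str) -> bool:
    """Deploy only if the artifact still matches its staged fingerprint."""
    entry = vault[release_id]
    return hashlib.sha256(entry["artifact"]).hexdigest() == entry["sha256"]

stage("REL-7.3", b"compiled build output")
print(verify("REL-7.3"))  # True unless the stored artifact was altered
```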

A further critical feature delivered by Release Vault is automated rollback. This not only ensures that no system is left in an uncertain state after a deployment failure, but also forms part of corporate disaster recovery and business continuity planning.

Release Automation improves the cadence of software delivery

Effective automation means a reliable, predictable process. When applied to Release Management it means that software is deployed in minutes rather than weeks, increasing business agility and reducing workload. Serena believes that an effective automated solution can reduce deployment times by over 90%.

To improve the cadence of software delivery, Release Automation requires a proven and trusted test methodology. This methodology must be constantly reviewed by the operations and development teams to ensure that it meets the standards for reliability of software in production.

As well as improving software delivery, release automation delivers a set of additional benefits such as:

• Reduced costs for deploying applications

• Releasing operations staff to maintain the infrastructure

• Improved reliability of software by the removal of human error

Many wrongly believe that an improved Release Automation process is just about speed of deployment. In fact, over 75% of the operational costs of software relate to deployment, patching and maintenance. Properly implemented, Release Automation can deliver fiscal savings as well as a more reliable and predictable environment.

Orchestrated solutions and interoperability bring processes together

The complexity of the modern application landscape means that few companies support a single platform or technology layer. Instead, they need to be able to deploy to multiple platforms, and this often means using platform-specific solutions.

One of the risks with platform specific solutions is the loss of integrity of the Release Management process. Serena has moved to deal with that in several ways:

1. Serena has platform specific solutions that are integrated at the Release Management level. This means that Release Control can provide coherent and integrated process control and scheduling. It also means that the Release Vault mechanism is not compromised. This is especially important when it comes to cross-platform deployment or the sharing of cross-platform resources, as in the case of distributed system development.

2. Serena web services enable developers to build integrations with non-Serena products. The web services enable the import and export of workflow, schedules, audit data and code. The latter is especially critical, so that Release Vault can hold all code before it is deployed.

3. Operations and development teams own and utilise a wide range of tools from multiple vendors. Serena’s Orchestrated IT approach allows customers to continue using the tools that they are familiar with, and only add those Serena tools they need to complement their existing environment. This eliminates the cost of retraining staff, protects investment in existing tools and reduces the risk of data loss when porting data between systems.

Hitting the sweet spot for customers and partners

DevOps, Continuous Integration, Continuous Delivery and Continuous Deployment are all interconnected issues for enterprise IT departments. Moving towards one means being aware of its impact on the others.

Serena has placed Release Management and Orchestration at the heart of its approach to improving the cadence of software delivery. But this is not just about speed for Serena; it is also about hardening processes, removing barriers and improving quality. The most important barrier to remove is that of development versus operations.

Serena is dealing with this not by imposing a new framework, but by using Release Management to span the divide. It is a tricky thing to do because it is asking the product to have two owners, but Serena has been clever. Drawing on its experience with ITIL and ITSM, Serena has created a set of processes that build on what its customers are already doing. In this way, Serena isn't asking its customers to change what they are doing, but rather to adapt and standardise across the IT landscape. This is much easier to achieve than wholesale change and is something that can be implemented without creating resistance.

By breaking Release Management into three key areas and then realigning its product portfolio into IT Front Office, Orchestrated Development and Orchestrated IT Operations, Serena has laid the foundations for its own product development going forward. The use of web services to enable integration with other products makes orchestration much easier.

This is not just a benefit for Serena, but also its partners, such as Nolio, Emergen, LMC and Metaphor. Partners can now focus on deploying solutions for customers, rather than struggling with integrating products across multiple platforms.

The use of web services to make it possible to write integrations between Serena tools and those from other vendors is welcome. As an industry-standard interface, it takes away the risk of a proprietary integration and interoperability framework. However, Serena has, comparatively speaking, written very few interfaces to third-party tools, leaving the majority of this to its partners, system integrators and customers. It is therefore important that the company is seen to be doing more to make integration easier and simpler.

Other integration and interoperability initiatives are coming to the fore. One example is Open Services for Lifecycle Collaboration (OSLC), which has been around since 2008 and is gaining significant cross-industry and market momentum, especially within end-user organizations. At present, Serena is not part of the OSLC community and its drive to deliver an industry-standard interoperability specification. The company’s experience in delivering a portfolio of tools and services able to sustain a workflow for Continuous Delivery and deployment, and its support for process integration and orchestration, mean it has much to offer the OSLC initiative.

Competitive Landscape

Serena is not the only vendor looking at improving the cadence of software development, delivery and deployment. However, it is one of a relatively select band that is looking across the whole ALM landscape and positioning its tools to be able to provide a continuous process in all three areas. ServiceNow, Microsoft, IBM and ThoughtWorks Studios are positioning their tools to compete with Serena.

Some of the company’s competitors can potentially offer a broader, more rounded scope to their CD strategy, owing to sizeable portfolios of products and services and wide-ranging coverage of different technology platforms. However, simplicity and singularity of focus – Release Management and Automation – is the value proposition that Serena brings to the table.


It would also be a mistake to ignore those vendors who cover only part of the ALM landscape. Jenkins (Continuous Integration), Maven (continuous build), JUnit (unit testing) and Nolio (Release Management) are all strong in their respective areas. What many of them lack, though, is an integration landscape that makes it easy for them to be used as part of a larger suite.

For customers who want best-of-breed tools but also a common process control, Serena remains attractive. Its integration and orchestration approach makes it easy to take competitive tools and deploy them as part of a Serena-managed solution.

End user direction

Competitive advantage in today's business environment means getting the right software to users when they need it. Improving the cadence of software development has been happening for some time now: transferring that cadence to deployment is still considered by many to be fraught with risk.

This is understandable, because the process of automated delivery is not well understood, many organizations do not possess the processes to support it and there is significant concern over risk.

Despite that, it is possible to move towards a heavily automated development to deployment process with minimal risk through the use of quality processes. For those organizations that are already heavy process users, and have well established ITIL and/or ITSM practices in-house, this is not a huge step. For those who have weak processes, the key is in choosing a partner whose solution is not only process strong, but whose processes are built on ITIL and ITSM best practices.

Serena has raised the bar, but work remains to be done

Serena’s orchestration capability and strategy is a smart move, and one that shows that the company recognizes that it is not the tool that matters, but the workflow and the management of that workflow, irrespective of the tools used along the way. The release process is a smart focus point to centre the workflow management process on, as it is a recognizable capability that most want to improve.

There is no question that Serena has done much to raise the bar for CD. By placing web services integration and interoperability at the heart of the workflow process the company has made it easier for customers to adopt and integrate its products into the customer’s environment.

However, there is still work to be done. While Serena does a good job of tracking and recording change in Release Vault and providing a rollback capability, it does not have a tool for application component discovery. In complex systems, tracking new change is a start, but there is a need to discover what is already out there. That is not to say that others are doing a better job.

It is fair to say that the whole software industry has failed in this regard, because discovery has never been seen as important. With multi-platform deployment, including cloud and increasing levels of virtualisation, knowing where something has been deployed is incredibly difficult. The general belief is that all that is important will be captured in the CMDB. Serena has its own CMDB in its Service Manager product, but it still needs to capture more information to fully solve this problem.

Another area where Serena could do more, and one that is core to Serena's entire 'continuous' proposition, is automation support. Customers have spent significant sums on reengineering their processes over the last decade. To deploy effective automation, there is a need to understand those processes and be able to break them into smaller, repeatable, reliable processes. Serena has focused on the importance of repeatable and reliable, but has no tool to help customers visualise their processes and how to break them down.

Despite this criticism, the work that Serena has done over the last three years has transformed its products and made them the benchmark against which to measure the work being done by others, such as IBM and Microsoft.
