
This version has been superseded by another, which you can find listed at: https://devops.paulhammant.com/trunk-correlated-practices-diagram/

Release frequency (left to right): 1 release every 100 days → 1 release every 10 days → 1 release every day → 10 releases every day → 100 releases every day, Google-scale (cadence varies for its many deployable things).

Examples (left to right): a huge number of legacy enterprises → startups wishing to take market from legacy enterprises → modernized, competitive enterprises → Etsy, GitHub → Facebook, Square.

Trunk Based Development © 2014 - 2017, Paul Hammant. v2.7
Let me help you migrate to Trunk-Based Development. Monorepos too, if you want to go that far. https://devops.paulhammant.com

Release model (left to right; branching can be done in conjunction with it):
- Long-lived feature branches: develop on shared branches and merge to mainline/master/trunk after release. Or even more creative branching models. Also: code freezes!
- Branch for release: branches are made late; bugs are fixed on trunk and cherry-picked TO the release branch by a 'meister' (not a regular developer); these branches feel frozen; prod hardening is the goal; branches get deleted after release without a merge back to trunk/master.
- Release from tag, or release from commit hash (the two can be done together): branches may be made retroactively for prod bug fixes, if needs be.
- Release directly from trunk/master: trunk itself (if a small team) OR short-lived feature branches (SLFBs) that last 1-2 days max (per developer) and are integrated into trunk/master when truly done. GitHub Flow is very close. Long-lived branches? Never, under any circumstances, do this for your application or service. "Fix forward": bugs are fixed in trunk, will get released with the next release naturally, and will be mingled with new functionality. No code-freeze period in advance of a release; "always release ready".

Release preparation and readiness (left to right): releases happen only on the day that was originally planned; sign-offs required (maybe with audit trails); uses UAT envs; human coordination/planning likely → one hour's notice is doable → no notice required, a release is happening anyway; no planning/coordination; the software owns the release process.

Source repository organization (left to right): one repo with many trunk/branch roots within it (Svn, Perforce): chaos! → many smaller repos; separately buildable modules; separate OR coordinated releases of those modules; recursive build technologies (Maven, Gradle, Rake, MSBuild); suits micro-service strategies with heterogeneous goals → OR one repo with one trunk; separately buildable modules within it; independent OR lock-step releases of those pieces; and a greater likelihood of directed-graph build technologies (Buck, Bazel) versus recursive ones.

In-house code sharing (left to right): pre-built binary deps checked into the VCS → pre-built versioned binary deps via a binary repo (like Nexus), with x.y.z versioning of deps → sharing at source level via Buck or Bazel, made possible by a monorepo layout, with unversioned 'HEAD'-centric deps and common code ownership too; OR via deliberate service boundaries as an architecture strategy (microservices).

Third-party dependencies (left to right): checked into the VCS branch in question, OR on a writable network share → pulled from a binary repository like Nexus or Artifactory; transparent handling of transitive deps; upgrade tardiness tracked in a dashboard → in the VCS again (Buck, Blaze, etc.), but read-only; if in a monorepo, lockstep upgrades for all the other teams in the monorepo.

Use of flags/toggles (left to right): no flags in code (or very few); the team has bet on branches? → toggles/flags for hiding functionality that is not ready yet; one Continuous Integration pipeline for each reasonable toggle permutation; flags also used for tuning the running production stack while it runs.
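To make the right-hand practice concrete, here is a minimal feature-flag sketch in Python. All names (`FLAGS`, `new_checkout`, etc.) are hypothetical, not from the diagram; the point is only that unfinished work can live in trunk, dark behind a flag, instead of on a branch.

```python
# Hypothetical feature-flag sketch: unfinished functionality ships to trunk
# but stays hidden until its flag is flipped; no long-lived branch needed.

FLAGS = {
    "new_checkout": False,  # not ready yet: hidden in prod
    "search_cache": True,   # tuning flag: flipped on the running stack
}

def is_enabled(name: str) -> bool:
    """Current state of a flag; unknown flags default to off."""
    return FLAGS.get(name, False)

def legacy_checkout(cart):
    return {"total": sum(cart), "engine": "legacy"}

def new_checkout(cart):
    return {"total": sum(cart), "engine": "new"}

def checkout(cart):
    # The incomplete code path is merged but dark until the flag flips.
    if is_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

In a real stack the flag store would be external (a config service or database) so it can be changed at runtime, and CI would exercise each reasonable toggle permutation as the diagram suggests.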

Change that "takes a while" (left to right): on a branch, to merge back to trunk/master/mainline (or not) → "Branch by Abstraction" in trunk (a methodical technique), OR replacement via a micro-service: often a strategic heterogeneous rewrite.
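A sketch of the "Branch by Abstraction" shape in Python, with hypothetical names (`PaymentGateway`, `OldGateway`, `NewGateway`): instead of rewriting on a long-lived branch, you introduce an abstraction in trunk, route callers through it, and swap implementations behind it commit by commit.

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Step 1: the abstraction, committed to trunk first."""
    @abstractmethod
    def charge(self, cents: int) -> str: ...

class OldGateway(PaymentGateway):
    """Step 2: the existing implementation, moved behind the abstraction."""
    def charge(self, cents: int) -> str:
        return f"old:{cents}"

class NewGateway(PaymentGateway):
    """Step 3: the replacement, built incrementally in trunk."""
    def charge(self, cents: int) -> str:
        return f"new:{cents}"

def make_gateway(use_new: bool) -> PaymentGateway:
    # Step 4: flip the wiring (often via a toggle).
    # Step 5 (later): delete OldGateway and, optionally, the abstraction.
    return NewGateway() if use_new else OldGateway()
```

Every intermediate commit leaves trunk releasable, which is what makes the technique compatible with the "always release ready" column.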

Continuous Integration infrastructure (left to right): red most of the time → basic centralized CI with master and slave nodes, deploying possibly into (limited) named QA/integration envs; batched; a Build Cop role! → triggered per integration on limited infrastructure, most likely using named QA or integration envs (batched or per integration); possible secondary elastic infrastructure for Selenium → triggered per (and pre) integration on elastic/ephemeral infra; uses 'microcosm' environments that humans never see; auto-revert of commits that break the build (not SLFBs, though); the trunk/master is 99.99% green.
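The "auto revert" policy at the right of that row can be sketched as pure decision logic. This is a hypothetical sketch, not any particular CI product's API: given a commit history and the last commit known to be green, the bot reverts everything that landed after it, newest first, to get trunk back to green without waiting for a human Build Cop.

```python
def commits_to_revert(history, last_green):
    """history: list of (sha, build_status) pairs, oldest first.
    Returns the shas that landed after the last green build,
    newest first (the order a bot would apply reverts in)."""
    shas = [sha for sha, _ in history]
    landed_since = shas[shas.index(last_green) + 1:]
    return list(reversed(landed_since))
```

A real bot would then run `git revert` per sha and let CI confirm trunk is green again; the SLFB exception in the diagram means reverts target direct-to-trunk commits, not in-flight short-lived branches.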

QA activities (left to right): QAs in a formal department, with an over-the-fence aspect after development; in denial about the value of 'good' UI automation; automated 'tests' not actually used; QAs perform manual test runs; a separate (non-VCS?) repo, most likely → QAs do thorough release certification; automated functional UI tests, run sequentially from CI; a separate source repo? → QAs do expedited release certification → developers write (or at least don't break) QA automation for their own work, and also do a "desk check" prior to check-in; "eat your own dogfood" usage of the app, if applicable (all staff); QAs develop automated tests co-located with the production source in the trunk, most likely using open-source technologies; thorough yet appropriate functional UI tests that bypass login; parallelization speeds everything up; tests are in the same source repo as the prod code; mocks/stubs and resettable data, all in source control; the 'test pyramid' obeyed; all before a deployment to any pre-prod env.

Shared integration-testing environments (left to right): devs deploy to an "integration env" from their workstation in order to test, pre-commit → as left, but devs have no permission to do this, while CI jobs do, post-commit → no such environment (nor any "for devs, not QA" envs).

Per-developer environments (left to right): nope → the dev workstation can instantiate some pieces, but shares other down-stack services → a microcosm of the prod stack on the workstation, possibly including wire-mocking techniques; can also leverage CI infra during builds.

Pre-prod environments (other than the per-developer ones), i.e. "Infrastructure as Code" (left to right): handmade, slowly; specialist knowledge and approvals required; costly, dreaded; intertwined shared down-stack services that are busy and contested; few / snowflakes → limited? → numerous → quick, complete recreation is a habit (via IaC scripts); ephemeral QA envs are a possibility if not too costly; truly elastic; no more UAT or humans: bots only.

Code review (left to right): no code review; claims to do these effectively quite often do not match reality → some time after integration, in batches → one commit at a time, after integration: most likely teams committing direct to trunk → one at a time, pre-integration (Pull Request), AKA "Continuous Review". Note: almost universally, pair-programming is counted as part (or all) of a review.

DB rollbacks (left to right): case by case (unprepared), regretted → pre-prepared delta scripts for DB rollbacks, and much practice (in case of a regretted release), with DBAs leading the effort → the DB is forwards and backwards compatible by design; no downtime needed for changes.

DB changes (left to right): DBAs doing manual support → delta scripts that need downtime (a deployment window) → delta scripts that can't have downtime (applied during deployment) → changes support multiple versions of the app.
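The "forwards and backwards compatible by design" cell is usually achieved with an expand/contract (additive) schema change. A minimal sketch using Python's `sqlite3`, with a hypothetical `users` table: the new column is additive with a default, so the old app version keeps working against the new schema and no downtime is needed.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('alice')")

# Expand: add the new column additively; old code simply ignores it.
db.execute("ALTER TABLE users ADD COLUMN email TEXT DEFAULT ''")

# The old app version still works unchanged against the new schema:
old_read = db.execute("SELECT id, name FROM users").fetchall()

# The new app version starts using the new column:
db.execute("UPDATE users SET email = 'a@example.com' WHERE name = 'alice'")
new_read = db.execute("SELECT name, email FROM users").fetchall()

# Contract (much later): only after every running app version has moved to
# 'email' do you drop whatever the old code depended on.
```

Because both app versions run happily against either schema state, rollback of the application release needs no corresponding DB rollback, which is exactly the right-hand column of the row above.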

App config per environment (left to right): outside of all management → co-located with the app source → config fully managed, and separate to binaries and app source.

Talent retention (left to right): talent leaves, and recruiters do not see qualified candidates; also: "B players hire C players"* → talent comes and goes equally → talent accumulates, helps "lift the bar", and maybe some become speakers or bloggers; "A players hire A players"*. (* Steve Jobs folklore.)

Developer activities (left to right): many in the dev team change activities the closer they get to a release, including getting very busy with merging; also: cherry-pick merges to release branches, and heroism rewarded → a small percentage of the developers change activities towards a release, increasingly so with proximity, including excessive merging → very little activity change at all, for any developer → no changes at all, for any developer.

Methodology (left to right): claimed Agile, but really Fragile, Watergile or mini-waterfall; or acknowledged Waterfall → Scrum or other Agile → Agile and Continuous Delivery, into QA/UAT only, and to prod less frequently → Kanban or "flow-centric" Agile on a DevOps foundation, with Continuous Deployment into prod. (Remember, there are two different CDs.)

Bots make decisions for humans, and execute without waiting/asking (left to right): when to compile, and package the application if green (the base definition of "the build") → unit test automation (the definition of "build" broadens) → service test automation (the definition of "build" broadens) → functional UI test automation (the definition of "build" broadens): all of that is CI → deployment to UAT, if the build is green → deployment to prod, if the build is green: that is CD.