Branching Model, Use of Flags or Toggles, In-House Code Sharing
Practices Correlated with Trunk-Based Development, v2.8. © 2014-2017, Paul Hammant.
Author's aside: "Let me help you migrate to Trunk-Based Development. Monorepos too, if you want to go that far." — https://devops.paulhammant.com

Release frequency (examples, from least to most frequent):
- One release every 100 days: a huge number of legacy enterprises.
- One release every 10 days: startups wishing to take market from legacy enterprises.
- One release every day: modernized, competitive enterprises.
- Ten releases every day: Etsy, GitHub, Square.
- One hundred releases every day: Facebook, Google (cadence varies across Google's many deployable things).

Branching model:
- Branch for Release: branches are made late; bugs are fixed on trunk and cherry-picked TO the release branch by a "merge meister" (not a regular developer); these branches feel frozen; production hardening is the goal; branches get deleted after release without a merge back to trunk/master.
- Trunk-Based Development: commit directly to trunk (if a small team), OR use short-lived feature branches (SLFBs) that last 1-2 days max (per developer) and are integrated into trunk/master when truly done; GitHub Flow is very close. Release from trunk/master, from a tag, or from a commit hash; release branches may even be made retroactively for production bug fixes, if needs be. "Fix forward": bugs are fixed in trunk, get released naturally with the next release, and are mingled with new functionality.
- Long-lived feature branches (which can be done in conjunction with the release-branch model), developing on shared branches that merge to mainline/master/trunk after release, or an even more creative branching model: never, under any circumstances, do this for your application or service.

Code freezes:
- At the infrequent end, also: code freezes!
- At the frequent end: no code-freeze period in advance of a release; "always release ready".

Release preparation:
- Releases happen only on the day that was originally planned; sign-offs are required.
- Sign-offs (maybe with audit trails); uses UAT environments and readiness checks; human coordination/planning is likely.
- One hour's notice is doable.
- No notice required, it is happening anyway: no planning or coordination; the software owns the release process.

Source repository organization:
- Chaos!
- Many smaller repos (Git, Mercurial) with separately buildable modules: atomic commits across repos are not possible; separate OR coordinated releases of those buildable modules; more likely to use recursive build technologies (Maven, Gradle, Rake, MSBuild); suits micro-service strategies with heterogeneous goals.
- One repo with many trunk/branch roots within it (Svn, Perforce).
- One trunk (monorepo style, allowing atomic commits across the repo) with separately buildable modules; independent OR lock-step releases of those pieces; a greater likelihood of directed-graph build technologies (Buck, Bazel) versus recursive ones; common code ownership too.

In-house code sharing:
- Pre-built binary deps checked in to source control.
- Pre-built, x.y.z-versioned binary deps via a binary repo (like Nexus).
- At source level, unversioned and 'HEAD'-centric, via Buck or Bazel, made possible by a monorepo layout.
- Via deliberate service boundaries as an architecture strategy (microservices).

Third-party dependencies:
- Checked into the VCS (in the branch in question), OR on a writable or read-only network share.
- Pulled from a binary repository like Nexus or Artifactory, with transparent handling of transitive deps and upgrade tardiness visible in a dashboard.
- In the VCS again (Buck, Blaze, etc.) if in a monorepo, with lockstep upgrades for all other teams in the monorepo.

Use of flags or toggles:
- No flags in code (or very few): the team has bet on branches!
- Toggles/flags for hiding functionality that is not ready yet; one Continuous Integration pipeline for each reasonable toggle permutation.
- Flags also used for tuning the running production stack.

Change that "takes a while":
- On a branch, to merge back to trunk/master/mainline (or not).
- "Branch by Abstraction" while running in trunk (a methodical technique).
- Replacement via a micro-service: often a strategic, heterogeneous rewrite.

Continuous Integration infrastructure and strategy:
- Red most of the time, with one CI job per valuable branch :-(
- Basic centralized CI with master and slave nodes, possibly deploying into limited named integration (or QA) environment(s); builds batched, or triggered per integration.
- Triggered per integration into trunk/master, and pre-integration on SLFBs too; uses named QA/integration environments; possible use of service virtualization; a 'Build Cop' role; possible secondary elastic infrastructure for Selenium, used by CI jobs.
- On scaled or elastic/ephemeral infrastructure, most likely using unnamed 'microcosm' environments that humans can't use; trunk/master is always green except for accidents of timing in the integration of commits; auto-revert of commits for broken builds (not on SLFBs, though).

QA activities:
- QA feels separate to development; QA is after development; QA code is not in source control.
- QAs do thorough release certification, performing a mix of manual and automated tests — with expensive technologies?
- QAs do expedited release certification (the dev:QA ratio is a factor too).
- Developers maintain (and occasionally write) QA automation for their own changes, doing the automation work in the same commit as the intended source changes ☞ they also do a "desk check" prior to check-in. 'Eat your own dogfood' usage of the app, if applicable (all staff). QAs lead development of automated tests that are co-located with production source in the trunk, drawing on devs for help as needed, most likely using open-source technologies. Thorough yet appropriate functional UI tests that bypass login; parallelization definitely speeds everything up.

Tests:
- The team is in denial about the value of "automated tests".
- Automated functional UI tests are in the same source repo as the production code.
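The toggle idea described above — unfinished work merged to trunk but hidden behind a flag — can be sketched minimally as follows. All names here (`NEW_CHECKOUT`, `checkout_v1`, `checkout_v2`) are hypothetical illustrations, not anything from the poster:

```python
import os

# Minimal feature-toggle sketch (all names hypothetical): unfinished work
# merges to trunk but stays dark behind a flag, instead of living on a branch.
def new_checkout_enabled() -> bool:
    # The toggle source could equally be a config file or a flag service.
    return os.environ.get("NEW_CHECKOUT", "off") == "on"

def checkout_v1(cart):
    return sum(cart)              # existing behavior, the default

def checkout_v2(cart):
    # Placeholder for the reworked logic being developed behind the toggle.
    return float(sum(cart))

def checkout(cart):
    if new_checkout_enabled():
        return checkout_v2(cart)  # new code path, hidden until ready
    return checkout_v1(cart)
```

In the spirit of "one CI pipeline for each reasonable toggle permutation", a CI setup would exercise `checkout` with the flag both off and on.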
Tests (continued):
- Mocks/stubs and resettable data all used; the 'test pyramid' is obeyed.
- Automated QA UI tests run sequentially from CI and, if slow, are sped up via some parallelization; all run before a deployment to any pre-prod environment.

Shared integration environment(s):
- No such environment.
- Devs deploy to an "integration env" from their workstation in order to test, pre- or post-commit (AKA "for developers, not QA").
- As left, but devs have no permission to do this while CI jobs do; humans can choose a binary to deploy to integration after CI has built that binary.

Per-developer environments:
- Nope.
- Dev workstation can instantiate some pieces, but shares other down-stack services.
- Microcosm of the prod stack on the workstation, possibly including wire-mocking techniques; can also leverage CI infrastructure during builds.

Pre-prod environments (other than the per-developer ones) — Infrastructure as Code:
- None (and no CI) — yes, some companies.
- Few / snowflakes: handmade, slowly; specialist knowledge and approvals required; costly, dreaded; intertwined shared down-stack services that are busy and contested.
- Limited? Numerous.
- Truly elastic: quick, complete recreation is a habit (via IaC scripts); ephemeral QA environments are a possibility if not too costly; no more UAT or humans — bots only.

Code review:
- No code review.
- Some time after integration, in batches.
- One commit at a time, after integration (most likely teams committing directly to trunk).
- One at a time, pre-integration (Pull-Request), AKA "Continuous Review"; pair-programming is counted as part (or all) of a review.
- Note: almost universally, claims to do these effectively quite often do not match reality.

DB rollbacks and changes:
- Case by case (unprepared), with DBAs leading the effort.
- Pre-prepared delta scripts for DB rollbacks, and much practice (in case of a regretted release).
- The DB is forwards and backwards compatible by design.
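The test style described above — automated tests co-located with production code, bypassing the login UI, with resettable data — might look like the following sketch. The app, its `login_as` test seam, and the test are all hypothetical:

```python
# Hypothetical app with a test seam: functional tests live beside the
# production code, skip the login screen, and own their own data so every
# run starts from a clean, resettable state.
class App:
    def __init__(self):
        self.carts = {}           # resettable data: fresh per App instance
        self.session = None

    def login_as(self, user):     # test seam that bypasses the login UI
        self.session = user

    def add_item(self, item):
        self.carts.setdefault(self.session, []).append(item)


def test_cart_roundtrip():
    app = App()                   # quick, complete recreation of state
    app.login_as("qa-bot")
    app.add_item("widget")
    assert app.carts["qa-bot"] == ["widget"]


test_cart_roundtrip()
```

Because each test builds its own `App`, such tests can run sequentially or in parallel without interfering with one another — the parallelization the poster recommends.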
DB changes (continued):
- Delta scripts for DB changes that can't support multiple app versions and so need downtime, with DBAs doing manual support during deployment (has a deployment window).
- No downtime needed for changes (during deployment); the schema supports multiple app versions at once.

App config management:
- Outside of all management.
- App config per environment.
- Co-located with app source.
- Config fully managed and separate to binaries and app source.

Talent retention:
- Talent leaves, and recruiters do not see qualified candidates. Also: 'B players hire C players' *.
- Talent comes and goes equally.
- Talent accumulates, helps 'lift the bar', and maybe some become speakers or bloggers. 'A players hire A players' *.
(* Steve Jobs folklore)

Developer change activities towards a release:
- Many in the dev team change activities the closer they get to a release, including getting very busy with excessive merging. Also: cherry-pick merges around releases; heroism rewarded.
- A small percentage of the developers change activities towards a release.
- Very little activity change at all for any developer with proximity to a release.
- No change in activity at all for any developer with proximity to a release; no drama.

Methodology:
- Claimed Agile, but really: Fragile, Watergile or mini-waterfall.
- Acknowledged Waterfall.
- Scrum or other Agile.
- Agile and Continuous Delivery into QA/UAT only, and to prod less frequently.
- Kanban or 'flow-centric' Agile, with Continuous Deployment into prod, on a DevOps foundation. (Remember, there are two variations of 'CD'.)

Definition of "the build" (dev on their workstation, and CI jobs, as applicable):
- Compile and package (zip etc.).
- Compile, unit tests and package.
- Compile, unit tests, service (integration) tests and package.
- Compile, unit tests, service (integration) tests, functional UI tests and package, with possible use of service virtualization.
- Compile, unit tests, functional UI tests, lint, Findbugs and package.

Bots make decisions for humans, and execute without waiting/asking:
- When to compile.
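The forwards-and-backwards-compatible DB approach above is usually done expand-first, contract-later: add the new column while the old app version still runs, then drop obsolete columns only in a later release. A minimal sketch using SQLite (the table and column names are invented for illustration):

```python
import sqlite3

# Expand/contract sketch: the schema change is purely additive, so old and
# new app versions can run against it simultaneously — no downtime window.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Expand: additive change, safe while the old app version is still running.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

# The old code path works unchanged against the expanded schema:
old_read = conn.execute("SELECT id, name FROM users").fetchone()

# The new code path starts writing the new column:
conn.execute("UPDATE users SET email = 'a@example.com' WHERE name = 'alice'")
new_read = conn.execute(
    "SELECT email FROM users WHERE name = 'alice'").fetchone()

# Contract (dropping obsolete columns) waits for a later release, once no
# deployed version depends on them.
```

The key property is that neither read breaks: the old query never mentions the new column, and the new column tolerates being NULL for rows the old version writes.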
- When to package the application too, if green.
- When to run unit tests.
- When to push packages to the binary repo.
- What minor version numbers to give binaries (effectively unversioned thereafter).
- When to run service-test automation (after packaging/deployment).
- When to run functional UI test automation (after packaging/deployment) — or switch functional/service tests to run BEFORE packaging/deployment.
- When to revert commits for red builds on master/trunk (continuous integration).
- The actual 'integration' into trunk/master.
- Deployment to UAT, if green, and deployment to prod, if green (continuous delivery/deployment).
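The "bots decide" idea above — including auto-reverting red builds on trunk but not on SLFBs — can be sketched as a tiny decision function. The function and its action names are hypothetical, not any real CI system's API:

```python
# Sketch of bot-driven decisions (hypothetical names, no real CI API):
# given a build result, the automation picks the next action itself
# rather than waiting for, or asking, a human.
def next_action(build_status: str, branch: str) -> str:
    if branch != "trunk":
        return "ignore"               # SLFBs are not auto-reverted
    if build_status == "red":
        return "revert-last-commit"   # auto-revert keeps trunk green
    return "promote"                  # package, push, deploy if green
```

A real bot would wire such a decision to CI webhooks and execute the chosen action immediately, which is what lets trunk stay "always green except for accidents of timing".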