Three Effective Strategies to Tackle Bugs

Part 2: Smaller, Safer Changes
You can’t inspect a fourth leg onto a three-legged table.
TL;DR - Smaller, Safer Changes
“Just try harder” isn’t a strategy — it’s a symptom.
In Part 2 of this three-part series, we dig into how big batches and approval drag slowed delivery — and how we made changes smaller, safer, and more frequent without a big-bang rewrite.
You’ll learn how we:
- Migrated legacy in thin slices and cut over behind feature flags.
- Built safety in with fast tests, contract checks at the edges, and a pipeline where green means deployable.
- Drew clear seams so changes stayed local and reversible.
- Changed how we worked with the legacy: make small the default, merge early, test the merged code, keep main releasable.
- Moved from two-week releases to every two days — with far fewer hotfixes.
We didn’t fix feedback by trying harder. We fixed a system that made it hard to succeed.
If you’re scaling fast, wrestling legacy, or firefighting more than you’re shipping, read on and swap release roulette for sustainable fast flow.
This series began as a conference talk I gave at LDX3 and Agile on the Beach — a story of how I helped a team tackle delivery friction head-on in their fast-growing HealthTech platform. What started as a few slides and field notes turned into a set of strategies any team can learn from, whether you’re scaling fast or wrestling legacy.
“Just Try Harder” Isn’t A Strategy!
We’ve all heard it. After a bug slips through, a test fails late, or a release goes sideways: Just try harder next time.
More often, it’s implied — in retrospectives, reviews, and post-mortems that point fingers instead of fixing systems. Developers need to test more. Testers need to catch more. Then back to the developers again. The blame just looped, but the system stayed the same.
Big batches and approval drag don’t just let bugs slip through. They break flow, shake trust, and kill momentum. Teams slowed down — not because they weren’t trying, but because the system was working against them.
“Just try harder” isn’t a strategy.
It’s a symptom — a sign of a system that makes it easy to fail.
So let’s stop blaming people — and start fixing the system. Because when bugs become a way of life, it’s not an effort problem. It’s a batching and coupling problem.
This is the second in a three-part series about how I helped a HealthTech platform shift from firefighting to fast flow — and made software delivery faster, safer, and more predictable under pressure.
- Fast Feedback — make bugs cheaper to catch
- Smaller, Safer Changes — make changes easier to ship
- Controlled Delivery — make production a safe place to learn
I’ll share what worked - no frameworks, no heroics. Just practical, hard-earned patterns from the coalface.
We’ll start with the reality on the ground — what made even the smallest changes feel big.
Lie of the Land
Before we could make safer changes, we had to understand why it all felt risky.
Even minor tweaks could have unexpected ripple effects.
On paper, we had multiple services. In reality, they behaved like a distributed monolith, with tight coupling across repositories and data stores. Releases were coordination-heavy, and even minor tweaks could have unexpected ripple effects. Delivery times stretched, defect lists grew, and support spent too much time firefighting.
The best-case release cadence was two weeks, and we relied on manual hotfixes to production when issues slipped through.
Architecture With No Edges
We’d inherited four repos that operated as a single deployable unit. Two Angular portals, staff and customer, were copy-paste-modify cousins. Two databases contained overlapping data, with occasional cross-database joins for good measure. Inside the code, complexity was through the roof: deeply nested if/else and switch/case chains running to tens of branches, methods hundreds of lines long, and classes that ran to thousands.
Release day felt like roulette.
To ship anything safely, we played release roulette: 7:30am deploys that lined up front-end → back-end → data, hoping nothing drifted.
Legacy on Legacy
Years of in-house ↔ outsourced development left us with customised frameworks that were far from idiomatic. Old and new approaches sat side by side; multiple unfinished v2s. APIs ignored out-of-the-box REST conventions, and naming was… inventive. Every path was a special case, so every change was too.
That made behaviour unpredictable and even simple changes expensive.
Every Change Hurt
Big batches hid risk — and we paid for it later.
A simple initiative like RU99 ballooned from a planned six weeks to six months or more, with bugs still surfacing a year later. Missed timelines became the norm, and trust suffered as a result. Quarterly planning turned into a scramble - if you won a golden ticket for a build slot, you’d throw in everything you could think of - you might not get another chance for years!
In the end, the scramble only meant bigger batches — and bigger surprises.
Safety Theatre & Burnout
After every incident, the checklist got longer, not safer. The support engineer’s role became full-time triage and firefighting. Big-bang releases led to frequent, high-risk hotfixes. When we couldn’t wait for a proper deployment, support applied fixes directly in production!
People were tired, and process bloat couldn’t compensate for architectural reality.
Legacy Migration
Rewrites weren’t realistic, so we carved out the legacy one slice at a time using the strangler fig pattern - we’d expand → migrate → contract.
Replace Piece by Piece
One service at a time, not a programme of rewrites.
After our previous bad experience (RU99), we started with Results. Instead of a year-long v2, we migrated one capability at a time into an independently deployable service. When a slice behaved like-for-like, we cut over behind feature flags and removed the legacy paths.
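To make the cutover mechanics concrete, here's a minimal TypeScript sketch of the seam. The names (ResultsReader, the flag key, the service URL) are illustrative rather than our actual code; the shape is the point: both paths sit behind one interface, and a flag picks between them.

```typescript
// All names here (LabResult, ResultsReader, the flag key, the URL)
// are illustrative, not the team's actual code.
interface LabResult {
  id: string;
  value: string;
}

// One interface for the capability, whichever side serves it.
interface ResultsReader {
  getResults(patientId: string): Promise<LabResult[]>;
}

// Legacy path: the old monolith code, wrapped in the same interface.
class LegacyResultsReader implements ResultsReader {
  async getResults(patientId: string): Promise<LabResult[]> {
    // Stub standing in for the old code path.
    return [{ id: `legacy-${patientId}`, value: 'from the monolith' }];
  }
}

// New path: the independently deployable Results service.
class ResultsServiceReader implements ResultsReader {
  constructor(private baseUrl: string) {}

  async getResults(patientId: string): Promise<LabResult[]> {
    const res = await fetch(`${this.baseUrl}/patients/${patientId}/results`);
    if (!res.ok) throw new Error(`results service returned ${res.status}`);
    return res.json();
  }
}

// The flag picks the implementation at the seam. When the new slice has
// behaved like-for-like, the flag flips; once stable, the legacy class
// and the flag itself get deleted (the "contract" step).
function makeResultsReader(isEnabled: (flag: string) => boolean): ResultsReader {
  return isEnabled('results-service-cutover')
    ? new ResultsServiceReader('https://results.internal.example')
    : new LegacyResultsReader();
}
```

Because callers only ever see the interface, rolling back is a flag flip rather than a redeploy, which is what made cutovers boring.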
Bake in Safety
Green means deployable.
New services shipped with fast unit tests, targeted integration tests, and contract tests at the boundaries. Pipelines enforced quality gates — green means deployable. Ownership moved with the work: you build it, you run it.
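As a rough illustration of a contract check at a boundary, here's a hand-rolled sketch (hypothetical field names, no particular tool implied): the consumer pins down the response shape it relies on, and the provider's pipeline goes red if that shape drifts.

```typescript
// Hand-rolled contract check. Field names and the endpoint are
// illustrative; a real team might reach for a dedicated tool instead.
type FieldType = 'string' | 'number' | 'boolean';
type Contract = Record<string, FieldType>;

// The consumer's expectation: the exact fields the portal relies on.
const resultContract: Contract = {
  id: 'string',
  patientId: 'string',
  value: 'string',
  recordedAt: 'string',
};

// Compare an actual payload against the contract, collecting mismatches.
function contractErrors(payload: unknown, contract: Contract): string[] {
  if (typeof payload !== 'object' || payload === null) {
    return ['payload is not an object'];
  }
  const record = payload as Record<string, unknown>;
  const errors: string[] = [];
  for (const [field, expected] of Object.entries(contract)) {
    const actual = typeof record[field];
    if (actual !== expected) {
      errors.push(`${field}: expected ${expected}, got ${actual}`);
    }
  }
  return errors;
}

// Run in the provider's pipeline against a deployed test instance:
// any drift keeps the build red, which keeps it undeployable.
async function verifyResultsContract(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/patients/123/results`);
  const [first] = await res.json();
  const errors = contractErrors(first, resultContract);
  if (errors.length > 0) {
    throw new Error(`contract broken:\n  ${errors.join('\n  ')}`);
  }
}
```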
Create Boundaries
Create seams so changes don’t leak sideways.
We adopted Domain-Driven Design and Hexagonal Architecture (ports and adapters) to keep domain logic separate from frameworks. Where flows crossed service lines, we preferred event-driven integration and used anti-corruption layers to quarantine legacy behaviour so changes didn’t leak sideways.
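Here's a compact sketch of that idea: a port expressed in domain terms, with an anti-corruption adapter translating the legacy shape at the edge. The legacy row format and all names are invented for illustration.

```typescript
// Domain side: a port expressed in domain terms, free of legacy names.
interface Appointment {
  id: string;
  startsAt: Date;
  clinician: string;
}

interface AppointmentSource {
  upcomingFor(patientId: string): Promise<Appointment[]>;
}

// Legacy side: the shape the old system actually returns,
// inconsistent naming and all.
interface LegacyApptRow {
  APPT_ID: string;
  start_dt: string; // e.g. "2025-03-01 09:30"
  DocName: string;
}

// Anti-corruption layer: one adapter quarantines the legacy quirks,
// so the translation lives here instead of leaking into every caller.
class LegacyAppointmentAdapter implements AppointmentSource {
  constructor(
    private fetchRows: (patientId: string) => Promise<LegacyApptRow[]>,
  ) {}

  async upcomingFor(patientId: string): Promise<Appointment[]> {
    const rows = await this.fetchRows(patientId);
    return rows.map((row) => ({
      id: row.APPT_ID,
      startsAt: new Date(row.start_dt.replace(' ', 'T')),
      clinician: row.DocName,
    }));
  }
}
```

When the legacy shape changes, only the adapter changes; that's the quarantine doing its job.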
Simplify Legacy
We reduced change risk by shrinking the blast radius. We deleted dead code where it was safe, were cautious where it was messy, and focused on refactoring where we were actively developing. Less code meant fewer hiding places for bugs and fewer surprises.
Smaller scope meant safer releases, fewer approvals, and the ability to release on demand.
Living With Legacy
Shipping the new services became straightforward. Shipping the legacy still hurt, so we changed how we worked with it.
Build Small, Merge Often
Long-lived branches were where trouble festered. Conflicts piled up, merges bunched at the end of sprints, and the fix was always bigger than the original change. We cut the work thinner. Small slices, merged early, kept differences local and visible. Instead of nursing a branch for weeks, we got used to landing changes while they were still fresh and easy to reason about.
Keep Moving Forward
Bundling for the Friday release had felt efficient; in practice it concentrated risk and delayed value. We moved towards trunk-based development: integrate to main often and share the integration work across the team rather than leaving it with a release engineer.
The cadence changed the conversation — less orchestration, more steady movement.
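One habit that made merging early practical, sketched below with hypothetical flag names: unfinished work lands on main behind a default-off flag, so integration happens daily while the feature stays dark until it's ready.

```typescript
// Hypothetical flag names. Defaults live in code, so main always builds
// and runs with unfinished paths switched off.
const flagDefaults: Record<string, boolean> = {
  'bulk-result-export': false, // half-built: merged daily, never shown
  'new-booking-flow': true,    // finished: on by default, flag due for removal
};

function isEnabled(
  flag: string,
  overrides: Record<string, boolean> = {},
): boolean {
  return overrides[flag] ?? flagDefaults[flag] ?? false;
}

// The unfinished feature ships inside the artifact but stays dark,
// so the branch that carries it can die young.
function renderExportButton(): string {
  return isEnabled('bulk-result-export')
    ? '<button>Export results</button>'
    : '';
}
```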
Test What You Plan to Ship
Treat every merge to main as a release candidate.
Previously, we tested features on isolated branches and only saw the real picture when everything was bundled together. We flipped it. Every merge to main was treated as a release candidate, and checks ran where services met.
The thing we tested matched the thing we deployed, which cut down on late surprises.
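A minimal sketch of what that gate might look like (the endpoints and names are assumptions, not our pipeline): checks run against the exact build produced by the merge, and any failure keeps it undeployable.

```typescript
// Post-merge gate sketch: every merge to main produces a candidate build,
// and these checks run against that exact build. URLs are illustrative.
async function checkHealth(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/health`);
  if (!res.ok) throw new Error(`health check failed: ${res.status}`);
}

// One check "where services meet": the portal's most important read path.
async function checkResultsEdge(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/patients/123/results`);
  if (!res.ok) throw new Error(`results edge failed: ${res.status}`);
}

// Green means deployable: if any check throws, the candidate never ships.
async function verifyCandidate(baseUrl: string): Promise<void> {
  for (const check of [checkHealth, checkResultsEdge]) {
    await check(baseUrl);
  }
  console.log('release candidate verified');
}
```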
Stay Ready to Release
From every two weeks, to every two days.
Unpredictability came from bundling and hidden conflicts. We kept main releasable and enforced quality gates in a fast, trusted pipeline - green means deployable. Hotfixes became rare, releases became routine, and we moved from every two weeks (maybe) to every two days (if needed).
Reflection
What’s the smallest safe slice we could cut over behind a feature flag, and what would keep main releasable while we do it?
If touching the legacy still feels like defusing a bomb, that’s a system signal — where coupling, batching, and approval drag are doing the talking.
- Where are we batching work because releasing is inconvenient?
- Which approvals add no information that an automated check couldn’t provide?
Takeaways
- Make small the default: smaller changes carry less risk and are easier to reason about.
- Migrate, don’t rewrite: add a new path, move one behaviour, then remove the old.
- Create seams: so a change in one place doesn’t leak into five others.
- Let automation set the bar: when the pipeline is green, it’s deployable.
- Favour trunk-based habits: merge early, test the merged code, keep main releasable.
- Simplify as you go: remove dead code where it’s safe to shrink the surface area.
- Result: fewer hotfixes, more predictable releases, and a shift from deploying every two weeks (maybe) to every two days (if needed).
Want to go deeper?
If you want to explore the thinking behind smaller, safer changes, sustainable delivery, and system-level improvement, here are four excellent reads:
- Continuous Delivery — Jez Humble, David Farley: the playbook for making “green means deployable” real, with small batches, pipelines, and controlled releases.
- Team Topologies — Matthew Skelton, Manuel Pais: how to align teams to seams so changes stay local and flow improves.
- Working Effectively with Legacy Code — Michael Feathers: surgical techniques for carving out safe slices, introducing seams, and shrinking the blast radius.
- The Team That Managed Itself — Christina Wodtke: practical cadence and goal-setting so trunk-based habits stick without ceremony or heroics.
Better software isn’t about trying harder.
We didn’t fix delivery by pushing people harder — we changed the system they worked in. Because when the system improves, everything else starts to shift too:
- Overhead drops
- Batch size shrinks
- Complexity unwinds
- Bugs have nowhere to hide
That’s what these three strategies are about.
- Fast Feedback — make bugs cheaper to catch
- Smaller, Safer Changes — make changes easier to ship
- Controlled Delivery — make production a safe place to learn
So let’s go beyond “just try harder” and build better systems — and better software.
Up Next: Part 3 - Controlled Delivery
Find bugs before your customers do.
How to make production safe to learn: feature flags, dark launches, staged rollouts, and telemetry — so you can ship without fear.