Andy Weir
Software Delivery Consultant & Fractional Technical Leader

From delivery friction and burnout to sustainable fast flow — under pressure, where it matters

I help engineering leaders scaling from one team to four uncover what's really slowing delivery — then lead the shift to sustainable fast flow, from the inside.

Three Effective Strategies to Tackle Bugs

Hand lifting a small blue puzzle piece from a box of puzzle pieces, while green cartoon bugs peek between the pieces.

Part 2: Smaller, Safer Changes

You can’t inspect a fourth leg onto a three-legged table.

TL;DR - Smaller, Safer Changes

“Just try harder” isn’t a strategy — it’s a symptom.

In Part 2 of this three-part series, we dig into how big batches and approval drag slowed delivery — and how we made changes smaller, safer, and more frequent without a big-bang rewrite.

You’ll learn how we:

- Replaced the legacy piece by piece using the strangler fig pattern, one capability at a time, behind feature flags
- Built small and merged often, moving towards trunk-based development
- Treated every merge to main as a release candidate, with quality gates where green means deployable
- Went from a two-week best-case release cadence to releasing every two days, if needed

We didn’t fix delivery by trying harder. We fixed a system that made it hard to succeed.

If you’re scaling fast, wrestling legacy, or firefighting more than you’re shipping, read on and swap release roulette for sustainable fast flow.

This series began as a conference talk I gave at LDX3 and Agile on the Beach — a story of how I helped a team tackle delivery friction head-on in their fast-growing HealthTech platform. What started as a few slides and field notes turned into a set of strategies any team can learn from, whether you’re scaling fast or wrestling legacy.

“Just Try Harder” Isn’t a Strategy!

We’ve all heard it. After a bug slips through, a test fails late, or a release goes sideways: Just try harder next time.

More often, it’s implied in retrospectives, reviews, and post-mortems that point fingers instead of fixing systems. Developers need to test more. Testers need to catch more. Then back to the developers again. The blame just looped, but the system stayed the same.

Big batches and approval drag don’t just let bugs slip through. They break flow, shake trust, and kill momentum. Teams slowed down — not because they weren’t trying, but because the system was working against them.

“Just try harder” isn’t a strategy.
It’s a symptom — a sign of a system that makes it easy to fail.

So let’s stop blaming people — and start fixing the system. Because when bugs become a way of life, it’s not an effort problem. It’s a batching and coupling problem.

This is the second in a three-part series about how I helped a HealthTech platform shift from firefighting to fast flow — and made software delivery faster, safer, and more predictable under pressure.

I’ll share what worked - no frameworks, no heroics. Just practical, hard-earned patterns from the coalface.

We’ll start with the reality on the ground — what made even the smallest changes feel big.


Lie of the Land

Before we could make safer changes, we had to understand why it all felt risky.

Even minor tweaks could have unexpected ripple effects.

On paper, we had multiple services. In reality, they behaved like a distributed monolith, with tight coupling across repositories and data stores. Releases were coordination-heavy, and even minor tweaks could have unexpected ripple effects. Delivery times stretched, defect lists grew, and support spent too much time firefighting.

The best-case release cadence was two weeks, and we relied on manual hotfixes to production when issues slipped through.

Architecture With No Edges

We’d inherited four repos that operated as a single deployable unit. Two Angular portals, staff and customer, were copy-paste-modify cousins. Two databases contained overlapping data, with occasional cross-database joins for good measure. Inside the code, complexity was through the roof: deeply nested if/else and switch/case chains running to tens of branches, methods hundreds of lines long, and classes that ran to thousands.

Release day felt like roulette.

To ship anything safely, we played release roulette: 7:30am deploys that lined up front-end → back-end → data in strict order, hoping nothing drifted.

Legacy on Legacy

Years of in-house ↔ outsourced development left us with customised frameworks that were far from idiomatic. Old and new approaches sat side by side; multiple v2s sat unfinished. APIs ignored out-of-the-box REST conventions, and naming was… inventive. Every path was a special case, so every change was too.

That made behaviour unpredictable and even simple changes expensive.

Every Change Hurt

Big batches hid risk — and we paid for it later.

A simple initiative like RU99 ballooned from a planned six weeks to six months or more, with bugs still surfacing a year later. Missed timelines became the norm, and trust suffered as a result. Quarterly planning turned into a scramble: if you won a golden ticket for a build slot, you’d throw in everything you could think of, because you might not get another chance for years!

In the end, the scramble only meant bigger batches — and bigger surprises.

Safety Theatre & Burnout

After every incident, the checklist got longer, not safer. The support engineer’s role became full-time triage and firefighting. Big-bang releases led to frequent, high-risk hotfixes. When we couldn’t wait for a proper deployment, support applied fixes directly in production!

People were tired, and process bloat couldn’t compensate for architectural reality.

Legacy Migration

Rewrites weren’t realistic, so we carved out the legacy one slice at a time using the strangler fig pattern - we’d expand → migrate → contract.

Replace Piece by Piece

One service at a time, not a programme of rewrites.

After our previous bad experience (RU99), we started with Results. Instead of a year-long v2, we migrated one capability at a time into an independently deployable service. When a slice behaved like-for-like, we cut over behind feature flags and removed the legacy paths.
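The cutover step can be sketched in a few lines. This is an illustrative model only: `FeatureFlags`, `legacy_results`, and `new_results` are hypothetical names, not the platform's real code. The point is the shape, with the new path behind a flag and the legacy path remaining the default until the slice proves like-for-like.

```python
# Hypothetical sketch of a strangler-fig cutover behind a feature flag.
# All names here are illustrative, not the platform's actual services.

class FeatureFlags:
    """Minimal in-memory flag store; a real system would use a flag service."""

    def __init__(self, enabled=None):
        self._enabled = set(enabled or [])

    def is_enabled(self, name: str) -> bool:
        return name in self._enabled


def legacy_results(patient_id: str) -> dict:
    # Stand-in for the old monolith code path.
    return {"source": "legacy", "patient_id": patient_id}


def new_results(patient_id: str) -> dict:
    # Stand-in for the independently deployable Results service.
    return {"source": "results-service", "patient_id": patient_id}


def get_results(flags: FeatureFlags, patient_id: str) -> dict:
    # Migrate one capability at a time: route to the new service only
    # when the flag is on; legacy stays the default until cutover.
    if flags.is_enabled("results-v2"):
        return new_results(patient_id)
    return legacy_results(patient_id)
```

Once the new slice has behaved like-for-like under the flag, the flag and the legacy branch can both be deleted, which is the "contract" step of expand → migrate → contract.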

Bake in Safety

Green means deployable

New services shipped with fast unit tests, targeted integration tests, and contract tests at the boundaries. Pipelines enforced quality gates — green means deployable. Ownership moved with the work: you build it, you run it.
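To make the contract-test idea concrete, here is a toy consumer-driven check, assuming a hypothetical contract and provider (none of these names come from the real system). The contract pins the fields the consumer depends on, so a provider change that breaks the boundary fails the pipeline before anything deploys.

```python
# Illustrative consumer-driven contract check, not the team's actual tests.
# The contract lists the fields (and types) the consumer relies on.

CONSUMER_CONTRACT = {
    "required_fields": {"id": str, "status": str},
}


def provider_response() -> dict:
    # Stand-in for calling the real provider during a pipeline stage.
    return {"id": "r-42", "status": "ready", "extra": "ignored"}


def check_contract(response: dict, contract: dict) -> list:
    """Return a list of violations; an empty list means the contract holds."""
    violations = []
    for field, expected_type in contract["required_fields"].items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(f"wrong type for {field}")
    return violations
```

In a pipeline, a non-empty violations list is a red gate: the build is not deployable, so the breakage surfaces at merge time instead of in production.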

Create Boundaries

Create seams so changes don’t leak sideways.

We adopted Domain-Driven Design and Hexagonal Architecture (ports and adapters) to keep domain logic separate from frameworks. Where flows crossed service lines, we preferred event-driven integration and used anti-corruption layers to quarantine legacy behaviour so changes didn’t leak sideways.
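The port-plus-anti-corruption-layer shape looks roughly like this sketch. The legacy payload (`PatID`, `Val`) and all class names are invented for illustration; the idea is that translation happens once, at the adapter, so legacy quirks never reach domain code.

```python
# Sketch of a port with an anti-corruption layer, assuming a hypothetical
# legacy payload shape. Names are illustrative, not the platform's real API.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Result:
    # Domain model, expressed in the domain's own terms.
    patient_id: str
    value: float


class ResultsPort(Protocol):
    # Port: what the domain needs from the outside world.
    def latest_result(self, patient_id: str) -> Result: ...


class LegacyResultsAdapter:
    """Anti-corruption layer: translate the legacy shape at the boundary
    so its quirks cannot leak sideways into domain logic."""

    def __init__(self, legacy_client):
        self._legacy = legacy_client

    def latest_result(self, patient_id: str) -> Result:
        raw = self._legacy.fetch(patient_id)  # e.g. {"PatID": ..., "Val": "3.2"}
        return Result(patient_id=raw["PatID"], value=float(raw["Val"]))
```

Swapping the legacy system out later means writing a new adapter against the same port; the domain code does not change.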

Simplify Legacy

We reduced change risk by shrinking the blast radius. We deleted dead code where it was safe, were cautious where it was messy, and focused on refactoring where we were actively developing. Less code meant fewer hiding places for bugs and fewer surprises.

Smaller scope meant safer releases, fewer approvals, and the ability to ship on demand.


Living With Legacy

Shipping the new services became straightforward. Shipping the legacy still hurt, so we changed how we worked with it.

Build Small, Merge Often

Long-lived branches were where trouble festered. Conflicts piled up, merges bunched at the end of sprints, and the fix was always bigger than the original change. We cut the work thinner. Small slices, merged early, kept differences local and visible. Instead of nursing a branch for weeks, we got used to landing changes while they were still fresh and easy to reason about.

Keep Moving Forward

Bundling for the Friday release had felt efficient; in practice it concentrated risk and delayed value. We moved towards trunk-based development: integrate to main often and share the integration work across the team rather than leaving it with a release engineer.

The cadence changed the conversation — less orchestration, more steady movement.

Test What You Plan to Ship

Treat every merge to main as a release candidate.

Previously, we tested features on isolated branches and only saw the real picture when everything was bundled together. We flipped it. Every merge to main was treated as a release candidate, and checks ran where services met.

The thing we tested matched the thing we deployed, which cut down on late surprises.

Stay Ready to Release

From every two weeks, to every two days.

Unpredictability came from bundling and hidden conflicts. We kept main releasable and enforced quality gates in a fast, trusted pipeline - green means deployable. Hotfixes became rare, releases became routine, and we moved from every two weeks (maybe) to every two days (if needed).
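The "green means deployable" rule is simple enough to model directly. This toy gate check assumes example stage names (`unit_tests`, `integration_tests`, `contract_tests`, `lint`), not the team's actual pipeline: a merge to main only counts as a release candidate when every gate is green.

```python
# Minimal model of "green means deployable": any red gate blocks the
# release candidate. Gate names are examples, not real pipeline stages.

GATES = ("unit_tests", "integration_tests", "contract_tests", "lint")


def evaluate_release_candidate(results: dict) -> tuple:
    """Return (deployable, failing_gates).

    A gate missing from `results` counts as red, so an incomplete
    pipeline run can never be promoted by accident.
    """
    failing = [gate for gate in GATES if not results.get(gate, False)]
    return (len(failing) == 0, failing)
```

Because main is kept in this state after every merge, releasing becomes a decision rather than an event: any green commit can ship.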


Reflection

What’s the smallest safe slice we could cut over behind a feature flag,
and what would keep main releasable while we do it?

If touching the legacy still feels like defusing a bomb, that’s a system signal — where coupling, batching, and approval drag are doing the talking.


Takeaways

- “Just try harder” is a symptom of the system, not a strategy; fix batching and coupling, not people.
- Replace legacy piece by piece with the strangler fig pattern: expand → migrate → contract, behind feature flags.
- Build small and merge often; trunk-based development keeps conflicts local and visible.
- Treat every merge to main as a release candidate, and enforce quality gates: green means deployable.
- Smaller scope means safer releases, fewer approvals, and releases on demand.


Want to go deeper?

If you want to explore the thinking behind smaller, safer changes, sustainable delivery, and system-level improvement, here are four excellent reads:

Better software isn’t about trying harder.

We didn’t fix delivery by pushing people harder — we changed the system they worked in. Because when the system improves, everything else starts to shift too: flow returns, trust recovers, and momentum comes back.

That’s what these three strategies are about.

So let’s go beyond “just try harder” and build better systems — and better software.

Up Next: Part 3 - Controlled Delivery

Find bugs before your customers do.

How to make production safe to learn: feature flags, dark launches, staged rollouts, and telemetry — so you can ship without fear.