Three Effective Strategies to Tackle Bugs

Part 1: Fast Feedback
Get feedback in the time it takes to make a cup of tea
TL;DR - Fast Feedback: Catch Bugs Before They Catch You
“Just try harder” isn’t a strategy — it’s a symptom.
In Part 1 of this three-part series, we dig into how slow feedback loops break trust, block flow, and make bugs a way of life — and how we turned it around.
You’ll learn how we:
- Cut test feedback from 30 minutes to under 5
- Tagged and prioritised tests by value and cost
- Wired in pre-commit hooks to catch issues before they left the laptop
- Rebuilt trust in CI by simplifying and speeding up the pipeline
- Reclaimed nearly 20 hours per dev per month for actual building, not firefighting
We didn’t fix feedback by trying harder. We fixed a system that made it hard to succeed.
If you’re scaling fast, wrangling legacy, or firefighting more than you’re shipping, this is worth your tea break.
This series began as a conference talk I gave at LDX3 and Agile on the Beach — a story of how I helped a team tackle delivery friction head-on in their fast-growing HealthTech platform. What started as a few slides and field notes turned into a set of strategies any team can learn from, whether you’re scaling fast or wrestling legacy.
“Just Try Harder” Isn’t a Strategy!
We’ve all heard it. After a bug slips through, a test fails late, or a release goes sideways: Just try harder next time.
More often, it’s implied in retrospectives, reviews, and post-mortems that point fingers instead of fixing systems. Developers need to test more. Testers need to catch more. Then it’s back to the developers again. The blame loops, but the system stays the same.
Slow feedback doesn’t just let through the bugs that break releases. It breaks flow, shakes trust, and kills momentum. Teams slow down — not because they aren’t trying, but because the system is working against them.
“Just try harder” isn’t a strategy. It’s a symptom — a sign of a system that makes it easy to fail.
So let’s stop blaming people — and start fixing the system. Because when bugs become a way of life, it’s not an effort problem. It’s a feedback problem.
This is the first in a three-part series about how I helped a HealthTech platform shift from firefighting to fast flow — and made software delivery faster, safer, and more predictable under pressure.
- Fast Feedback: Get feedback in the time it takes to make a cup of tea
- Smaller, Safer Changes: You can’t inspect a fourth leg onto a three-legged table
- Controlled Delivery: Find bugs before your customers do
I’ll share what worked - no frameworks, no heroics. Just practical, hard-earned patterns from the coalface.
And we’ll start where most bugs begin — or could’ve ended — with feedback loops.
Lie of the Land
Let’s start with what we inherited — because context shapes strategy.
When I joined the team, software quality relied on instinct. There were some automated tests in the codebase, but many were broken, and the tests weren’t routinely run. Confidence came from manual checks and the “spidey-sense” of a seasoned QA. It mostly worked — until it didn’t.
The tests we did have took over 30 minutes to run. So developers avoided running them.
The CI pipeline ran deployments, but not much else. No quality gates. No security checks. No signal on whether the code was safe to ship.
Under the hood, performance wasn’t helping. We were using the Laravel framework, which added significant test overhead, and database access in the tests made matters worse. Together, they made the feedback loop 100 times slower than it needed to be.
That was the starting point.
Each of these wasn’t just a quirk — it shaped developer behaviour. When feedback takes 30 minutes, it’s not a surprise that people stop asking for it. When pipelines don’t test, teams stop trusting them. These weren’t individual problems. They were symptoms of a system that had stopped signalling risk.
Quality Relied on Instinct
We were building software with a crystal ball, not a feedback loop.
When I joined, there were no automated quality checks — not in CI, not in the commit process, not even as a local habit. The only visible effort toward test automation was a proof-of-concept (POC) visual regression tool, built by a newly hired QA engineer. It covered a handful of screens and offered a thin layer of reassurance, but it wasn’t yet integrated into any delivery process.
Real confidence came from one person — a former call centre staff member who’d lived through previous bugs and fielded the fallout directly from customers. Her experience gave her an uncanny ability to spot issues before they reached production. But there was no system-level feedback. No tests you could trust. No guardrails around change. It felt like we were building software with a crystal ball — predicting issues by memory and instinct, not signal. And that only works until it doesn’t.
The Tests Were Broken
We didn’t have time to do things properly, because we were too busy fixing the fallout from not doing things properly.
There were automated tests in the codebase - technically. When I arrived, around half of them were broken or failing. They hadn’t been maintained, hadn’t been trusted, and hadn’t been part of anyone’s delivery flow. Over time, failed tests were ignored, skipped, or quietly left behind.
It wasn’t that the team didn’t care. Testing’s hard when it’s not routine, and no one really had the skills or time to fix the tests properly. Running the full suite took over 30 minutes, and even the passing tests weren’t fast or focused enough to be useful. Testing was seen as a luxury — something you might do if there was time. But there never was. The team were stuck in a loop: constantly firefighting the fallout of changes, without enough space to prevent the next fire. We didn’t need more discipline. We needed a way out.
The Pipeline Ran Nothing
The pipeline didn’t slow us down, but it didn’t protect us either.
A deployment pipeline should act as a safety net. It should tell you whether your change is safe to ship - or stop you if it’s not. But this one didn’t. It looked like automation, but it didn’t run any tests, perform any checks, or enforce any quality gates. It simply executed deployment scripts.
There was no feedback on whether the code worked. No signal about performance, security, or correctness. You pressed a button and hoped for the best. And when it failed? We patched it fast — whatever made it pass. Not safe. Not clean. Just green. The pipeline didn’t slow us down, but it didn’t protect us either. And in a system already lacking confidence, that kind of false reassurance only added to the risk.
Framework Added 100x Overhead
Part of the problem was architectural. The test suite was slow, not just because of test design, but because of the stack itself. Every test booted the full Laravel framework, which added significant overhead — around 10x slower than running raw PHP. On top of that, many tests hit the database, introducing another 10x slowdown.
Combined, this created a 100x performance penalty compared to fast, isolated, pure PHP unit tests. But no one had profiled the test suite or questioned the delay. No one expected fast feedback, so no one optimised for it. Slowness had become invisible — just part of the job, accepted as the cost of doing business.
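To make that concrete, here is a rough sketch of the two extremes (the class names, test names, and route are invented, and the second test only runs inside a real Laravel application):

```php
<?php
// Illustrative only: the fast and slow extremes of the same suite.

use Illuminate\Foundation\Testing\RefreshDatabase;
use PHPUnit\Framework\TestCase;

final class PriceCalculator
{
    public function discounted(float $price, float $rate): float
    {
        return $price * (1 - $rate);
    }
}

// Fast: raw PHP, no framework boot, no database. Runs in milliseconds.
final class PriceCalculatorTest extends TestCase
{
    public function testAppliesDiscount(): void
    {
        $this->assertSame(75.0, (new PriceCalculator())->discounted(100.0, 0.25));
    }
}

// Slow: extends the Laravel base test case, so every test boots the full
// framework (roughly the first 10x), and RefreshDatabase plus real queries
// add database round-trips on top (roughly the second 10x).
final class PriceEndpointTest extends \Tests\TestCase
{
    use RefreshDatabase;

    public function testReturnsDiscountedPrice(): void
    {
        $this->getJson('/api/prices/1')->assertOk(); // hypothetical route
    }
}
```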
We couldn’t fix everything at once, but we could start shrinking the loop. And the shortest loop was local. If we could cut feedback from half an hour to a few minutes, we’d break the cycle of firefighting and delay.
That became the first goal.
Local Feedback
Get feedback in the time it takes to make a cup of tea.
We didn’t start by rewriting the pipeline or changing the framework. We started smaller, with what developers could see and feel, locally.
The rule was simple: get feedback in the time it takes to make a cup of tea. Not a Netflix episode. Not a lunch break. Just a few minutes — enough time to stay in flow, spot a mistake, and act on it before moving on. That meant treating slow tests as a system smell — and working to tighten the loop from keystroke to signal.
Measure Test Runtime
We started by profiling how long each test took to run — not to optimise straight away, but to see what was going on. Using the IDE’s test runner, we could spot which individual files and functions dragged down the whole suite. The results were obvious and painful: most of the runtime came from a handful of slow, integration-heavy tests that booted the full Laravel framework or hit the database.
Until then, everyone had just accepted the test suite as slow. Measuring runtime gave us something to point to — something to question. It turned slowness from background noise into something we could challenge. Once you can see where the time goes, it’s hard to unsee it.
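If you prefer the command line to the IDE, one way to get the same signal is to have PHPUnit write a JUnit XML report and sort the recorded per-test times. A minimal sketch, with the report path and the top-ten cut-off as arbitrary choices:

```php
<?php
// profile-tests.php - list the slowest tests from a PHPUnit JUnit report.
// Generate the report first, e.g.: vendor/bin/phpunit --log-junit build/junit.xml

$report = $argv[1] ?? 'build/junit.xml';
$xml = simplexml_load_file($report) or exit("Could not read {$report}\n");

$timings = [];
foreach ($xml->xpath('//testcase') as $testcase) {
    // Sum durations per test name (data providers emit one entry per case).
    $name = ($testcase['classname'] ?? $testcase['class']) . '::' . $testcase['name'];
    $timings[$name] = ((float) $testcase['time']) + ($timings[$name] ?? 0.0);
}

arsort($timings); // slowest first

foreach (array_slice($timings, 0, 10, true) as $name => $seconds) {
    printf("%8.3fs  %s\n", $seconds, $name);
}
```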
Tag Tests by Cost
We grouped tests by cost — each suite assigned a tag and a folder:
- @unit: raw PHP, no framework, no database
- @integration: boots Laravel, no database
- @e2e: full stack - Laravel & DB
- @slow: expensive tests - we ran these manually when working in the covered code
This structure made test cost visible — and gave the team a shared language to balance coverage against speed. @unit tests ran constantly during development. @integration tests ran on commit, enforced by Git hooks. @e2e tests ran in the pipeline. @slow tests weren’t automated — we only ran them when we touched the relevant code.
It wasn’t about ignoring slow tests. It was about putting them in their place — and making fast feedback the default.
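In PHPUnit terms, the tagging was nothing exotic: a @group annotation per suite, selected with the --group and --exclude-group options. An invented example to show the shape:

```php
<?php
// Illustrative only: a trivial class and a tagged test showing how the @group
// annotation maps a test to its cost tier. The class under test is invented.

use PHPUnit\Framework\TestCase;

final class InvoiceTotal
{
    /** @param float[] $lines */
    public function __construct(private array $lines)
    {
    }

    public function total(): float
    {
        return array_sum($this->lines);
    }
}

/**
 * @group unit
 */
final class InvoiceTotalTest extends TestCase
{
    // Pure PHP: no framework boot, no database, so it runs in milliseconds.
    public function testTotalSumsLineItems(): void
    {
        $this->assertSame(19.75, (new InvoiceTotal([12.50, 7.25]))->total());
    }
}

// Select a slice of the suite by tag, for example:
//   vendor/bin/phpunit --group unit,integration   # the fast, pre-commit slice
//   vendor/bin/phpunit --exclude-group slow       # everything else, in the pipeline
```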
Catch Issues Early
We weren’t asking developers to care more. We were removing the friction that had stopped them from caring in the first place.
Once we’d isolated the fast tests, we wired them into a pre-commit hook. Now, every time a developer ran git commit, the @unit and @integration suites ran automatically. We also added codestyle and lint checks — another fast way to catch issues before they left a developer’s machine.
It was the first time feedback felt instant — not just available, but hard to ignore. And because the tests ran fast, they actually got used. We weren’t asking developers to care more. We were removing the friction that had stopped them from caring in the first place.
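The hook itself was a few lines of shell. In the sketch below, PHP_CodeSniffer and PHPStan stand in for whatever codestyle and lint tools your project already uses; only the group names come from the tagging scheme above:

```sh
#!/bin/sh
# .git/hooks/pre-commit - make it executable: chmod +x .git/hooks/pre-commit
# Any command that exits non-zero aborts the commit.
set -e

echo "Codestyle and lint..."
vendor/bin/phpcs                          # placeholder for your codestyle check
vendor/bin/phpstan analyse --no-progress  # placeholder for your linter

echo "Fast test suites..."
vendor/bin/phpunit --group unit,integration
```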
Prioritise What Matters
Fast feedback gave us more than just speed — it gave us signal. We could prioritise the tests that told us something useful quickly, and ignore the ones that ran long but said little.
It also helped us challenge false assumptions. A slow test that never caught real issues wasn’t valuable — it was just slowing us down. A fast test that caught frequent issues became part of every developer’s loop. This wasn’t about perfection. It was about investing attention where it earned its keep.
Local feedback sped things up — but speed alone doesn’t guarantee safety.
Bugs still slipped in after the merge.
If we wanted delivery that felt reliable, not risky, we needed feedback we could trust. That meant fixing the pipeline next.
Pipeline Feedback
Improving local feedback helped us catch issues earlier, but it wasn’t enough. Developers still needed confidence that their changes would hold up once merged. For that, we needed to rebuild trust in the pipeline.
We didn’t aim for perfection. We aimed for signal. The goal was to give developers fast, automated feedback after every commit, not just to deploy code, but to tell us whether it was ready to go. That meant simplifying the pipeline, speeding it up, and making the feedback visible and usable.
Keep Pipelines Simple
A good pipeline should be boring and predictable
The old pipeline had grown organically — bits bolted on over time, with no real structure or ownership. It didn’t slow you down, but it didn’t help you either. So we started again, with a single goal: fast, meaningful feedback on every commit. We used standard Bitbucket Pipelines — no custom runners, no bespoke setup — just the defaults used well.
That meant stripping out anything that didn’t directly contribute to confidence, including legacy scripts no one owned and stages no one could explain. No retries, no guesswork, no “magic happens here” stages. A good pipeline should be boring and predictable.
Cache What You Can
Once the pipeline was stripped back, we looked at where time was still being wasted, and most of it came down to repeat work. Dependencies reinstalled on every run. Rebuilding and recompiling at every stage.
So we started caching aggressively: vendor directories, build artefacts, test scaffolding — anything that didn’t need to be recalculated every time. Even Sonar analysis got the cache treatment — anything to avoid cold starts and shrink feedback time. It wasn’t glamorous work, but it made a difference. Every minute saved was a minute closer to confidence.
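In Bitbucket Pipelines terms, that mostly meant leaning on the predefined composer cache and defining a few custom ones. A sketch, with paths and names that are illustrative rather than exact:

```yaml
# bitbucket-pipelines.yml (fragment)
definitions:
  caches:
    vendor: vendor          # composer-installed packages
    sonar: ~/.sonar/cache   # Sonar analysis cache

pipelines:
  default:
    - step:
        name: Install
        caches:
          - composer        # Bitbucket's predefined Composer download cache
          - vendor
        script:
          - composer install --no-interaction --prefer-dist
        artifacts:
          - vendor/**       # hand the installed packages to later steps
```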
Caching reduced repeat work. But to shrink the total test time, we needed to reduce how much work happened sequentially.
Run Tests in Parallel
Developers started trusting the pipeline again because it finally earned their trust.
Even after simplification and caching, test time was still a bottleneck. So we split the test suites. Instead of running every test in sequence, we configured Bitbucket Pipelines to run test groups in parallel, cutting overall runtime without cutting corners. We split the pipeline into five parallel jobs — codestyle, linting, @unit, @integration, and @e2e — all reusing a shared, cached install step.
It took a bit of effort to get right, but the impact was immediate, and we hit our targets: under a minute for local feedback, and a pipeline that went from over 30 minutes to under five. Because that speed-up came without cutting quality, it built confidence, not just speed. Developers started trusting the pipeline again — because it finally earned their trust.
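The resulting shape, sketched below: one shared, cached install step feeding five parallel jobs. The codestyle and lint commands are placeholders for your own tools.

```yaml
# bitbucket-pipelines.yml (fragment) - one install step, then five parallel jobs.
pipelines:
  default:
    - step:
        name: Install
        caches: [composer, vendor]
        script:
          - composer install --no-interaction --prefer-dist
        artifacts:
          - vendor/**
    - parallel:
        - step:
            name: Codestyle
            script:
              - vendor/bin/phpcs
        - step:
            name: Lint
            script:
              - vendor/bin/phpstan analyse --no-progress
        - step:
            name: Unit
            script:
              - vendor/bin/phpunit --group unit
        - step:
            name: Integration
            script:
              - vendor/bin/phpunit --group integration
        - step:
            name: E2E
            script:
              - vendor/bin/phpunit --group e2e
```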
On average, each developer reclaimed nearly 20 hours a month — time that used to be lost to waiting, rework, and firefighting. Now, that time went back into focused flow work: building, testing, and improving.
Constantly Improve Tests
Test health became a shared concern, not a side project.
We didn’t fix the whole suite in one go — and we didn’t need to. What mattered was building the habit. Every time a test slowed us down or added noise, we asked: is this still earning its keep?
We followed the Boy Scout Rule: leave things better than you found them. If a test could be rewritten without hitting the DB or booting Laravel, we moved it down the hierarchy — @slow to @e2e, @e2e to @integration, @integration to @unit — always chasing a faster test time.
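As an invented example of what moving a test down the hierarchy looked like: a pricing check that used to seed the database through Laravel, rewritten as a pure unit test by swapping the real repository for a tiny in-memory fake.

```php
<?php
// Invented example: the old @e2e version booted Laravel and seeded plan rates
// into the database. Extracting a RateRepository interface lets the same
// behaviour run as a pure @unit test against an in-memory fake.

use PHPUnit\Framework\TestCase;

interface RateRepository
{
    public function rateFor(string $planCode): float;
}

final class InMemoryRateRepository implements RateRepository
{
    /** @param array<string, float> $rates */
    public function __construct(private array $rates)
    {
    }

    public function rateFor(string $planCode): float
    {
        return $this->rates[$planCode];
    }
}

final class SubscriptionPricer
{
    public function __construct(private RateRepository $rates)
    {
    }

    public function monthlyPrice(string $planCode, int $seats): float
    {
        return $this->rates->rateFor($planCode) * $seats;
    }
}

/** @group unit */
final class SubscriptionPricerTest extends TestCase
{
    public function testMultipliesRateBySeats(): void
    {
        $pricer = new SubscriptionPricer(new InMemoryRateRepository(['basic' => 5.0]));

        $this->assertSame(25.0, $pricer->monthlyPrice('basic', 5));
    }
}
```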
This wasn’t just a tooling change — it was a culture shift. We reinforced it in team meetings, documented the tags and expectations, and used code reviews to embed the habit.
It wasn’t about chasing perfect tests. It was about keeping the feedback loop fast, trusted, and fit for purpose.
Reflection
What’s stopping you from getting feedback before the kettle boils?
What tradeoffs are you making when feedback is slow?
Slow feedback isn’t just frustrating — it’s a symptom of a larger issue. A signal that something in your system is silently taxing your time, trust, and team flow.
- Is it a legacy pipeline that’s grown too slow to matter?
- Is it feedback so delayed that teams batch changes, skip checks, or delay testing entirely?
- Is it process friction that keeps you guessing?
Takeaways
- Bugs are cheapest when you catch them early — but you can only do that if your system tells you they’re there.
- Fast feedback unlocks trust, flow, and confidence across the whole team.
- Optimising test feedback isn’t just about tooling — it’s about shaping better habits.
- Profiling, tagging, and pre-commit checks can pay back hours of lost time every week.
- Fixing feedback loops isn’t the end — it’s where better delivery begins.
Want to go deeper?
If you want to explore the thinking behind fast feedback, sustainable delivery, and system-level improvement, here are four excellent reads:
- Accelerate — Nicole Forsgren, Jez Humble, Gene Kim: evidence-backed insights into what drives high-performing software teams, including why fast feedback loops matter.
- Making Work Visible — Dominica DeGrandis: a practical guide to uncovering the hidden queues and delays that quietly undermine team flow.
- Modern Software Engineering — Dave Farley: a principle-first take on engineering that scales, including the role of feedback, learning, and batch size.
- Resilient Management — Lara Hogan: smart, human-centred advice for leaders building teams that thrive in complex, high-change environments.
Better software isn’t about trying harder.
We didn’t fix fast feedback by pushing people harder — we changed the system they worked in. Because when the system improves, everything else starts to shift too:
- Overhead drops
- Batch size shrinks
- Complexity unwinds
- Bugs have nowhere to hide
That’s what these three strategies are about.
- Fast Feedback — make bugs cheaper to catch
- Smaller, Safer Changes — make changes easier to ship
- Controlled Delivery — make production a safe place to learn
So let’s go beyond “just try harder” and build better systems — and better software.
Up Next: Part 2 - Smaller, Safer Changes
You can’t inspect a fourth leg onto a three-legged table.
Fast feedback helped us move quicker, but it didn’t make changes feel safe. Our legacy systems still had a nasty habit of turning small tweaks into big surprises.
Next, we’ll look at how we shrank the blast radius, killed checklist theatre, and made small the default.