Tests as project insurance: what happens when they’re missing?
Lack of tests rarely hurts right away. It usually starts innocently: “we’ll add them later,” “it’s just a small fix,” “we’re under deadline pressure.” Then symptoms appear that both developers and CTOs know: regressions in random places, growing fear of change, ever-longer code reviews, and releases that, instead of being routine, become high-risk events.
Typical signs that a project is running without “insurance”:
1) Regressions after small changes: you fix validation in one form and the data import breaks.
2) “Don’t touch it or it’ll break”: the team avoids parts of the code because no one is sure what it will break.
3) Code review turns into bug hunting: instead of talking about architecture and readability, you try to manually trace every path.
4) Unstable deployments: every release risks a hotfix, and rollbacks happen more often than they should.
Most importantly: the cost of a bug grows with the stage at which you detect it. A bug caught on a developer’s laptop usually costs minutes. The same bug caught on staging costs hours and involves more people. A bug in production is an incident: potential revenue loss, downtime, SLA breaches, support cost, and context switching for the whole team.
The business consequences aren’t abstract. If the application handles payments, registrations, orders, or internal processes, a regression can mean: lower conversion, loss of user trust, additional ticket-handling costs, and sometimes real contractual penalties. Tests don’t guarantee the absence of bugs, but they drastically reduce the risk that you’ll hear about them from a customer.
Who tests are for: the developer vs. CTO perspective
Tests are often sold as “quality.” That’s true, but in practice tests are primarily a tool for reducing uncertainty — for different roles in slightly different ways.
A developer gains fast feedback and peace of mind when making changes. Well-written tests shorten the loop: change → run tests → you know whether you broke something. That means less manual clicking through the app, less “debugging blind,” and more courage to refactor.
A CTO views tests as a mechanism for risk control and delivery predictability. When tests are part of the process, it’s easier to plan releases, reduce change failure rate, and shorten MTTR. Tests become part of the quality management system, not just a “dev practice.”
The common denominator: tests reduce the cost of change. And the cost of change is one of the main levers that determines whether a product evolves quickly and stably, or “gets stuck” in regressions and hotfixes.
What problems do application tests solve (and how)?
1) Early detection of bugs and regressions
Tests catch problems when they’re cheapest to fix. Example: you change how a discount is calculated. Unit tests confirm the discount doesn’t exceed the limit, integration tests verify the database write is correct, and E2E tests confirm the user sees the right price in the cart.
2) Stabilizing deployments (CI/CD) and shorter time from commit to release
Without tests, CI is at best a compilation step and at worst a formality. With tests, the pipeline becomes a quality gate: if something breaks critical behavior, the PR doesn’t pass. This enables more frequent deployments in smaller batches, which in turn reduces risk.
3) Limiting technical debt through safer refactoring
Technical debt often grows because you’re afraid to touch the code. Tests change the dynamics: you can rebuild a module, simplify dependencies, split a monolith into components — and get automatic feedback on whether the system’s behavior remains as expected.
4) Onboarding and “living documentation”
Descriptive documentation becomes outdated quickly. Tests, if readable, show how the system is supposed to work. A new team member can see: what the edge cases are, what the business rules are, what the module’s “contract” is.
Types of tests: what to test and why
Unit tests — the fastest, usually the cheapest to maintain. Ideal for business logic: functions, classes, validation rules, calculations, permission policies. Example: “for a cart above 200 PLN the discount is 10%, but no more than 50 PLN.”
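The discount rule above can be expressed as a handful of unit tests. This is a minimal sketch: the function name `calculate_discount` and its exact shape are hypothetical, invented here to illustrate the rule from the example (real money code would typically use `Decimal` rather than floats).

```python
# Hypothetical discount rule from the example: carts above 200 PLN
# get a 10% discount, capped at 50 PLN.
def calculate_discount(cart_total: float) -> float:
    if cart_total <= 200:
        return 0.0
    # Rounded to 2 decimals; production money code should use Decimal.
    return min(round(cart_total * 0.10, 2), 50.0)

# Unit tests: fast checks of the business rule, one behavior per test.
def test_no_discount_at_or_below_threshold():
    assert calculate_discount(200) == 0.0

def test_ten_percent_above_threshold():
    assert calculate_discount(300) == 30.0

def test_discount_is_capped_at_50():
    assert calculate_discount(1000) == 50.0
```

Each test name states the condition and the expected outcome, so a failing test immediately tells you which part of the rule broke.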
Integration tests — verify cooperation between components: database, queues, cache, HTTP integrations, ORM, migrations. They catch contract and configuration bugs that unit tests won’t. Example: an endpoint creates an order, writes records to two tables, and publishes an event to a queue.
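The two-table example can be sketched as an integration test against a real (in-memory) database. The schema and the `create_order` function are hypothetical stand-ins; the point is that the test exercises real SQL, so it catches mapping and schema bugs a unit test with mocks would miss.

```python
import sqlite3

def setup_schema(conn):
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    conn.execute("CREATE TABLE order_items (order_id INTEGER, sku TEXT, qty INTEGER)")

def create_order(conn, items):
    # items: list of (sku, unit_price, qty) tuples
    total = sum(price * qty for _, price, qty in items)
    cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
    order_id = cur.lastrowid
    conn.executemany(
        "INSERT INTO order_items (order_id, sku, qty) VALUES (?, ?, ?)",
        [(order_id, sku, qty) for sku, _, qty in items],
    )
    return order_id

def test_create_order_writes_both_tables():
    conn = sqlite3.connect(":memory:")  # fresh database per test: deterministic
    setup_schema(conn)
    order_id = create_order(conn, [("SKU-1", 100.0, 2), ("SKU-2", 50.0, 1)])
    # Verify the write to the first table...
    (total,) = conn.execute("SELECT total FROM orders WHERE id = ?", (order_id,)).fetchone()
    assert total == 250.0
    # ...and the write to the second table.
    items = conn.execute("SELECT sku, qty FROM order_items WHERE order_id = ?", (order_id,)).fetchall()
    assert sorted(items) == [("SKU-1", 2), ("SKU-2", 1)]
```

In a real project the queue-publish step would be verified the same way, against a test broker or an in-process fake.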
End-to-end (E2E) tests — simulate user behavior on critical paths. They’re the most expensive (time, maintenance, flakiness), but provide the highest level of confidence for key flows. Example: registration → login → add product → payment → confirmation.
Contract tests (consumer-driven contracts) — especially important in microservices and cross-team integrations. Instead of “hoping the API won’t change,” you formalize the consumer’s expectations. Example: service A expects service B to return the “status” field as an enum with known values.
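A consumer-side contract check can be as small as a set of explicit assertions about the provider's response shape. The field names and status values below are hypothetical, following the service-A/service-B example; in a real setup a tool like Pact would publish this contract and verify it against the provider.

```python
# The consumer's expectations about service B's response, pinned down.
ALLOWED_STATUSES = {"pending", "paid", "shipped", "cancelled"}

def check_order_contract(response: dict) -> None:
    assert "status" in response, "provider must return a 'status' field"
    assert response["status"] in ALLOWED_STATUSES, (
        f"unknown status: {response['status']!r}"
    )
    assert isinstance(response.get("order_id"), int)

def test_contract_accepts_known_status():
    check_order_contract({"order_id": 42, "status": "paid"})

def test_contract_rejects_unknown_status():
    try:
        check_order_contract({"order_id": 42, "status": "archived"})
    except AssertionError:
        pass  # expected: the consumer does not know this status
    else:
        raise AssertionError("contract should reject unknown statuses")
```

The value is in making expectations explicit: when the provider wants to add a status, the broken contract test forces a conversation instead of a production incident.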
Performance and security tests — non-functional risks usually surface too late if they aren’t tested. Performance: will the system withstand a marketing campaign? Security: are there obvious authorization holes, do dependencies have critical CVEs?
The test pyramid and coverage strategy: how to find the right proportions
The classic test pyramid says: most unit, fewer integration, the fewest E2E. The reason is simple: unit tests are fast and stable, while E2E tests are slow and brittle. But the pyramid isn’t a dogma — it’s a heuristic.
When is it worth deviating from the “ideal” pyramid?
1) A legacy system without layer separation: sometimes it’s faster to start with integration/characterization tests, because unit tests require major code restructuring.
2) Critical business processes: if a checkout bug costs real money, a minimal E2E set is non-negotiable.
3) Microservices: the importance of contract and integration tests grows, because the risk lies in the interfaces between services.
It’s also worth distinguishing code coverage from risk coverage. Coverage percentage can be misleading: you can have 80% and still not test what really hurts (e.g., payments, permissions, migrations). A better question is: “Do we have tests for critical paths and the riskiest modules?”
A practical method for matching tests to risk:
1) Critical user paths (registration, purchase, login, data export).
2) Frequently changed areas (where regressions happen most often).
3) Highly complex modules (many conditions, rules, integrations).
4) Places where a bug is expensive (finance, permissions, SLA).
Benefits for the development team: less stress, more control
Courage to change and faster refactoring
Tests change day-to-day work: instead of “I touched it and I’m scared,” you have a mechanism to confirm the system’s behavior remains correct. This is especially important when cleaning up architecture, simplifying dependencies, and paying down debt.
Better code review
When tests are the standard, review stops being manual QA. You focus on: readability, module boundaries, naming, responsibilities, contracts. Errors like off-by-one or missing null handling are more often caught by the test suite.
Less debugging and manual repetition
Without tests, many teams do the same thing: start the app, click through 10 scenarios, check logs. Tests automate repetitive checking. Debugging still exists, but it less often starts with “I don’t know what broke” — more often with “this test shows what doesn’t work.”
Consistency of system behavior
Tests are a contract. If today “order status” transitions are A→B→C, tests enforce that tomorrow someone won’t add a shortcut A→C without a conscious decision and an update to the rules.
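The A→B→C rule can be made an explicit, tested contract. The states below are hypothetical placeholders for the example; the mechanism is what matters: adding a shortcut requires consciously changing both the transition table and the test.

```python
# Allowed order-status transitions, stated in one place.
ALLOWED_TRANSITIONS = {
    "new": {"paid"},        # A -> B
    "paid": {"shipped"},    # B -> C
    "shipped": set(),       # terminal state
}

def transition(current: str, target: str) -> str:
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

def test_happy_path_is_allowed():
    assert transition("new", "paid") == "paid"
    assert transition("paid", "shipped") == "shipped"

def test_shortcut_is_rejected():
    # A "new" -> "shipped" shortcut must be a conscious decision,
    # not an accidental side effect of an unrelated change.
    try:
        transition("new", "shipped")
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("shortcut should be rejected")
```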
Benefits for the CTO and the business: predictability, cost, and risk
Reducing the cost of production bugs
Incidents are costly in multiple dimensions: team time, lost revenue, reputation, customer support. Tests reduce the probability of an incident and shorten diagnosis time when something does happen.
Faster and safer releases
Tests support a small-deployments strategy. Smaller batches mean lower risk, simpler rollback, and faster root-cause identification. In practice, tests are one of the foundations that enable moving from “release once a month” to “release several times a week.”
Quality KPIs and DORA
If you measure Lead Time, Deployment Frequency, Change Failure Rate, and MTTR, tests directly affect each of these metrics:
1) Lead Time: less time spent on manual checking and firefighting.
2) Deployment Frequency: more confidence = more frequent deployments.
3) Change Failure Rate: fewer regressions after deployment.
4) MTTR: faster diagnosis thanks to repeatable scenarios and more stable changes.
Scaling the team
As the number of people and parallel changes grows, the risk of conflicts and regressions increases. Tests become the standard that maintains quality regardless of who touches a module. This is especially important with turnover, onboarding, and multi-team work.
The most common objections: “tests slow us down” and “we don’t have time”
Objection 1: “Tests slow down development”
In the short term — yes, because you’re writing additional code. In practice, however, tests often shorten delivery time because they reduce debugging, manual testing, and the number of post-deployment fixes. If the team regularly loses days to regressions and hotfixes, tests are an investment with a fast payoff.
Objection 2: “We don’t have time”
Most often it means: “we have too much uncertainty and too much firefighting.” Lack of tests can be the cause, not the effect. A sensible strategy is small steps: don’t try to cover everything; start where a bug is most expensive.
How to start in a legacy project without tests?
1) The “tests for new changes” rule: every new feature and every bugfix gets a test.
2) Characterization tests: before you touch a risky part, describe the current behavior with tests (even if it’s “weird”), so you can refactor without guessing.
3) Priority: critical flows + modules that break often.
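A characterization test looks like any other test, except the assertions capture what the code does today, not what a spec says it should do. The `legacy_format_price` function below is a hypothetical stand-in for a quirky legacy routine:

```python
# Hypothetical legacy function with "weird" but current behavior:
# truncates to 2 decimals and never pads trailing zeros.
def legacy_format_price(amount):
    return str(int(amount * 100) / 100) + " PLN"

# Characterization tests: we assert on observed outputs, even the odd
# ones, so refactoring has a safety net. If a test fails after a
# refactor, behavior changed -- intentionally or not.
def test_characterize_missing_zero_padding():
    # Captured from the current implementation: "19.5", not "19.50".
    assert legacy_format_price(19.5) == "19.5 PLN"

def test_characterize_float_artifacts():
    # 0.1 + 0.2 is 0.30000000000000004; the truncation hides it.
    assert legacy_format_price(0.1 + 0.2) == "0.3 PLN"
```

Once the behavior is pinned down, you can refactor freely; only then do you decide which of the "weird" behaviors are bugs worth fixing on purpose.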
How to reduce flakiness (unstable tests)?
1) Deterministic test data: fixed fixtures, controlled database seeding.
2) Dependency isolation: stubs/fakes for external services that are unstable.
3) Stable CI environments: reproducible containers, dependency version control, no configuration “magic.”
4) Realistic E2E scope: fewer scenarios, but well-chosen and well-maintained.
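Point 1 above, deterministic test data, can be sketched with a seeded local random generator: generated fixtures look realistic but are identical on every run and every machine. The fixture shape is hypothetical.

```python
import random

def make_test_users(count: int, seed: int = 1234):
    # A local Random instance with a fixed seed: no shared global state,
    # and the "random" data is the same in every run and in CI.
    rng = random.Random(seed)
    return [
        {"id": i, "name": f"user{i}", "score": rng.randint(0, 100)}
        for i in range(count)
    ]

def test_data_is_reproducible():
    # Two independent calls with the same seed yield identical data.
    assert make_test_users(3) == make_test_users(3)
```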
How to introduce tests step by step (a 2–4 week plan)
The plan below assumes a pragmatic approach: quick wins, minimal chaos, and measurable progress.
Week 1: risk audit and standards
1) Identify 3–5 critical paths (e.g., login, payment, resource creation).
2) Define “done”: what makes a PR ready to merge (e.g., tests for new logic, no red tests, a minimum quality level).
3) Set standards: test naming, folder structure, AAA conventions.
Week 2: CI on PRs and fast unit tests
1) Run tests on every PR as a merge condition.
2) Add unit tests for new changes (without trying to “cover the whole world”).
3) Set a time budget: e.g., the unit test suite should run within X minutes.
Week 3: first integration tests
1) Choose 1–2 key modules (e.g., orders, permissions).
2) Build stable test data (seed, migrations, containers).
3) Add integration tests where the risk lies in contracts (database, queues, APIs).
Week 4: a minimal E2E set and reporting
1) 3–7 E2E scenarios for critical flows (not dozens).
2) Report results (e.g., in CI): you can see what breaks and when.
3) Reduce flakiness: retries only as a band-aid; removing root causes is the priority.
Metrics worth defining from the start:
1) Build time and test suite time.
2) Number of post-deployment regressions (monthly/per release).
3) Change Failure Rate and MTTR (if you track DORA).
4) Number of hotfixes and rollbacks.
Practical rules for writing good tests (checklist)
1) Test behavior, not implementation
If a test breaks with every refactor, it often means it’s too “glued” to details. Example: instead of checking that a specific private method was called, check the outcome and side effects that are part of the contract.
2) Use AAA (Arrange–Act–Assert) and clear names
A good test reads like a specification. The name should say: condition + action + expected result. Example: “when the cart is empty, placing an order returns a validation error.”
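The example name above can be written out in the AAA layout. `place_order` and `ValidationError` are minimal hypothetical stand-ins:

```python
class ValidationError(Exception):
    pass

def place_order(cart: list):
    if not cart:
        raise ValidationError("cart is empty")
    return {"status": "created", "items": len(cart)}

def test_placing_order_with_empty_cart_returns_validation_error():
    # Arrange: an empty cart
    cart = []
    # Act + Assert: placing the order raises a validation error
    try:
        place_order(cart)
    except ValidationError as e:
        assert "empty" in str(e)
    else:
        raise AssertionError("expected a validation error")
```

With pytest you would use `pytest.raises(ValidationError)` for the act/assert step; the plain `try/except` above keeps the sketch dependency-free.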
3) Avoid brittle dependencies
The most common sources of brittleness: time, network, execution order, shared state. If you must use time, freeze it (fake clock). If you must use the network, stub external services.
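Freezing time is easiest when the clock is a dependency you pass in, rather than a `datetime.now()` call buried in the logic. The promo rule and its deadline below are hypothetical:

```python
import datetime

def is_promo_active(now: datetime.datetime) -> bool:
    # Hypothetical business rule: promo runs until the end of 2024.
    return now < datetime.datetime(2025, 1, 1)

def test_promo_active_before_deadline():
    frozen_now = datetime.datetime(2024, 12, 31, 23, 59)  # fake clock
    assert is_promo_active(frozen_now) is True

def test_promo_inactive_after_deadline():
    frozen_now = datetime.datetime(2025, 1, 1, 0, 0)
    assert is_promo_active(frozen_now) is False
```

The same pattern works for libraries like freezegun, but injecting the clock keeps the production code testable without any patching at all.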
4) Use mocks in moderation
Mocks are great for isolating units, but overused they create a false sense of safety. If the risk lies in integration (e.g., ORM mapping, API contract), prefer an integration test on real components.
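A reasonable use of a mock is isolating a genuinely external dependency, such as a payment gateway over the network. `charge_order` and the gateway interface are hypothetical; the test verifies the unit's logic and its interaction with the gateway, without making a real call.

```python
from unittest.mock import Mock

def charge_order(gateway, order_total: float) -> str:
    response = gateway.charge(amount=order_total)
    return "paid" if response["ok"] else "failed"

def test_charge_order_uses_gateway_response():
    gateway = Mock()
    gateway.charge.return_value = {"ok": True}  # no real network call
    assert charge_order(gateway, 99.0) == "paid"
    # Verify the interaction that IS part of the contract.
    gateway.charge.assert_called_once_with(amount=99.0)
```

The caveat from the text still applies: this test says nothing about whether the real gateway accepts `amount=99.0`. That risk lives at the integration boundary and needs an integration or contract test.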
5) Care about speed and ergonomics
1) Run tests in parallel where it makes sense.
2) Split into suites: unit tests separately, integration tests separately, E2E separately.
3) Test selection in CI: fast tests on PRs, heavier ones at night or before release (depending on the process).
Summary: when tests deliver the highest return on investment
The biggest ROI from tests appears in systems that are: deployed frequently, business-critical, developed by many people, and those where the cost of a bug is high (finance, permissions, SLA, reputation). But even in smaller projects, tests quickly pay off through fewer regressions and shorter diagnosis time.
The most effective strategy is small steps: start with critical paths and the highest-risk areas, then expand coverage. Tests work best when they’re part of a quality culture: the definition of “done,” CI/CD, and shared team responsibility.
FAQ
Are unit tests enough to be confident about quality?
Not always. Unit tests are the foundation (fast and cheap), but they won’t catch issues at component boundaries. For real confidence you also need integration tests and a minimal set of E2E tests for critical user paths.
How much test code coverage is “enough”?
There is no universal number. Coverage percentage is a supporting indicator, not a goal. It’s better to measure risk coverage: critical flows, frequently changed modules, highly complex areas, and those where a bug is most expensive.
How do you start writing tests in a legacy project without tests?
Start with tests for new features and bugfixes, and with characterization tests for the riskiest parts. This allows safe refactoring without having to immediately “rewrite everything.”
Why can E2E tests be problematic?
They’re slower and more prone to flakiness because they depend on many parts of the system (frontend, backend, database, network, data). That’s why it’s worth limiting them to key scenarios, ensuring a stable CI environment, and using deterministic data.
How do tests affect CI/CD and deployment frequency?
They provide fast feedback and reduce the risk of regressions. This allows more frequent deployments in smaller batches, with a lower change failure rate and shorter MTTR, because issues are detected earlier and easier to localize.
What next?
If you want to approach testing pragmatically (without “religion” and without paralysis), start with a short rollout plan and clear team standards.
Getting-started steps (to do this week):
1) List 3–5 critical application paths and places where a bug is most expensive.
2) Define “done” for PRs: tests for new logic + a green pipeline.
3) Add running tests on PRs in CI (even if at first it’s only unit tests).
4) Pick one high-risk module and add the first tests (unit or integration — depending on the architecture).
5) After 2 weeks, compare: number of regressions, debugging time, number of hotfixes, time from commit to deployment.
A decision that usually works: not “we write tests everywhere,” but “we test what carries risk and what we change.” This strategy delivers a quick return and builds a quality habit without blocking delivery.