
How Big Design Teams Miss Critical Errors

Big teams miss critical errors because requirements are scattered across BIM, PDFs, and giant spreadsheets, and human review doesn't scale. A recent hospital case shows one wrong ventilation requirement surviving multiple reviews and delaying opening by ~20 months. The fix is shifting from memory-based checks to systemised validation that runs repeatedly across models and documents.

12 Mar 2026

The ventilation case that should worry every design lead

A major UK hospital project was delayed just days before opening after ventilation issues were discovered in paediatric critical care. The full opening was pushed back by around 20 months, with costs estimated in the region of £16–17 million.

At the heart of the issue was an “Environmental Matrix” – a spreadsheet used to list ventilation parameters like pressure regimes and air changes per hour for each hospital room. A senior reviewer described it simply as a list pulling together environmental specs against room types.

A later review found that an early version of the matrix, dating back years, had incorrectly specified four air changes per hour instead of the required ten for critical care rooms. Although one bidder corrected it during the tender process, the error persisted across reviews, in what investigators ultimately described as a "collective failure".

Manual checking doesn’t scale when key inputs live in massive matrices

If your source of truth is a matrix, scale is your enemy. For example: 700 rooms × 25 parameters (air changes, pressures, filtration, alarms, temps…) is 17,500 datapoints – before revisions.

The Inquiry itself notes why this is fragile: unlike database-driven room data sheets, an environmental matrix is manually populated and exposed to transcription errors. Spreadsheet research supports the bigger point: even low cell-level error rates become near-certain somewhere in large files. Studies of operational spreadsheets report errors in roughly 0.8–1.8% of formula cells; manual data entry in a clinical research repository has been measured at ~2.8% overall error.

Even with a brilliant team, 1% error across 17,500 entries is ~175 wrong values. You only need one.
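The arithmetic behind that claim is worth making explicit. This is a minimal sketch using the article's example figures (700 rooms × 25 parameters) and the published cell-level error rates; the numbers are illustrative, not data from the Inquiry:

```python
# Expected error counts for a manually populated environmental matrix.
# Room and parameter counts are the article's worked example, not real project data.
rooms, params = 700, 25
cells = rooms * params  # 17,500 datapoints, before revisions

# Cell-level error rates reported in spreadsheet / data-entry studies
for rate in (0.008, 0.018, 0.028):
    expected_errors = cells * rate
    # Probability that at least one cell somewhere is wrong,
    # assuming independent errors at this rate
    p_at_least_one = 1 - (1 - rate) ** cells
    print(f"rate {rate:.1%}: ~{expected_errors:.0f} wrong values, "
          f"P(at least one error) = {p_at_least_one:.6f}")
```

Even at the lowest published rate (0.8%), the expected count is ~140 wrong values, and the probability that the matrix contains at least one error is indistinguishable from certainty. "You only need one" is not rhetoric; it is what the numbers say.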

Overlapping reviews can leave a “no one owned this” gap

More reviews aren't the same as better coverage. The Inquiry highlights a lack of clarity about the matrix's purpose and whether compliance with it was mandatory: even though parts were reviewed and signed off, inconsistencies remained in later versions.

That’s diffusion of responsibility in action: in groups, accountability can dilute (“I assumed someone else checked it”).

BIM interface failures are where small mismatches turn into rework

BIM coordination failures still tend to surface at interfaces: MEP vs structure vs architecture. Research shows many coordination issues go "beyond the clash" (e.g., modelling assumptions and semantics), so they can slip past basic clash detection; published work on MEP coordination strategies also shows coordination choices can drive schedule impacts.

Libraries help standardise components, not prove project compliance

Centralised object/component libraries improve consistency and reuse – and BIM object standards explicitly aim to improve object quality and consistency. But they don’t answer the bigger question: does today’s model meet today’s requirements?

Requirements change (client decisions, derogations, authority feedback, standards updates – including healthcare ventilation guidance such as HTM/SHTM 03-01). Under UK information management practice (e.g., ISO 19650), teams are pushed to define information requirements and manage information consistently – but you still need a practical way to verify compliance at speed. Academic work on BIM requirement validation highlights continuous compliance needs, and the limits of human-only validation.

The shift that cuts defects fastest: from memory to validation

The fastest path to fewer defects is treating validation like software: repeatable rules, versioned requirements, and results linked back to sources. That means checks that run consistently across BIM models and documents (including PDFs and schedules), not just on someone’s checklist memory.
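What "repeatable rules" means in practice can be shown in a few lines. This is a hypothetical sketch, not Tektome's actual rule engine: the room records, the `REQUIRED_ACH` requirements dict, and the `check_ventilation` helper are all made up for illustration, but the shape is the point: requirements live in one versioned structure, and the check runs the same way every time:

```python
# Hypothetical rule-based validation sketch. Room data and requirements
# are invented for illustration; a real pipeline would pull them from
# BIM models and versioned requirement sources.
REQUIRED_ACH = {"critical care": 10, "general ward": 6}  # air changes per hour

rooms = [
    {"id": "PCC-101", "type": "critical care", "ach": 4},   # the kind of error that survived review
    {"id": "PCC-102", "type": "critical care", "ach": 10},
    {"id": "GW-201",  "type": "general ward",  "ach": 6},
]

def check_ventilation(rooms, requirements):
    """Return (room id, actual, required) for every room below its required air-change rate."""
    failures = []
    for room in rooms:
        required = requirements.get(room["type"])
        if required is not None and room["ach"] < required:
            failures.append((room["id"], room["ach"], required))
    return failures

for room_id, actual, required in check_ventilation(rooms, REQUIRED_ACH):
    print(f"{room_id}: {actual} ACH, requires {required}")
```

A human reviewer might or might not catch PCC-101 on the third pass; a rule like this catches it every run, on every revision, and the failure traces back to a named requirement rather than someone's memory.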

This is the direction we’re building towards at Tektome: automated checking across BIM (e.g., Revit/IFC) and project documents, aiming to scale quality checks without relying on humans to remember every edge case.

| Approach | Scalability | Traceability | Speed | Error reduction | Implementation cost |
| --- | --- | --- | --- | --- | --- |
| Manual review | Low | Medium | Slow at scale | Variable | Low tool / high labour |
| Centralised libraries | Medium | Medium | Medium | Helps consistency | Medium (governance) |
| Automated validation | High | High | Fast after setup | High on repeatable rules | Medium–high upfront |

The firms that win the next decade won’t be the ones who hire the biggest checking teams – they’ll be the ones who embed automated design validation into everyday delivery.

If you're a multidisciplinary firm and this feels familiar, we'd love to compare notes. Get in touch with Tektome to discuss piloting automated design validation on a live package, or sign up below to get notified when the product is ready for public release.
