The O&M Reality Check — Part 3: Generating O&Ms
In Parts 1 and 2, I made the case that many O&M submissions are late, inconsistent, and difficult to trust, and that the cost of “fixing reality after handover” is often hidden in re-surveys, rework, and operational friction.
Part 3 is about the obvious question that follows:
If traditional O&M compilation fails so often… what would a better production method look like?
The shift: from documents to information products
Most O&M processes still revolve around assembling a binder (even if it’s a PDF). But building operators don’t maintain PDFs—they maintain assets, systems, risks, and obligations.
So instead of treating O&M as a document, treat it as a set of information products, generated from a governed dataset (sketched in code after this list):
- registers (assets, warranties, certificates, spares)
- evidence links (what proves what, where)
- operational instructions (what to do, when, and why)
- a “manual” as a rendered view of the dataset, not the master copy
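To make “governed dataset” concrete, here is a minimal sketch of the underlying data model. Everything in it (the Asset and EvidenceLink names, the fields) is an illustrative assumption, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceLink:
    doc_id: str        # the certificate / test sheet in the document store
    page: int | None   # the exact page or section that proves the claim
    claim: str         # what this evidence is supposed to demonstrate

@dataclass
class Asset:
    tag: str                      # unique maintainable-asset tag
    system: str                   # parent system, e.g. "AHU-01"
    location: str
    warranty_expiry: str | None   # ISO date, if known
    evidence: list[EvidenceLink] = field(default_factory=list)

# Registers, schedules and the compiled "manual" are all rendered views
# over a list[Asset] -- the dataset, not the PDF, stays the master.
```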
Where ELT fits
This is where an ELT approach (Extract → Load → Transform) becomes powerful: you load raw source evidence early, then generate and regenerate outputs as the project evolves. In outline, with a code sketch after the list:
- Extract: documents, models, schedules, certificates, test data
- Load: land it with versioning + metadata + provenance
- Transform: map into a usable schema (assets/systems/docs), validate rules, build registers
- Publish: generate O&M outputs (PDF packs + hyperlinked indexes + COBie/asset data exports)
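A rough sketch of the first two steps, assuming a simple file-based evidence lake. The layout, field names, and the placeholder transform rule are all illustrative assumptions, not a real platform’s API:

```python
import hashlib
import json
import pathlib
import shutil
from datetime import datetime, timezone

LAKE = pathlib.Path("evidence_lake")  # assumed file-based store

def load(source: pathlib.Path, package: str) -> dict:
    """Load: land the raw file untouched, with metadata and provenance."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    dest = LAKE / digest[:12] / source.name  # content-addressed: re-loads are harmless
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, dest)
    meta = {
        "original_name": source.name,
        "package": package,
        "sha256": digest,                                    # provenance
        "loaded_at": datetime.now(timezone.utc).isoformat(),
        "status": "received",                                # not yet validated
    }
    dest.with_name(dest.name + ".meta.json").write_text(json.dumps(meta, indent=2))
    return meta

def transform(raw_records: list[dict]) -> list[dict]:
    """Transform: map raw records into the asset/system/doc schema.
    The raw evidence is immutable, so this step can be re-run freely
    as mappings and rules improve."""
    return [r for r in raw_records if r.get("package")]  # placeholder mapping rule
```

The point of the content-addressed layout is that loading the same certificate twice changes nothing, and every downstream register can cite the exact hash it was generated from.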
What you can generate (partially or fully)
Once you have the dataset, these are realistic candidates for automation:
- Document registers (what exists, revision, status, linked package/system)
- Warranty/guarantee registers (term, start date, evidence, conditions; see the sketch after this list)
- Test & commissioning schedules (required vs received, linked evidence)
- Spares schedules (per maintainable asset type)
- Planned maintenance schedules (task libraries + frequencies + safety notes)
- Hyperlinked O&M indexes that jump to the exact evidence page/section
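To pick one from that list: a warranty register becomes a straightforward projection of the dataset. A minimal sketch, assuming assets arrive as plain dicts with these (illustrative) keys:

```python
import csv
from datetime import date, timedelta

def warranty_register(assets: list[dict], out_path: str = "warranty_register.csv") -> None:
    """Project the governed dataset into a warranty register.
    Every row keeps a link back to the evidence proving the term."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[
            "tag", "system", "warranty_start", "warranty_end", "evidence_doc"])
        writer.writeheader()
        for a in assets:
            start = date.fromisoformat(a["warranty_start"])
            end = start + timedelta(days=round(365.25 * a["warranty_years"]))
            writer.writerow({
                "tag": a["tag"],
                "system": a["system"],
                "warranty_start": start.isoformat(),
                "warranty_end": end.isoformat(),
                "evidence_doc": a.get("evidence_doc", "MISSING"),  # surface gaps, don't hide them
            })
```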
What typically remains “human-authored” (at least for now):
- project-specific narratives and method statements
- nuanced operational sequences and safety-critical interpretations
- edge-case reconciliation (“this certificate covers these 3 systems… except it doesn’t”)
So why aren’t we doing this everywhere?
Because the biggest blockers aren’t technical—they’re contractual.
The contractual barriers that kill automation
- The deliverable is defined as a “manual” rather than “verified information”
  If the contract expects a compiled PDF, automation becomes a risky “nice to have”, not the delivery method.
- Acceptance criteria are vague
  If requirements are written in prose, you can’t reliably derive pass/fail checks, and acceptance turns into an argument at the end.
- Liability is unclear
  If a generated register is wrong, who owns the risk: the trade, the principal contractor, or the platform?
- Supply chain obligations don’t align to data quality
  If payment is tied to installation progress, not information quality, data arrives late and messy.
- IP and reuse rights
  You can’t confidently extract, transform, and re-publish supplier content if the rights to do so aren’t explicit.
How to overcome the barriers (practical contract/process moves)
Here are changes that make ELT-generated O&M feasible without turning the project into a science experiment:
- Define deliverables as “information products”
  - Contract for: (1) a governed dataset, (2) generated registers/schedules, (3) a compiled manual as a rendered output.
- Make requirements machine-checkable, without making them brittle.
  buildingSMART IDS is a great step forward because it expresses information requirements in a computer-interpretable form that can be automatically checked against IFC.
  But in the real world, IDS can feel too prescriptive if it’s used as a giant rulebook instead of a focused requirement set. That’s why a Client AIR is often the better starting point: it’s narrower, aligned to operations, and easier to agree. The key is that the AIR should still be authored in the same kind of computer-interpretable format, so validation is automated and repeatable. This is the approach we’ve taken at Activeplan with our AIR application (a minimal sketch of such a rule follows after this list).
- Separate “authorship liability” from “aggregation/transformation”
  - Trades warrant the correctness of their data; the integrator warrants traceability and faithful transformation.
- Progressive delivery + progressive acceptance
  - Don’t wait for a handover “data dump”. Accept and reject information in waves, like any other quality process.
- Tie payment/retention to information quality
  - Pay for passing checks: mandatory fields, evidence links, tag uniqueness, revision status, etc.
- Explicit rights to transform and reuse
  - Add clauses allowing extraction/transform and republishing of supplier content for the Asset Information Model / FM systems.
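As flagged above, here is a minimal sketch of a requirement expressed as data rather than prose. The rule format and field names are invented for illustration (this is not IDS syntax); the point is that one rule set can drive validation, the defect loop, and the pass/fail gate that payment hangs off:

```python
# Illustrative, computer-interpretable rules -- NOT buildingSMART IDS syntax.
RULES = [
    {"id": "AIR-001", "field": "tag",          "check": "present_and_unique"},
    {"id": "AIR-002", "field": "warranty_end", "check": "present"},
    {"id": "AIR-003", "field": "evidence_doc", "check": "present"},
]

def run_checks(assets: list[dict]) -> list[dict]:
    """Return failures as 'information snags': each one can be assigned,
    fixed, and re-checked, exactly like a physical defect."""
    failures: list[dict] = []
    seen: set[str] = set()
    for rule in RULES:
        for asset in assets:
            value = asset.get(rule["field"])
            if not value:
                failures.append({"rule": rule["id"], "asset": asset.get("tag")})
            elif rule["check"] == "present_and_unique":
                if value in seen:
                    failures.append({"rule": rule["id"], "asset": value})
                seen.add(value)
    return failures

def package_passes(assets: list[dict]) -> bool:
    """The acceptance (and payment) gate is mechanical, not an argument."""
    return not run_checks(assets)
```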
What this looks like in practice (a simple roadmap)
- Step 1: Standardise metadata (package, system, location, revision, status)
- Step 2: Build the “evidence lake” (raw docs + extracted text + indexes)
- Step 3: Implement rule checks and a defect loop (like snags, but for information)
- Step 4: Generate registers and hyperlinks back to evidence (sketched after this list)
- Step 5: Only then compile the PDF/manual—because it’s now a view, not a mess
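To illustrate Step 4, a minimal sketch of a hyperlinked index, assuming each asset record carries an evidence_path into the lake (HTML is just one rendering choice):

```python
import html

def build_index(assets: list[dict], out_path: str = "om_index.html") -> None:
    """Render a hyperlinked O&M index: every row jumps straight to its
    evidence in the lake, so the manual is a view, not the master."""
    rows = []
    for a in sorted(assets, key=lambda a: a["tag"]):
        link = f'<a href="{html.escape(a["evidence_path"])}">evidence</a>'
        rows.append(f"<tr><td>{html.escape(a['tag'])}</td>"
                    f"<td>{html.escape(a['system'])}</td><td>{link}</td></tr>")
    table = ("<table><tr><th>Tag</th><th>System</th><th>Evidence</th></tr>"
             + "".join(rows) + "</table>")
    with open(out_path, "w") as f:
        f.write(table)
```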
Closing
If we want trustworthy O&M handover, we need to stop treating it as end-of-job admin and start treating it like information engineering—with contracts, responsibilities, and acceptance tests to match.
Part 4 (if you want it) is the natural next step: how to define the minimum viable “trust score” for handover information so owners can accept, operate, and improve it instead of re-surveying it.