
The AI readiness assessment you actually need (before you build anything)

Most organisations that come to us thinking they’re ready to build AI are at least partially wrong. That’s not a slight — it’s just what the work reveals when you look closely. The gap is almost never where they expect it to be. The most useful thing we can do before any build starts is an honest AI readiness assessment. Not a checklist. Not a maturity model with five colour-coded bands. An actual look at whether your data, your governance, and your organisational habits can support AI that behaves the way you need it to. Here’s what we’ve learned doing that work in practice.

Clean data is not the same as AI-ready data

We worked with a client who came to us confident. They had a centralised, well-maintained document library: years of accumulated knowledge, properly stored, no obvious mess. By most conventional measures, they were data-ready.

What the assessment uncovered was a different problem entirely. The knowledge base had no mechanism for handling conflicting truths. In their domain, Document A might establish a standard practice, and Document B, published six months later, might contraindicate that standard for a specific context. The client’s system stored both as separate, equal data points with no temporal hierarchy between them. When we mapped what this meant for a RAG-based AI, the answer was uncomfortable. Asked a question that touched both documents, the model would retrieve both and attempt to synthesise them. The output wouldn’t be wrong in an obvious way; it would be a plausible middle ground that was wrong in ways that mattered.

The fix we proposed was overlaying a knowledge graph to establish relationships and precedence between sources, so the model could reason about which guidance applied in which context. The client is still working toward implementing it. Until they do, AI is off the table for anything user-facing. The gap wasn’t technical; the data was clean by the standards it was built to. The gap was a governance one: nobody had designed the knowledge base to support AI reasoning across conflicting sources.
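For illustration, here’s a minimal sketch of what precedence between conflicting sources can look like once it’s made explicit. The document IDs, context label, and the single `PRECEDENCE` edge are all hypothetical; the client’s actual knowledge graph work is considerably more involved than this.

```python
from dataclasses import dataclass

# Hypothetical documents -- the IDs, dates, and relationship below are
# illustrative, not taken from the client engagement described above.
@dataclass
class Doc:
    doc_id: str
    published: str  # ISO date; useful as a temporal tiebreaker
    content: str

# Precedence edges: (winner, loser, context) means `winner` overrides
# `loser` whenever the query matches `context`; "*" means "always".
PRECEDENCE = [
    ("doc_b", "doc_a", "specific_context"),
]

def resolve(retrieved: list[Doc], query_context: str) -> list[Doc]:
    """Drop any retrieved document that a higher-precedence source
    overrides in this context, instead of letting the model blend both."""
    overridden = {
        loser
        for winner, loser, ctx in PRECEDENCE
        if ctx in ("*", query_context)
        and any(d.doc_id == winner for d in retrieved)
    }
    return [d for d in retrieved if d.doc_id not in overridden]
```

The data structure matters less than where it lives: precedence has to sit somewhere the retrieval pipeline can consult it, rather than in the heads of subject-matter experts.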

Volume is not rigour

The most common misconception I see is that data quantity signals data quality. Eight years of transaction history sounds impressive, and sometimes it is. But AI doesn’t just need data — it needs data with the right structural properties. Two things tend to be missing even in organisations with large, well-kept datasets. The first is structural integrity: metadata that’s consistent, records that are properly indexed and labelled, fields that mean the same thing across systems. The second is temporal relevance — an honest answer to how old data can get before it loses its predictive or contextual value. For some use cases, three-year-old data is fine. For others, anything beyond 12 months is noise. Most organisations haven’t made that call explicitly, which means the AI will make it implicitly, and not always wisely.
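One way to make that call explicit is to encode a freshness window per use case and enforce it before anything reaches the index. A minimal sketch, with hypothetical use-case names and cutoffs:

```python
from datetime import datetime, timedelta, timezone

# Freshness windows per use case. The names and numbers are illustrative;
# the point is that the cutoff is an explicit, reviewable decision.
MAX_AGE = {
    "demand_forecasting": timedelta(days=365),  # beyond 12 months is noise
    "policy_lookup": timedelta(days=3 * 365),   # slower-moving domain
}

def is_temporally_relevant(record_date: datetime, use_case: str) -> bool:
    """Return True if a record is fresh enough for this use case.
    Records that fail the check never reach the index, so the model
    can't make the staleness call implicitly. Assumes `record_date`
    is timezone-aware (UTC)."""
    cutoff = datetime.now(timezone.utc) - MAX_AGE[use_case]
    return record_date >= cutoff
```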

Human-in-the-loop is a feature, not a fallback

The other assumption that regularly causes problems is the idea that once AI is deployed, you can step back and let it run. That’s not how production AI works in high-stakes environments — and it’s not how it should work. Human-in-the-loop feedback mechanisms aren’t a workaround for a model that isn’t confident enough. They’re a deliberate design choice that keeps the system honest over time, catches edge cases before they compound, and builds the institutional trust that makes AI adoption stick. Organisations that skip this because it feels like overhead tend to find themselves with a model that drifts quietly out of alignment with what the business actually needs.
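In practice this often amounts to a routing rule that decides, per answer, whether a human sees it before the user does. A hedged sketch; the threshold and the `ModelOutput` fields are illustrative, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    answer: str
    confidence: float  # retrieval or reranker score in [0, 1]
    high_stakes: bool  # e.g. user-facing, regulated domain

REVIEW_THRESHOLD = 0.85  # illustrative; tune against your own evaluation data

def route(output: ModelOutput) -> str:
    """Decide whether an answer ships directly or goes to a reviewer.
    The review queue is a designed pathway, not an error handler:
    every reviewed item becomes labelled feedback for drift monitoring."""
    if output.high_stakes or output.confidence < REVIEW_THRESHOLD:
        return "human_review_queue"
    return "auto_release"
```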

The real work isn’t the AI — it’s the evaluation framework

What surprised us most on that project wasn’t the complexity of the source material. It was how fragile the knowledge base turned out to be under the conditions of AI retrieval. Deep in a 50-page document, there might be a single sentence (a specific exception or qualifying condition) that changes the correct answer entirely. Standard RAG retrieval finds the right document. It doesn’t guarantee it finds that sentence, or weights it appropriately against the broader content. We built a tiered retrieval system that scored sources by authority before surfacing them, so the model wasn’t treating a general rule and a specific exception as equivalent inputs. That work, building the evaluation framework to catch the model when it was being helpful rather than accurate, accounted for roughly 80% of the total project effort. The AI itself was a relatively small part of the build. That ratio surprises people. It shouldn’t.
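To make the tiering idea concrete, here’s a simplified sketch of authority-aware re-ranking. The tier names and the flat chunk shape are assumptions for illustration; the production system scored sources along more dimensions than this.

```python
# Hypothetical authority tiers. Specific exceptions outrank general
# guidance so the two are never treated as equivalent inputs.
AUTHORITY = {
    "specific_exception": 3,
    "current_standard": 2,
    "background_material": 1,
}

def rerank(chunks: list[dict]) -> list[dict]:
    """Order retrieved chunks by authority tier first, similarity second.
    Each chunk is assumed to look like:
    {"text": "...", "tier": "current_standard", "similarity": 0.82}"""
    return sorted(
        chunks,
        key=lambda c: (AUTHORITY[c["tier"]], c["similarity"]),
        reverse=True,
    )
```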

What a real AI readiness assessment looks at

When we run an AI readiness assessment with a client, we’re looking at a few things that rarely appear on standard checklists:
  • Does your data have structural integrity that supports the use case — not just volume?
  • Have you defined temporal relevance for your data, and is it enforced anywhere?
  • Where your knowledge base contains conflicting information, does any mechanism exist to establish precedence?
  • Do you have a plan for human oversight once the system is live — and is it resourced?
  • Can you build an evaluation framework that tests for accuracy, not just coherence?
The answers to those questions tell us more about genuine AI readiness than any data audit or maturity score.
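On the last of those questions: the simplest useful version of an accuracy test checks whether an answer carries the facts that matter, not whether it reads fluently. A minimal sketch, with a placeholder gold case and a stand-in `ask_model` function:

```python
# The gold case and `ask_model` callable are placeholders for your own
# harness; the structure is the point, not the contents.
GOLD_CASES = [
    {
        "question": "Does the standard practice apply in the special context?",
        "must_contain": ["does not apply"],         # the buried exception
        "must_not_contain": ["generally applies"],  # the plausible blend
    },
]

def evaluate(ask_model) -> float:
    """Score answers on whether they carry the facts that matter,
    not on whether they read fluently."""
    passed = 0
    for case in GOLD_CASES:
        answer = ask_model(case["question"]).lower()
        ok = all(s in answer for s in case["must_contain"]) and not any(
            s in answer for s in case["must_not_contain"]
        )
        passed += ok
    return passed / len(GOLD_CASES)
```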

Not sure if you’re actually ready? Let’s find out.

If you’re planning an AI build and want an honest read on where you actually stand, we’re happy to have that conversation. Get in touch.

Frequently asked questions

What’s the difference between an AI readiness assessment and a standard data audit?

A data audit checks whether your data is well-organised, accurate, and consistent. An AI readiness assessment goes further — it looks at whether the structure of that data, your governance, and your operational habits can support a model that needs to reason, retrieve, and stay aligned over time. Plenty of organisations pass a data audit and still aren’t ready for AI.

Can our data be “clean” but still not AI-ready?

Yes — and it’s the most common scenario we see. Clean data means the records are accurate by the standards they were built to. AI-ready data means the structure also supports things like resolving conflicts between sources, establishing temporal precedence, and weighting authority — none of which traditional data quality measures cover.

How much human oversight do we actually need once AI is live?

More than most teams plan for. Human-in-the-loop isn’t a fallback for a weak model — it’s how you catch drift, surface edge cases, and build institutional trust over time. Treat it as a resourced ongoing function, not a launch-week safety net you can quietly retire.

Do all AI projects need a knowledge graph?

No. Knowledge graphs earn their place when your sources contain overlapping or contradictory information that needs hierarchy — for example, where one document supersedes another, or where context determines which guidance applies. For simpler retrieval problems, standard RAG against well-structured data is often enough.

How long does an AI readiness assessment take?

Typically two to four weeks, depending on the size of the knowledge base and how many use cases are in scope. The output is a clear picture of where your data, governance, and oversight stand against the demands of your intended AI build — and what needs to be true before you start building.

This blog was written by Jeremy Ng, Practice Lead – Data & AI Strategy

About EdgeRed

EdgeRed is an Australian AI and data consultancy, part of The Omnia Collective group, with teams in Sydney and Melbourne. We build things that work in production — agentic AI, machine learning, data engineering, and Microsoft Fabric implementation. 250+ projects. 100+ clients. 100% Australian on-shore team.
