How we scope an AI automation project
Most AI projects fail in scoping, not delivery. Here is the 30-minute call we use, the 5 questions we always ask, and what comes after.
4 May 2026
Most AI projects that fail do not fail because the technology did not work. They fail because nobody agreed on what success looked like before the build started.
A client comes in wanting to automate their invoice processing. Six months later, they have a working system that handles clean PDFs from three suppliers — and a team that still manually processes everything else. The technology worked. The scoping did not.
This is the single biggest risk in AI automation for professional services firms. The problem is rarely the model or the integration. It is the gap between "we want to automate X" and a clear, testable definition of what automating X actually means.
The 30-minute scoping call
We do not start with a proposal. We start with a call.
Thirty minutes. No slides. The goal is to find out whether you have a workflow worth automating — and if so, what the edge of that workflow actually is.
Most firms come in with a vague mandate ("we want to cut admin time") or an overly specific one ("we want AI to read our emails"). The call surfaces the real constraint. Often it is not what anyone expected.
The 5 questions we always ask
1. What does a tricky case look like?
Every workflow has exceptions. A supplier who sends invoices in a non-standard format. A client whose address changes every few months. A query that sits across two departments.
If you cannot describe a tricky case, the workflow is probably not well understood yet. If every case is tricky, automation is probably the wrong tool.
2. What error rate is acceptable?
AI systems make mistakes. The question is whether those mistakes are tolerable given the stakes.
For a firm processing 500 supplier invoices a month, a 2% error rate means 10 invoices a month need human review. That might be fine. For a firm processing payroll adjustments, 2% might be catastrophic.
Getting an honest answer here early saves months of painful iteration later.
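The arithmetic behind that trade-off is simple enough to sketch. A minimal, illustrative snippet (the numbers and function name are ours, not a recommendation):

```python
def monthly_review_load(volume: int, error_rate: float) -> float:
    """Expected number of items per month that need human review."""
    return volume * error_rate

# 500 supplier invoices a month at a 2% error rate
print(monthly_review_load(500, 0.02))  # 10.0 invoices for a human to check
```

The point of running this back-of-envelope calculation in the call is not the number itself. It is whether the firm, seeing the number, says "fine" or "unacceptable" — that answer sets the review process the build has to include.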
3. What systems will we touch?
Xero. MYOB. Dext. A shared drive. An email inbox. A custom spreadsheet someone built in 2017 that nobody fully understands.
The system landscape determines 80% of the build complexity. Two workflows that look identical on paper can be completely different engineering problems depending on where the data lives and what APIs are available.
4. Who blocks AI decisions in your business?
Every firm has someone — or several people — whose sign-off is required before an automated output becomes action. Understanding that approval chain early tells us whether full automation is realistic, or whether we are building a triage and drafting tool that still has a human in the loop.
Both are valid. But they are different projects.
5. What does success look like in 90 days?
Not in five years. Not "we want to be an AI-first firm." Ninety days. What specific, measurable thing will have changed?
Firms that cannot answer this question are not ready to start. Firms that can answer it clearly almost always end up with a better outcome.
A concrete example
A Surry Hills accounting firm came to us wanting to automate "everything client-facing." They were spending roughly 12 hours a week on what they described as repetitive admin.
The first question that mattered: what does a tricky case look like?
The answer: clients who do not send their source documents through Dext, who instead email PDFs directly to the partner. About 30% of their client base.
That single answer changed the whole picture. The 70% who used Dext were already reasonably structured — their workflow was automatable. The 30% who emailed PDFs required a separate extraction and classification step before any automation could start.
The scoping call turned "automate everything" into a more honest brief: a Dext-integrated document triage and classification workflow for the 70% majority, with a manual review step retained for the 30% exception group. One project, not two. Achievable in eight weeks.
That is the brief we wrote. One page. Not a 30-page deck.
The output
A scoping brief is not a proposal. It is not a quote. It is a shared understanding of what the problem actually is.
Ours run one to two pages. They cover the workflow in plain language, the systems involved, the edge cases, the acceptable error rate, and the 90-day success metric. If we cannot write it in two pages, the scope is not clear yet.
We do not send the brief with a dollar figure attached. We review it with you first. Because a good brief usually surfaces one or two things that change the estimate.
What happens after the call
If the brief is agreed, we run a Quickstart: a fixed-scope build that delivers a working proof within four to six weeks. If it is not the right time — the data is messy, a key staff member is leaving, a compliance question is unresolved — we say so, and we suggest what needs to happen first.
We do not build things that are not ready to be built.
---
If you have a workflow you want to scope, we run 30-minute scoping calls. They are free. We will not pitch you. [Book one →](/p/scoping)