How to Build the Internal Business Case for Enterprise AI
Enterprise AI business cases often generate enthusiasm but fail to generate approval. This article sets out what a business case must cover to withstand scrutiny from finance, risk, and executive leadership.
The project sponsor presents to the investment committee. The slides show productivity gains. There is a reference to industry adoption rates. The projected ROI is positive by month eighteen.
The CFO asks how the productivity gains were calculated. The answer involves an assumption about hours saved per user per week, multiplied across the user base, converted to a dollar figure using average salary cost. The CFO asks whether those hours will translate to headcount reduction or cost avoidance. The sponsor says the benefit will be realised through redeployment of staff to higher-value work.
The committee defers the decision. They want a more rigorous cost model and clearer benefit attribution before they approve.
This is a familiar pattern. Enterprise AI business cases often succeed at generating enthusiasm and fail at generating approval. The gap is not usually in the technology or the use case. It is in the financial and operational rigour of the case itself. Finance and risk functions are being asked to approve significant, multi-year commitments on the basis of benefit projections that cannot be verified and cost models that do not reflect what deployment will actually require.
This article is written for IT leaders, procurement managers, and business sponsors in Australian organisations who are building the internal business case for enterprise AI investment and need a structure that will withstand scrutiny from finance, risk, and executive leadership.
Why Enterprise AI Business Cases Fail Internal Scrutiny
Enterprise AI business cases tend to fail for one of three reasons.
The first is benefit inflation. Benefits are projected at the upper end of the plausible range, often using illustrative calculations that treat assumptions as facts. Hours saved are assumed to translate directly to cost reduction. Productivity improvements are assumed to be uniform across users. Adoption is assumed to reach full deployment within a short timeframe. Finance functions that have seen these projections before know that assumptions at the optimistic end of the range rarely materialise as modelled, and they discount accordingly.
The second is cost underestimation. The business case models licence cost and implementation cost, but omits consumption, governance and operational cost, ongoing internal resourcing, training and change management, integration costs that were not scoped during planning, and the cost of model lifecycle events such as migrations and regression testing. The true total cost of ownership is substantially higher than the vendor's quote, and a business case built on the vendor's quote will be revised upward during implementation in ways that damage credibility.
The third is unclear accountability. The business case identifies a sponsor and a budget holder but does not identify who is accountable for delivering the projected benefits, who owns the AI operationally after go-live, and who is responsible for governance and compliance. Finance committees are approving an investment, not a technology deployment. If the case does not identify who is accountable for the return, the committee has no basis for confidence that the return will be pursued.
A business case that addresses all three of these weaknesses does not guarantee approval. It does guarantee that the questions finance will ask have answers.
What the Business Case Must Cover
The Problem Statement
The business case opens with a clear, specific description of the problem the AI investment is intended to address. Not a general aspiration toward efficiency or innovation, but a defined operational problem with a measurable current state.
A problem statement that works: a specific workflow that currently requires a defined number of hours per week across a specific team, produces outputs at a known error rate, and creates a downstream cost or delay that can be quantified. The AI investment addresses this problem by changing how the workflow operates.
A problem statement that does not work: the organisation wants to improve operational efficiency and remain competitive in an AI-enabled market. This is a direction, not a problem. It does not give finance a basis for assessing whether the investment addresses a real cost or risk.
The problem statement also defines the scope of the initial deployment. Enterprise AI investments that are approved for a broad, organisation-wide transformation programme typically face higher scrutiny and longer approval cycles than those approved for a specific, bounded use case with clear success criteria. Starting with a defined scope does not limit the long-term ambition. It creates a credible foundation for it.
Use Case Definition
The business case translates the problem statement into a defined use case: what the AI will do, in what workflow, for which users, producing what output. Use case and MVP definition at this level of specificity is required before cost or benefit can be modelled with any reliability.
Organisations that have done the pre-procurement work of defining their AI requirements before engaging vendors typically have use cases defined at the level of specificity a business case requires. Organisations that have not done this work often find that use case definition is the hardest part of building the business case, because the investment being approved is not yet sufficiently defined to cost or assess.
If the use case cannot be described specifically enough to model its cost and benefit, the business case is not ready. The organisation needs to complete use case definition before building the financial case, not during it.
Cost Modelling
Cost modelling for enterprise AI must reflect the full cost of deploying and operating the platform, not just the vendor's licence price.
The cost model should address five categories:
Platform and licence costs. The vendor's proposed commercial structure, including the licence tier required for the use case, any consumption-based components modelled at the projected usage profile, and the cost trajectory over the contract term if pricing escalates or usage grows.
For platforms with usage-based pricing, consumption costs (such as token-based API usage or query-based pricing) should be modelled based on expected workflow volume, not pilot usage. These costs can scale materially at enterprise volumes and should be tested against high-usage scenarios, with appropriate cost controls or caps considered where available.
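The scaling effect described above can be sketched with a simple projection. All prices and volumes below are hypothetical placeholders for illustration, not any vendor's actual rates; the point is that consumption cost at enterprise volume is orders of magnitude above what a pilot suggests.

```python
# Illustrative sketch only: every figure here is an assumption to be
# replaced with the organisation's own usage data and quoted rates.

def monthly_consumption_cost(
    queries_per_user_per_day: float,
    active_users: int,
    tokens_per_query: int,
    price_per_1k_tokens: float,
    working_days: int = 21,
) -> float:
    """Project monthly token cost at a given workflow volume."""
    monthly_tokens = (
        queries_per_user_per_day * active_users * tokens_per_query * working_days
    )
    return monthly_tokens / 1000 * price_per_1k_tokens

# Pilot usage (20 users) versus enterprise volume (2,000 users),
# plus a high-usage stress scenario at the same user count.
pilot = monthly_consumption_cost(10, 20, 3000, 0.02)
scaled = monthly_consumption_cost(10, 2000, 3000, 0.02)
stress = monthly_consumption_cost(25, 2000, 3000, 0.02)

print(f"Pilot:      ${pilot:,.0f}/month")
print(f"Enterprise: ${scaled:,.0f}/month")
print(f"High usage: ${stress:,.0f}/month")
```

Because cost is linear in every input, a pilot understates enterprise spend by exactly the ratio of the user bases; the stress scenario shows why usage caps or cost controls are worth negotiating where available.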
Implementation and integration costs. The cost of deploying the platform, configuring it for the organisation's use case, and integrating it with existing systems. These costs are often underestimated because the scope of integration is not fully understood until implementation is underway. Where integration requirements have not been technically validated, the cost model should include a contingency that reflects the risk of scope expansion.
Organisations with strict data residency or security requirements may also need to operate the platform within their own cloud environment. Where vendors support this, the infrastructure and operational costs are typically higher than the standard hosted model and should be factored in where applicable.
Governance and operational costs. The ongoing internal cost of running governance processes: monitoring, regression testing, impact assessment when model updates occur, compliance documentation, and vendor relationship management. These costs are addressed in detail in the article on the cost of enterprise AI governance. They are rarely included in vendor quotes and frequently omitted from business cases.
Change management and training costs. The cost of preparing the organisation to use the AI effectively, including training, communication, process redesign, and the productivity dip that typically occurs during the adoption period before users develop competence and confidence with the system.
Exit and contingency costs. The cost of switching vendors if the platform does not perform as expected, or the cost of migration if the vendor deprecates the product or is acquired. These costs are not expected to be incurred, but a credible business case acknowledges that they exist and that the organisation has assessed them.
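Pulling the five categories together, the shape of a total cost of ownership model can be sketched as follows. Every figure is a hypothetical assumption inserted for illustration; the structure, not the numbers, is the point. The comparison line shows how far a licence-only view understates the true commitment.

```python
# Hypothetical figures for illustration; replace with the
# organisation's own estimates before use in a business case.

ANNUAL_COSTS = {
    "platform_and_licence": 400_000,
    "consumption": 120_000,
    "governance_and_operations": 150_000,
    "change_management_and_training": 80_000,  # a fuller model would front-load this
}
ONE_OFF_COSTS = {
    "implementation_and_integration": 350_000,
    "integration_contingency": 70_000,   # reflects unvalidated integration scope
    "exit_provision": 100_000,           # assessed, not expected to be incurred
}

def total_cost_of_ownership(years: int) -> float:
    """Recurring costs over the contract term plus one-off costs."""
    return sum(ANNUAL_COSTS.values()) * years + sum(ONE_OFF_COSTS.values())

print(f"Three-year TCO:    ${total_cost_of_ownership(3):,.0f}")
print(f"Licence-only view: ${ANNUAL_COSTS['platform_and_licence'] * 3:,.0f}")
```

A model in this shape also makes contingencies explicit line items, which is easier to defend to a finance committee than a single padded total.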
Benefit Quantification
Benefit quantification is where business cases most frequently lose credibility. The discipline required is to separate benefits that can be directly attributed and measured from benefits that are real but not directly quantifiable, and to present both categories honestly rather than converting the second into the first through optimistic assumptions.
Directly quantifiable benefits are those where the causal link between the AI and the outcome is clear and measurable. Time saved on a specific task, reduced error rate in a defined output, reduction in a specific cost category that the AI directly addresses. These benefits can be modelled using current-state data and realistic adoption assumptions.
Indirectly quantifiable benefits include improvements in decision quality, faster response to opportunities, and better customer outcomes. These are real, but the causal chain between the AI and the financial outcome is longer and less direct. Presenting these as a specific dollar figure undermines the credibility of the directly quantifiable benefits. Presenting them as expected but difficult-to-attribute outcomes is honest and, in most finance functions, more persuasive.
Benefits that rely on headcount reduction deserve particular scrutiny. If the business case projects cost savings from staff reduction enabled by AI-driven productivity improvement, those projections should reflect the actual headcount plan, not an assumption that productivity gains automatically translate to headcount cost. If the plan is to redeploy staff rather than reduce headcount, the benefit is cost avoidance or output improvement, not cost reduction. These are different financial claims and should be presented as such.
Adoption assumptions have a disproportionate effect on benefit projections. A deployment that reaches 80% adoption produces substantially different benefits from one that reaches 40%. Adoption assumptions should be grounded in the organisation's change management capability and in reference data from comparable deployments, not in the upper end of the vendor's claimed adoption rates.
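The leverage of the adoption assumption can be shown with a minimal benefit calculation. The user count, hours saved, and loaded hourly cost below are placeholder assumptions, not benchmarks; what the sketch demonstrates is that benefit scales linearly with adoption, so halving the adoption rate halves the projected benefit.

```python
# Placeholder assumptions throughout; substitute current-state data
# and a defensible adoption estimate before presenting to finance.

def annual_benefit(
    users: int,
    hours_saved_per_user_per_week: float,
    loaded_hourly_cost: float,
    adoption_rate: float,
    working_weeks: int = 46,
) -> float:
    """Benefit scales linearly with the share of users who actually adopt."""
    return (
        users * adoption_rate
        * hours_saved_per_user_per_week
        * loaded_hourly_cost
        * working_weeks
    )

for adoption in (0.4, 0.6, 0.8):
    benefit = annual_benefit(500, 2.0, 75.0, adoption)
    print(f"{adoption:.0%} adoption: ${benefit:,.0f}/year")
```

Presenting the projection as a range across adoption scenarios, rather than a single figure at the optimistic end, is usually more persuasive to a finance function than a point estimate.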
Risk Assessment
The business case identifies the material risks to successful delivery and the mitigations in place for each. Risk assessment for enterprise AI typically covers four areas.
Delivery risk: the risk that implementation takes longer, costs more, or produces a system that does not meet the use case requirements. Mitigated by clear requirements, structured vendor selection, and an implementation approach with defined milestones and decision points.
Adoption risk: the risk that users do not adopt the system at the level the benefit projection assumes. Mitigated by change management planning, user involvement in design, and a phased rollout that builds confidence before full deployment.
Governance and compliance risk: the risk that the deployment creates regulatory exposure, data handling problems, or audit findings that require remediation. Mitigated by governance requirements built into vendor selection, not retrofitted after deployment.
Vendor risk: the risk that the vendor changes pricing, deprecates the product, or is acquired in ways that affect the organisation's ability to operate the system as planned. Mitigated by contract terms that address these scenarios and by vendor assessment during evaluation.
Accountability and Governance Structure
The business case names the people and functions accountable for delivering each component of the investment: implementation delivery, benefit realisation, operational governance, and vendor relationship management.
Accountability for benefit realisation is the section most commonly omitted. If nobody is named as accountable for achieving the projected hours saved or the projected error rate reduction, the projection is aspirational rather than committed. Finance committees know the difference.
Operational ownership after go-live should also be identified: which team owns the AI once it is live, what their resourcing requirements are, and how governance will be sustained over the contract term. A business case that does not address operational ownership is presenting an incomplete picture of what the organisation is committing to.
Presenting to Finance and Executive Leadership
The structure of the business case document matters less than its ability to answer the questions that finance and risk functions will ask. The questions are predictable:
What problem are we solving and how do we know this investment addresses it?
How confident are we in the cost model and what are the contingencies?
Are the projected benefits realistic and who is accountable for delivering them?
What are the principal risks and how are they being managed?
What does ongoing governance cost and who owns it?
What are our options if this does not work?
A business case that has clear, evidenced answers to each of these questions is in a strong position for approval. A business case that has confident answers to the first two and vague answers to the rest is likely to be deferred.
The length and formality of the business case should match the organisation's investment approval requirements. Some organisations require a full business case document for investments above a threshold. Others require a standard investment proposal format. The content requirements above apply regardless of format.
The Business Case as a Procurement Foundation
A rigorous business case does more than secure approval. It establishes the foundation for the procurement process that follows.
The use case definition becomes the basis for the vendor brief. The cost model establishes the budget envelope for commercial negotiation. The risk assessment identifies the governance and contractual requirements that vendor selection must address. The accountability structure defines who owns the vendor relationship after go-live.
Organisations that build a rigorous business case before engaging vendors enter the enterprise AI procurement process with clarity that produces better vendor responses, more focused evaluations, and stronger contract outcomes. Organisations that build the business case after vendor engagement has begun often find that the case reflects vendor framing rather than organisational requirements.
The business case is not a procurement document. It is the document that makes procurement purposeful.
This article provides general commercial and procurement commentary only and does not constitute legal, financial, regulatory, or professional advice. Organisations should seek appropriate advice specific to their circumstances.