This is the Capital Asset Pricing Model applied to AI spend. It is how a sophisticated investor would build an AI portfolio, and it is the level of discipline that AI capital now deserves. The enterprises that apply it will look back in three years and see that the portfolio construction discipline, more than any single AI initiative, is what produced their compounding advantage.
Capability 3: Track actuals against business cases continuously
Most AI initiatives are approved on a business case that projects benefits six to twenty-four months out. In the intervening period, almost none of those business cases are tracked against reality. The program runs on status updates, milestone reports, and anecdotal feedback. Finance learns whether the benefits showed up when the program closes.
For AI, where adoption curves, model performance, and unit economics shift on a weekly cadence, this is untenable. The half-life of an AI business case is measured in weeks, not years. The only credible governance model is continuous. Realized benefits captured against projected benefits. Trajectory tracked against plan. Corrections made when the gap opens rather than when it has become a write-off.
Enterprises that track AI initiatives continuously catch underperformance early, redirect capital to what is working, and avoid the scenario that will dominate post-mortems in 2027: the multi-year AI program that consumed hundreds of millions and produced an outcome that could have been predicted in month three.
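The mechanics of continuous tracking are simple enough to sketch. The snippet below is a minimal illustration, not a prescribed system: the per-period granularity, the 20% tolerance band, and the field names are all assumptions you would replace with your own business-case conventions.

```python
def track_variance(projected: list[float], realized: list[float],
                   tolerance: float = 0.20) -> dict:
    """Compare cumulative realized benefits against the business case
    for the periods that have actuals so far.

    `tolerance` is an illustrative threshold: a cumulative shortfall
    beyond it flags the initiative for a reallocation review.
    """
    periods = len(realized)              # periods with actuals so far
    planned = sum(projected[:periods])   # what the case promised by now
    actual = sum(realized)
    shortfall = (planned - actual) / planned if planned else 0.0
    return {
        "periods": periods,
        "planned": planned,
        "realized": actual,
        "shortfall_pct": shortfall,
        "off_track": shortfall > tolerance,
    }
```

Run weekly or monthly per initiative, a report like this turns "the gap opened in month three" from a post-mortem finding into a live reallocation trigger.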
What this looks like in practice
The enterprises that get this right in 2026 will share a common pattern. A central view of every AI initiative across every business unit, scored and ranked against consistent criteria. A portfolio construction process that explicitly weighs risk against expected return. A continuous measurement layer that tracks realized value against the case, initiative by initiative, in real time. And a governance cadence that reallocates capital based on evidence rather than advocacy.
Nothing about this pattern slows AI down. If anything, it accelerates the AI that is working and clears the drag of the AI that is not. The enterprises that combine speed with governance will outpace the enterprises that pursue either alone.
The board-level question
Every board is asking some version of the same question in Q4 2025 planning. "How much are we spending on AI, and what is it producing?" In most enterprises, the honest answer is that nobody knows precisely, because no system captures the full picture.
That answer is no longer acceptable. Shareholders are pricing AI investment into enterprise valuations, and they will price in the absence of governance just as readily. Auditors are beginning to ask how AI spend is capitalized, measured, and risk-adjusted. Regulators, in sectors where AI touches risk or disclosure, are starting to ask for governance artifacts that most enterprises cannot produce.
The enterprises that can answer the board-level question with numbers, not narrative, will be the ones that can keep investing aggressively through the next economic cycle. The ones that cannot will face the classic pattern of aggressive spend followed by aggressive retrenchment.

Enterprise spending on agentic AI has moved from experimental to material in less than eighteen months. Every large enterprise is now running dozens or hundreds of AI initiatives, from workflow automation to custom agents to foundation model integrations. A typical Global 2000 company in Q4 2025 is spending eight to ten times what it spent on AI in Q4 2023.
The spend is real. The capabilities are real. The risks are real too, and they are concentrated in a place most enterprises are not looking. Governance.
This post is about why the governance gap on AI capital is wider than most leaders realize, and what a credible governance model for agentic AI investment looks like.
The AI capital explosion
By late 2025, three facts about enterprise AI spending are clear.
First, the spending is distributed. Unlike ERP or CRM rollouts, where a single large program absorbs most of the budget, AI spending is spread across dozens of initiatives in parallel. A typical enterprise has AI investments running in IT, in finance, in customer service, in operations, in legal, in HR, and in every business unit that could make a business case. No central authority sees all of them.
Second, the spending is fast. AI initiatives get scoped, funded, and deployed in weeks, not quarters. The approval cycles that worked for traditional transformation (annual planning, quarterly reviews, steering committees) cannot keep up with the velocity of AI procurement and deployment.
Third, the ROI is uncertain. A meaningful percentage of AI initiatives will pay back. A meaningful percentage will not. Nobody inside most enterprises has a reliable model for distinguishing the two in advance, and most organizations have not yet built the infrastructure to distinguish them in-flight either.
The combination is a governance problem that grows by the week.
Why this wave is different
Every previous enterprise technology wave (client server, internet, mobile, cloud) came with governance frictions that slowed adoption to a governable pace. Procurement cycles were long. Deployment cycles were long. Central IT functioned as a gatekeeper.
Agentic AI breaks all three.
Procurement is fast because AI tooling is often consumed as a service, bypassing traditional capital approval. Deployment is fast because agents are configurable by business users rather than engineered by central teams. Central IT is no longer a gatekeeper because AI is embedded in tools that business units buy and operate independently.
This is not an argument against AI. The speed and distribution are features, not bugs, and the enterprises that throttle AI adoption to fit legacy governance will fall behind. The argument is that the governance model itself has to change. Speed plus distribution plus uncertain ROI is a recipe for either runaway waste or runaway risk unless governance adapts.
What AI governance actually requires
Effective governance of agentic AI capital is not about slowing AI down. It is about deploying it in a way that produces compounding returns rather than compounding write-offs. Three capabilities define it.
Capability 1: Statistically score and rank AI ideas before they become initiatives
Most enterprises in 2025 are evaluating AI ideas the same way they evaluated general IT projects in 2015. A pitch deck. An executive sponsor. A rough ROI estimate. A green light. The result is a portfolio of AI initiatives that reflects enthusiasm and political capital more than strategic value.
The alternative is to apply structured scoring and ranking to every AI idea before it becomes a funded initiative. Each idea is evaluated against a consistent set of criteria: strategic alignment, expected value, required capabilities, data readiness, regulatory exposure, risk profile, and dependencies on other initiatives. The score is statistical, not subjective. The best ideas rise. The weakest ideas are filtered out before they consume budget or attention.
This is not a theoretical exercise. Enterprises that apply structured scoring to AI intake routinely find that a quarter to a third of their existing AI backlog fails basic viability tests. Cutting those initiatives early is often the fastest and largest AI ROI improvement available. The scoring model also produces something even more valuable over time: a statistical read on which initiative archetypes actually produce returns inside this enterprise, which in turn sharpens every future scoring cycle.
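A structured intake model of this kind fits in a few dozen lines. The sketch below is illustrative only: the criteria, the weights, and the 0.35 viability floor are assumptions a real enterprise would calibrate against its own outcome history, not a recommended parameterization.

```python
from dataclasses import dataclass

# Illustrative criteria weights, drawn from the intake dimensions
# discussed above; negative weights penalize exposure and risk.
WEIGHTS = {
    "strategic_alignment": 0.25,
    "expected_value": 0.30,
    "data_readiness": 0.20,
    "regulatory_exposure": -0.10,  # higher exposure lowers the score
    "risk": -0.15,                 # higher delivery risk lowers the score
}

VIABILITY_FLOOR = 0.35  # assumed cutoff; ideas below it never get funded


@dataclass
class Idea:
    name: str
    scores: dict  # each criterion rated 0.0-1.0 by the intake reviewers


def weighted_score(idea: Idea) -> float:
    """Collapse per-criterion ratings into one comparable number."""
    return sum(w * idea.scores.get(c, 0.0) for c, w in WEIGHTS.items())


def rank_ideas(ideas: list[Idea]) -> list[tuple[str, float]]:
    """Score every idea, drop those below the viability floor,
    and return the survivors ranked best-first."""
    scored = [(i.name, weighted_score(i)) for i in ideas]
    viable = [(n, s) for n, s in scored if s >= VIABILITY_FLOOR]
    return sorted(viable, key=lambda t: t[1], reverse=True)
```

The point is not the arithmetic; it is that every idea faces the same criteria before any budget moves, and the scoring history itself becomes the data that sharpens future cycles.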
Capability 2: Build a risk-return adjusted portfolio of AI initiatives
AI initiatives vary wildly in risk profile. A near-deterministic document automation using proven OCR and LLM extraction is not the same bet as a novel agentic workflow with no precedent inside the enterprise. Both have a place in an AI portfolio, but they do not belong on the same line of a spreadsheet as if their returns were equally probable.
The financial industry has known for six decades that expected return must be adjusted for risk. Enterprise AI portfolios must adopt the same discipline. Every initiative gets an explicit risk assumption, an explicit expected return, and an explicit position on the enterprise's efficient frontier. Initiatives that improve the portfolio's risk-adjusted return get priority. Initiatives that add uncompensated risk get cut or restructured.
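One simple proxy for this discipline is a Sharpe-style ratio: excess return per unit of outcome uncertainty. The sketch below uses an assumed 8% hurdle rate and treats each initiative independently; a full efficient-frontier treatment would also model correlations between initiatives, which this deliberately omits.

```python
from dataclasses import dataclass

HURDLE_RATE = 0.08  # illustrative cost-of-capital hurdle, not a benchmark


@dataclass
class Initiative:
    name: str
    expected_return: float  # projected return on spend, e.g. 0.30 = 30%
    volatility: float       # dispersion of plausible outcomes (std dev)


def risk_adjusted(init: Initiative) -> float:
    """Sharpe-style ratio: excess return per unit of outcome risk."""
    return (init.expected_return - HURDLE_RATE) / init.volatility


def prioritize(portfolio: list[Initiative]) -> list[tuple[str, float]]:
    """Rank initiatives best-first; a negative ratio marks a bet whose
    expected return does not compensate for the risk it adds."""
    return sorted(((i.name, risk_adjusted(i)) for i in portfolio),
                  key=lambda t: t[1], reverse=True)
```

On this measure, a proven document-automation play with modest upside can outrank a speculative agentic workflow with triple the expected return, which is exactly the conversation most AI steering committees are not yet having.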
The stakes
The AI governance gap will close one of two ways. It will close because enterprises build the infrastructure to govern AI capital with the discipline that its scale demands. Or it will close because a wave of write-offs forces a more abrupt and less voluntary reckoning.
The first path compounds competitive advantage. The enterprises that build AI governance into their capital operating system will deploy AI faster, smarter, and more efficiently than their peers. They will learn from every initiative and carry that learning forward.
The second path destroys capital and slows adoption across the industry. It will produce a cohort of enterprises that spent aggressively on AI, cannot account for what they got, and retreat from further investment for a cycle or two while the next generation of leaders takes over.
The choice is being made right now, quietly, in 2026 planning cycles. The enterprises that recognize agentic AI as a capital governance challenge, not just a technology challenge, will be the ones that win the decade.
At Lunation, we are building the infrastructure that makes AI capital governable at the speed and scale the technology now demands. If the governance gap is keeping your CFO awake in 2026, let's talk.