
Almost All AI Projects are Failing – CXO Forethought and Governance Can Turn the Trend Around

Updated: Jan 5


CIOs Are Shaping Innovation, Strategy and Security

AI enhancement of operations, services, and data products is the prime topic in corporate strategy sessions, yet the results are sobering. Studies from MIT and Stanford show that a clear majority of AI initiatives implemented over the last six to twelve months are failing to deliver value or are aimed at the wrong problem.


In August 2025, MIT found that as many as 95 percent of AI projects do not reach transformational adoption, and Gartner has estimated that nearly 40 percent of projects will be abandoned entirely. The problem is not simply technical.


It is a lack of foresight and governance amid irrational exuberance and FOMO around AI.

At the board level, directors already understand that the goal of corporate governance is to preserve and grow enterprise value. The same principles apply to AI adoption. If companies approach AI with the same discipline they apply to corporate oversight, the outcomes improve dramatically. The challenge is that too many organizations chase flashy pilots without understanding the problem they are solving, the pros and cons of the technology they are using relative to that problem, or the risks they are introducing.


Five Principles of AI Governance

Corporate governance rests on five broad pillars. These principles, when applied to AI judiciously, provide a framework for scaling projects that deliver results.


1. Goals or Business Objectives: Define goals and desired outcomes before envisioning AI-enabled services, products, and system enhancements. The first step is clarity. What business outcome do you want? Is the aim to improve productivity, to generate new revenue streams, or to deepen customer engagement? Too many projects fail because the goal is vague.


2. Structure: Build the oversight structure and get the right mix of expertise. That means technical leaders who understand AI and business leaders who can define the impact that matters. The structure may be centralized, decentralized, or hybrid, but it must exist. Otherwise, shadow AI will proliferate and oversight will collapse.


3. Strategy: Align objectives with strategy. Choosing the right use case is critical. Internal use cases should be those that deliver productivity gains or other transformational benefits; customer-facing applications, on the other hand, should be those that build trust and provide enough value for customers to pay the added cost. Pricing strategy and build-versus-buy decisions may differ for each use case. Selecting the right problem and applying the right strategy is the difference between progress and wasted investment.


4. Risk Management: Resistance to change, hallucinations and bias in generative models, privacy breaches, preexisting cybersecurity and IT infrastructure gaps, and cost overruns are all foreseeable. GenAI also carries unique risks: outputs are not deterministic, and they may need memory or context to be meaningful, which requires careful calibration. Using the wrong AI for the right problem will undermine adoption and value. Adversarial attacks can manipulate outputs. These risks can be addressed with expertise, transparency, red teaming, deliberate planning, and trust-by-design and secure-by-design frameworks across all layers (models, data, and infrastructure), but only if leadership prioritizes them.


5. Performance Oversight: Boards must demand continuous tracking of AI performance against defined metrics. If internal productivity is the goal, measure cost reductions or efficiency gains. If customer outcomes are the goal, measure engagement, revenue, and retention. Usage drop-off is an early warning that trust is eroding.


Why Trust Is the Core Issue

The thread that runs through all five principles is trust. Customers will not adopt AI outputs they cannot understand. Employees will resist tools they see as biased or threatening their jobs. Investors will discount claims that cannot be measured. Designing for transparency from the beginning, logging actions for accountability, and embedding cybersecurity practices into every layer are not optional. They are the price of entry.


The Competitive Advantage Window

AI adoption is no longer optional. Capgemini Research shows that 93 percent of executives believe companies risk losing competitive ground if they do not scale AI within the next twelve months, and according to Gartner, one third of all software will come with embedded AI by 2028. The companies that succeed will not be those with the most ambitious pilots, but those that scale trusted systems in a disciplined manner.

The next 12 to 24 months will determine who pulls ahead. Boards that treat AI governance as seriously as they treat financial oversight will create durable value. Those that do not will continue to watch as projects fail, costs mount, and stakeholders’ trust evaporates.


The Board’s Role

For executives, directors, and those tasked with innovation for growth, the question is no longer whether to invest in AI. The question is how to preserve and grow value through disciplined governance. Boards should insist that management teams articulate goals, define governance structures, assess risks across models, data, and infrastructure, and measure outcomes continuously.

Governance is not an abstract idea. It is the difference between 95 percent failure and meaningful advantage. The boardroom has a responsibility to ensure that AI adoption follows the same principles that have guided corporate stewardship for decades. Solving the right problems the right way is not only good practice, it is the path to survival in the AI economy.



Links: MIT study (August 2025), Stanford study (June 2025), Gartner study (June 2025), and Capgemini research (2025)


The thoughts in this article were first presented at a conference in August 2025. 




Want to learn more about AI Governance? Email info@invigrid.com
