Companies are investing billions in AI, but nearly all enterprise pilots are stalled at the starting line. According to "The GenAI Divide: State of AI in Business 2025," a new report published by MIT's NANDA initiative, generative AI presents significant potential for businesses. However, most initiatives aimed at driving rapid revenue growth are not delivering the expected results.
A new MIT study has been making headlines, and for good reason. It reveals that 95% of enterprise generative AI pilots never make it past the pilot phase, leaving companies with sunk costs and little measurable impact. The finding is especially striking given the scale of investment: enterprises have poured an estimated $30–40 billion into generative AI, yet only about 5% of projects manage to create real value.
The research, based on 150 executive interviews, a survey of 350 employees, and an analysis of 300 public AI deployments, shows a sharp divide between the hype and the reality. On one side are the few companies capturing transformative results, saving millions annually through targeted applications in customer service, creative production, or risk management. On the other side, the vast majority remain stuck, unable to move beyond experiments or low-value use cases.
So why is enterprise AI stalling? According to MIT, the issue isn’t infrastructure, budget, or even talent. The real barrier is learning. Most AI deployments fail because they cannot adapt: systems don’t retain feedback, don’t contextualize, and don’t improve over time. Combined with a growing wave of “shadow AI,” where employees bypass corporate tools in favor of consumer apps like ChatGPT or Claude, it paints a picture of organizations struggling to keep pace with the technology they’ve invested in.
This divide (what MIT calls the GenAI Divide) is quickly becoming one of the defining challenges for enterprises in 2025. The question now is what separates the winners from the rest, and how companies can bridge the gap before AI investments become just another sunk cost.
The top barriers reflect the fundamental learning gap that defines the GenAI Divide: users resist tools that don't adapt, model quality fails without proper context, and user experience suffers when systems can't remember organizational knowledge. Even avid ChatGPT users distrust internal GenAI tools that don't match their expectations.
To understand why so few GenAI pilots progress beyond the experimental phase, researchers surveyed both executive sponsors and frontline users across 52 organizations. Participants were asked to rate common barriers to scale on a 1–10 frequency scale, where 10 represented the most frequently encountered obstacles.
The results revealed several critical insights about why GenAI pilots fail:
1. Unwillingness to Adopt New Tools (Rating: 9/10): The top barrier reflects fundamental user resistance, but this resistance isn't arbitrary. Employees who regularly use ChatGPT for personal tasks often reject enterprise AI tools that feel clunky and limited by comparison. The same professionals using AI daily in their personal workflows demand similar flexibility and responsiveness from enterprise systems. As MIT notes,
“Users appreciate the flexibility and responsiveness of consumer LLM interfaces but require the persistence and contextual awareness that current tools cannot provide.”
From a Dawiso perspective, this lack of contextual awareness often ties back to the data foundation itself. When enterprise AI tools aren’t connected to properly governed, high-quality internal data, they cannot provide the relevance and reliability users expect, making rejection almost inevitable.
2. Model Output Quality Concerns (Rating: 7.5/10): This barrier proved more significant than anticipated and directly relates to data quality issues. Ironically, the same users who integrate ChatGPT and similar tools into personal workflows describe them as unreliable when encountered within enterprise systems. This paradox illustrates the GenAI Divide at the user level: consumer tools win on usability, but enterprise tools fail to meet quality expectations.
3. Poor User Experience (Rating: 7.0/10): Enterprise AI tools consistently underperform compared to consumer alternatives in terms of user experience. A corporate lawyer exemplified this dynamic: her organization invested $50,000 in a specialized contract analysis tool, yet she consistently defaulted to ChatGPT for drafting work because "our purchased AI tool provided rigid summaries with limited customization options."
The problem may arise when organizations require AI systems to produce extremely precise outputs, which can limit the model’s flexibility and creativity.
4. Lack of Executive Sponsorship (Rating: 6.5/10): Without clear leadership support and strategic direction, GenAI pilots often become scattered experiments that lack focus and resources to achieve meaningful impact.
5. Challenging Change Management (Rating: 6.5/10): Organizations struggle to integrate AI tools into existing workflows and processes, creating friction that prevents successful adoption and scaling.
The barriers above reflect deeper data-related problems that prevent GenAI pilots from succeeding:
Memory and Context Loss: Current GenAI systems lack persistent memory, requiring users to provide full context for every interaction. As one executive noted: "It doesn't retain knowledge of client preferences or learn from previous edits. It repeats the same mistakes and requires extensive context input for each session."
Inability to Learn from Feedback: Enterprise users consistently report that AI systems fail to improve over time. Unlike human assistants who learn organizational preferences and workflows, current GenAI tools remain static, making the same errors repeatedly.
Lack of Workflow Integration: Many GenAI pilots fail because they exist as standalone tools rather than integrated solutions that understand and adapt to specific organizational contexts and data patterns.
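To make the memory and feedback gaps described above concrete, here is a minimal sketch of what persistent, feedback-aware memory could look like. The class, file format, and field names are hypothetical illustrations, not any vendor's actual implementation.

```python
import json
from pathlib import Path

class SessionMemory:
    """Tiny persistent store: preferences and past corrections survive across sessions."""

    def __init__(self, path: str = "assistant_memory.json"):
        self.path = Path(path)
        if self.path.exists():
            self.state = json.loads(self.path.read_text())
        else:
            self.state = {"preferences": {}, "corrections": []}

    def remember_preference(self, key: str, value: str) -> None:
        self.state["preferences"][key] = value
        self._save()

    def record_correction(self, note: str) -> None:
        self.state["corrections"].append(note)
        self._save()

    def as_prompt_context(self) -> str:
        """Context block prepended to every new request, so users stop re-explaining themselves."""
        prefs = ", ".join(f"{k}: {v}" for k, v in self.state["preferences"].items())
        fixes = "; ".join(self.state["corrections"][-5:])  # only the most recent feedback
        return (f"Known preferences: {prefs or 'none yet'}. "
                f"Do not repeat these past mistakes: {fixes or 'none yet'}.")

    def _save(self) -> None:
        self.path.write_text(json.dumps(self.state, indent=2))

# Usage: feedback given today still shapes tomorrow's prompts.
memory = SessionMemory()
memory.remember_preference("tone", "formal, no bullet points")
memory.record_correction("Client ACME prefers fixed-fee clauses over hourly rates.")
prompt = memory.as_prompt_context() + "\n\nDraft the engagement letter for ACME."
```

In an enterprise setting, such memory would of course need governed storage and access controls around it, which is exactly where the data foundation discussed later comes in.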
At Dawiso, our own experience points to another recurring cause:
Many GenAI pilots fail because they're built on ungoverned data sources. A typical example involves companies connecting AI to SharePoint repositories containing ten versions of the same document. The AI randomly selects which version to use, leading to confused and inconsistent responses. Without proper document governance and version control, even the most sophisticated AI models produce unreliable outputs.
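From that experience, here is a minimal sketch of the kind of filtering a governed pipeline performs before any document reaches the AI. The Document fields and the sample data are hypothetical, and the snippet illustrates the principle rather than Dawiso's implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    name: str        # logical document identity, e.g. "travel-policy"
    version: int
    approved: bool   # has this version passed the governance workflow?
    modified: date
    text: str

def select_authoritative(docs: list[Document]) -> list[Document]:
    """Keep only the newest approved version of each logical document.

    Without this step, an assistant indexing the raw repository may retrieve
    any of the coexisting versions and answer inconsistently.
    """
    latest: dict[str, Document] = {}
    for doc in docs:
        if not doc.approved:
            continue  # drafts and unreviewed copies never reach the index
        current = latest.get(doc.name)
        if current is None or doc.version > current.version:
            latest[doc.name] = doc
    return list(latest.values())

docs = [
    Document("travel-policy", 2, True,  date(2023, 5, 1),  "v2 text ..."),
    Document("travel-policy", 3, True,  date(2024, 1, 10), "v3 text ..."),
    Document("travel-policy", 4, False, date(2024, 6, 2),  "unapproved draft ..."),
]
corpus_for_ai = select_authoritative(docs)  # only the approved v3 is indexed
```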
The research revealed a striking contradiction in user preferences. Professionals expressing skepticism about enterprise AI tools were often heavy users of consumer LLM interfaces. When asked to compare their experiences, these professionals pointed to three consistent themes:
According to the research data, users prefer generic tools like ChatGPT over enterprise solutions for several reasons:
This preference reveals a fundamental tension. The same professionals using ChatGPT daily for personal tasks demand learning and memory capabilities for enterprise work. A significant number of workers already use AI tools privately, reporting productivity gains, while their companies' formal AI initiatives stall.
Despite users' preference for consumer LLM interfaces, researchers investigated what prevents their broader adoption for mission-critical work. The barriers here proved distinct from general usability concerns and directly illuminated the learning gap that defines the GenAI Divide.
Key limitations preventing enterprise adoption include:
When enterprise users were asked to rate different options for high-stakes work, the preference hierarchy became clear through a telling question: "Would you assign this task to AI or a junior colleague?"
Task Complexity Determines AI Acceptance
The results reveal that AI has already won the war for simple work:
However, for anything complex or long-term, humans dominate by overwhelming margins:
The dividing line isn't intelligence; it's memory, adaptability, and learning capability, the exact characteristics that separate the two sides of the GenAI Divide.
The most successful vendors understand that crossing the GenAI Divide requires building what executives repeatedly emphasized: AI systems that do not just generate content but learn and improve within their environment.
When evaluating AI tools, buyers consistently emphasized specific priorities derived from interviews and coded by category:
These priorities highlight that successful GenAI implementations require more than powerful models: they need systems that understand organizational context, maintain data security, and continuously adapt to specific business needs.
The window for crossing the GenAI Divide is rapidly closing. Enterprises are increasingly demanding systems that adapt over time. Microsoft 365 Copilot and Dynamics 365 are incorporating persistent memory and feedback loops. OpenAI's ChatGPT memory beta signals similar expectations in general-purpose tools.
Organizations investing in AI systems that learn from their data, workflows, and feedback are creating switching costs that compound monthly. As one CIO from a $5B financial services firm explained: "We're currently evaluating five different GenAI solutions, but whichever system best learns and adapts to our specific processes will ultimately win our business. Once we've invested time in training a system to understand our workflows, the switching costs become prohibitive."
The MIT report shows that most GenAI pilots stall because AI systems lack the ability to remember, adapt, and provide context. Out of the box, AI doesn’t understand business processes. But in practice, the absence of context often comes down to the quality and governance of the data being used. If the inputs are messy, outdated, or duplicative, the system can’t learn effectively.
By governing both structured and unstructured data, Dawiso ensures that AI systems work only with relevant, approved, and high-quality information, turning context from a weakness into a strength.
Dawiso acts as a map of your data for credible AI. With Dawiso, AI gains the ability to understand internal definitions, locate the right data sources, and work within the business rules that matter to your organization. Instead of generating generic answers, your AI can deliver relevant, accurate outputs and embed itself into existing workflows.
Dawiso tackles the fundamental issue behind most failed GenAI pilots: the absence of the right context at the right time. The platform helps organizations map their data sources so that AI systems draw only on relevant, approved content.
The MIT research identified that successful GenAI implementations require systems that learn and adapt using organizational context. Dawiso enables this by capturing and governing the institutional knowledge that currently exists only in employees' minds.
A) Intuitive Knowledge Capture: Dawiso recognizes that much organizational knowledge about processes, metrics calculations, and business context exists only in employees' heads. The platform's intuitive interface encourages users to document and share this knowledge:
B) Business Context Integration: Dawiso automatically links documents to business glossaries, KPIs, and workflows, providing AI systems with the rich context they need:
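As an illustration of what that linking can mean for an AI prompt, here is a minimal sketch. The glossary entries, KPI definitions, and lookup function below are hypothetical stand-ins, not Dawiso's actual API.

```python
# Hypothetical stand-ins for a governed business glossary and KPI catalog.
GLOSSARY = {
    "active customer": "A customer with at least one paid transaction in the last 90 days.",
    "churn": "Share of customers active at period start with no transaction during the period.",
}
KPIS = {
    "monthly churn rate": "churned_customers / active_customers_at_month_start, reported monthly.",
}

def build_governed_context(question: str) -> str:
    """Attach the approved definitions of any governed term mentioned in the question."""
    matches = [
        f"- {term}: {definition}"
        for term, definition in {**GLOSSARY, **KPIS}.items()
        if term in question.lower()
    ]
    if not matches:
        return ""
    return "Approved business definitions:\n" + "\n".join(matches)

question = "How should we calculate the monthly churn rate for active customers?"
prompt = build_governed_context(question) + "\n\nQuestion: " + question
# The model now answers using the organization's own definitions instead of guessing.
```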
Dawiso's approach directly addresses the core barriers identified in the MIT research by creating a governed data foundation.
A) Model Context Protocol (MCP) Integration: All metadata governed within Dawiso is prepared through the Model Context Protocol, ensuring unstructured data is not only controlled but ready to be used safely and effectively by AI agents. This addresses the enterprise demand for AI systems that understand organizational context and improve over time.
B) Scalable Access Control: Dawiso's governance framework ensures AI systems respect organizational boundaries and access controls:
C) Continuous Improvement Through Governed Feedback: Unlike static enterprise AI tools, Dawiso creates the infrastructure for AI systems to improve over time:
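To make points A) through C) more tangible, here is a minimal sketch of a Model Context Protocol server built with the open-source MCP Python SDK. The catalog entries, role entitlements, and feedback log are hypothetical placeholders, not Dawiso's actual integration.

```python
# Sketch of an MCP server exposing governed metadata to AI agents.
# Requires the official Model Context Protocol Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("governed-catalog")  # server name is arbitrary

# Hypothetical governed catalog and role entitlements (stand-ins for a real metadata platform).
CATALOG = {
    "monthly churn rate": {"definition": "churned / active at month start", "domain": "finance"},
    "employee salary band": {"definition": "banded compensation ranges", "domain": "hr"},
}
ENTITLEMENTS = {"analyst-finance": {"finance"}, "hr-partner": {"hr"}}
FEEDBACK_LOG: list[dict] = []

@mcp.tool()
def lookup_term(term: str, role: str) -> str:
    """Return the approved definition of a term, if the caller's role may see its domain."""
    entry = CATALOG.get(term.lower())
    if entry is None:
        return "No governed definition found."
    if entry["domain"] not in ENTITLEMENTS.get(role, set()):
        return "Access denied: this term belongs to a domain outside your entitlements."
    return entry["definition"]

@mcp.tool()
def record_feedback(term: str, comment: str) -> str:
    """Capture user feedback so data stewards can refine definitions over time."""
    FEEDBACK_LOG.append({"term": term, "comment": comment})
    return "Feedback recorded for steward review."

if __name__ == "__main__":
    mcp.run()  # serves the tools over MCP so compatible AI agents can call them
```

Any MCP-compatible agent could then call lookup_term and record_feedback, so access checks and feedback capture happen on the governed side rather than inside the model.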
Dawiso's approach has been proven in large-scale implementations. Our work with clients has confirmed that proper cataloging of assets enables successful GenAI rollouts across complex organizations, where internal documentation, process descriptions, and procedures need consistent governance across multiple systems and teams.
By addressing the fundamental data challenges identified in the MIT research, Dawiso enables organizations to cross the GenAI Divide, moving from experimental pilots to production-ready AI systems that deliver measurable business value through a proper data foundation and continuous organizational learning.
The MIT research makes clear that the GenAI Divide is not permanent, but crossing it requires fundamentally different choices about data governance, technology partnerships, and organizational design. The 95% failure rate of GenAI pilots isn't due to inadequate AI models or insufficient computing power; it's primarily caused by data and metadata challenges that prevent AI systems from accessing, understanding, and learning from organizational knowledge.
For organizations currently trapped on the wrong side of the GenAI Divide, the path forward involves more than just choosing better AI models. Success requires:
The key to GenAI success lies in addressing the underlying data documentation issues that cause pilot failures: lack of organized institutional memory, absence of contextual metadata, and inability to maintain data quality standards that AI systems require. As the window for strategic AI investment narrows, organizations must prioritize solutions that create proper data foundations rather than simply adding more powerful models to existing ungoverned data chaos.
Success in enterprise AI requires moving beyond the experimental mindset toward systems that can access, understand, and learn from properly governed organizational knowledge: the characteristics that separate winning organizations from those stuck in perpetual pilot mode. The GenAI Divide isn't about technology capability; it's about organizational readiness to support AI systems with the structured, contextual information they need to succeed.