Data and AI have become the central nervous system of modern business, constantly sensing, processing, and guiding decision-making. But when that system is miswired, decisions become misguided. At the 2025 Gartner Data & Analytics Summit, one message was clear: the system needs more than speed and intelligence. It needs trust, transparency, and alignment with human values.
“The future of AI is human-first.”
From trust gaps in AI to the promise of small language models and the unsung power of explainability, this year’s summit was a wake-up call to stop chasing novelty and start building durable, responsible data foundations.
Let’s unpack the most urgent signals from this year’s event and how organizations can respond.
Metadata is the foundation of everything else, AI included.
While many organizations may feel overwhelmed or even fatigued by the endless AI buzz, the reality is clear: AI is still the dominant topic, and everyone is racing to make their data “AI-ready.”
But in that rush, one critical truth gets overlooked: without strong data foundations, no AI strategy will succeed. Gartner’s latest research highlights exactly this contradiction: organizations put AI-readiness at the top of their investment agendas, yet foundational capabilities such as active metadata tools, data fabric architecture, and data mesh operating principles remain significantly underfunded.
If you ignore these four elements, your AI strategy risks becoming a house of cards:
Active metadata tools and practices
Without visibility into what data exists, how it’s connected, and who owns it, AI becomes a black box.
Upgrading to data fabric architecture
A modern AI stack requires seamless integration and real-time access across platforms.
Applying data mesh principles
Teams must operate data as a product, with clear ownership and decentralized responsibility (see the sketch after this list).
Modernizing infrastructure via lakehouse architecture
Efficient, AI-ready pipelines depend on flexible, scalable, and analytics-optimized storage layers.
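To make "data as a product" less abstract, here is a minimal sketch of the kind of active metadata record a data product might carry. The field names (owner, upstream_sources, quality_checks, and so on) are illustrative assumptions for this example, not a Gartner-prescribed standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataProduct:
    """Illustrative metadata record for one data product in a mesh-style setup."""
    name: str                                                   # what the data is
    owner: str                                                  # who is accountable for it
    domain: str                                                 # which business domain publishes it
    upstream_sources: List[str] = field(default_factory=list)   # how it is connected
    quality_checks: List[str] = field(default_factory=list)     # freshness, completeness, ...
    consumers: List[str] = field(default_factory=list)          # who depends on it

# Example: the kind of visibility an AI team needs before training on this data.
customer_churn = DataProduct(
    name="customer_churn_monthly",
    owner="analytics-team@company.example",
    domain="Customer Success",
    upstream_sources=["crm.accounts", "billing.invoices"],
    quality_checks=["row_count > 0", "no null customer_id"],
    consumers=["churn-prediction-model", "executive-dashboard"],
)
```

Even a record this small answers the questions AI teams keep asking: what data exists, where it comes from, who owns it, and who depends on it.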
Investing in AI without investing in metadata is like trying to build a smart city on sand. You can experiment, but you can’t scale or trust what you build.
According to Gartner's 2023 AI in Organizations Survey, only half of AI prototypes ever make it into production. Why? The biggest barriers aren't technical; they're cultural and ethical. The top concern AI leaders cited was trust in AI models, followed closely by fears about ethics, fairness, and bias.
Gartner emphasized that trust is not a feature; it’s an outcome. An outcome of explainability, transparency, and accountability baked into every layer of the data and AI pipeline.
Explainability is the difference between blind automation and informed action. The summit introduced a five-point framework for enhancing explainability, including aligning with business needs, balancing accuracy with interpretability, and ensuring credibility in generative AI.
Rather than relying on a single method, Gartner recommended a "composable" approach to explainability, tailored to model type, data context, and target audience, from technical users to regulators.
💡 Insight: Explainability doesn’t have to be perfect; it has to be useful, validated, and maintained over time.
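Gartner didn't prescribe specific tooling, but to show what one composable building block can look like, here is a small sketch using scikit-learn's permutation importance to produce a global, plain-language ranking of the features a model actually relies on. The model and dataset are stand-ins chosen purely for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a model on a public dataset as a stand-in for a real business model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# One composable explainability block: global feature importance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the features ranked by how much the model relies on them.
ranked = sorted(
    zip(data.feature_names, result.importances_mean), key=lambda p: p[1], reverse=True
)
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.3f}")
```

For a different audience or model type, this block could be swapped for local, per-prediction explanations without touching the rest of the pipeline, which is the point of the composable approach.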
One standout prediction: by 2027, organizations that prioritize AI literacy among executives will see a 20% increase in financial performance. In lower-maturity organizations, Gartner found a staggering 43% performance gap between data-literate and data-illiterate teams.
AI literacy doesn’t mean turning every leader into a data scientist. It means equipping them to ask the right questions, challenge outputs, and understand the guardrails. In other words, it's about fluency.
Amid the noise around massive generative models, a quieter trend is gaining traction: small language models (SLMs) that are leaner, cheaper, and often more efficient for domain-specific tasks. SLMs are easier to govern, faster to fine-tune, and more explainable, making them ideal for enterprises that value control and clarity.
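To illustrate why SLMs appeal to teams that value control, here is a sketch that runs a compact, openly available model locally with the Hugging Face transformers library. The checkpoint is a commonly used small model picked for illustration, not a recommendation from the summit; a real deployment would fine-tune something similar on the organization's own domain data:

```python
from transformers import pipeline

# Load a small, openly available sentiment model (~67M parameters) that runs on CPU.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Domain-specific task: triage incoming support tickets by tone.
tickets = [
    "The export to CSV has been broken for three days and nobody responds.",
    "Thanks for the quick fix, the new dashboard works perfectly.",
]
for ticket, result in zip(tickets, classifier(tickets)):
    print(f"{result['label']:>8}  {ticket}")
```

Because a model this size runs on commodity hardware, it is easier to audit, fine-tune, and explain than a massive hosted model, which is exactly the control-and-clarity trade-off the summit highlighted.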
While AI stole the spotlight, data catalogs remained a critical enabler, especially in the context of explainability and compliance. Gartner reminded leaders that successful catalogs must go beyond technical metadata. They must bridge the gap between business and IT, offer clear ownership models, and support federated governance.
The guidance? Don't rush into tooling. Define your use cases, roles, and success metrics first. Only then will the catalog become a true foundation (not another silo).
At Dawiso, we're not just watching this evolution. We're part of it, helping to shape it.
Our platform is purpose-built to address the challenges Gartner spotlighted.
In a world where AI is accelerating, Dawiso helps you slow down where it matters: to document, to explain, and to govern with confidence.
Keep reading and take a deeper dive into our most recent content on metadata management and beyond: