AI-driven data: preparing for a zero-trust future

A Gartner report finds that, as AI-generated data proliferates, organisations will shift to a zero-trust model by 2028 to safeguard business outcomes.

Gartner forecasts that by 2028, a significant portion of organisations will adopt a zero-trust approach to data governance. This shift arises from the increasing presence of AI-generated data, which complicates verification processes.

"Organisations can no longer implicitly trust data or assume it was human generated," stated Wan Fui Chan, Managing VP at Gartner. As AI-generated content becomes more prevalent and indistinguishable from human-created data, implementing authentication and verification measures is crucial to protecting business and financial outcomes.

Large language models (LLMs), often trained on diverse sources including books and research papers, are increasingly at risk of ingesting recycled AI-generated content. Gartner's 2026 CIO and Technology Executive Survey found that 84% of respondents plan to boost GenAI funding.

This surge in both AI adoption and investment means models will increasingly be trained on prior AI outputs. The consequence could be model collapse, in which AI results no longer reflect reality.

"As AI-generated content becomes more prevalent, regulatory requirements for verifying ‘AI-free’ data are expected to intensify in certain regions, cited Chan, emphasising the variances in global regulatory standards. Identifying and tagging AI-generated data will be critical.

Success in this regulatory landscape will depend on the availability of tools and on workforce expertise in information management and metadata solutions, which support data cataloguing and will differentiate proactive organisations.

Proactive management practices, such as active metadata management, provide advantages: they allow organisations to swiftly analyse and automate decisions across their datasets. Gartner recommends four actions:

  1. Appoint an AI Governance Leader: Assign a role specifically for AI governance to ensure robust zero-trust policies. This leader should coordinate closely with data and analytics teams.
  2. Foster Cross-Functional Collaboration: Form teams that span cybersecurity and data functions to assess AI data risks and strengthen policies accordingly.
  3. Leverage Existing Governance: Extend current governance frameworks, tuning security and metadata management to incorporate AI-related policies.
  4. Adopt Active Metadata Practices: Utilise real-time alerts to identify and correct data inaccuracies or biases, safeguarding critical systems; see the sketch after this list.
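As an illustration of the fourth point, and of active metadata management generally, the minimal sketch below checks a dataset's metadata against simple policy thresholds and raises alerts on breaches. The `DatasetMetadata` fields and the threshold values are hypothetical assumptions; a real deployment would read from a data catalogue and route alerts into an existing monitoring pipeline.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("active-metadata")

# Hypothetical snapshot of a dataset's metadata; in practice this would come
# from a data catalogue or metadata platform.
@dataclass
class DatasetMetadata:
    name: str
    total_rows: int
    ai_generated_rows: int
    days_since_validation: int

# Illustrative policy thresholds; real values would be set by governance teams.
MAX_AI_SHARE = 0.30
MAX_VALIDATION_AGE_DAYS = 7

def check_dataset(meta: DatasetMetadata) -> list[str]:
    """Return alert messages for any policy breaches on this dataset."""
    alerts = []
    ai_share = meta.ai_generated_rows / max(meta.total_rows, 1)
    if ai_share > MAX_AI_SHARE:
        alerts.append(f"{meta.name}: AI-generated share {ai_share:.0%} exceeds {MAX_AI_SHARE:.0%}")
    if meta.days_since_validation > MAX_VALIDATION_AGE_DAYS:
        alerts.append(f"{meta.name}: last validated {meta.days_since_validation} days ago")
    for msg in alerts:
        log.warning(msg)  # in production, this would feed an alerting pipeline
    return alerts

# Usage: a dataset with a high AI-generated share and a stale validation date.
check_dataset(DatasetMetadata("marketing_leads", total_rows=10_000,
                              ai_generated_rows=4_200, days_since_validation=12))
```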

Such actions will be significant in combating the risks of unchecked AI-generated data, preserving organisational integrity in a rapidly evolving digital landscape.
