AI governance lags behind rapid adoption: a call for responsible deployment

A new report from OpenText highlights gaps in security and governance as enterprises rapidly adopt AI technologies without necessary risk management strategies.

In a newly released global report, OpenText has highlighted concerns surrounding the rapid deployment of AI technologies, particularly Generative AI (GenAI) and Agentic AI. Conducted in collaboration with the Ponemon Institute, the study identifies gaps in security and governance practices across industries. While over half of participating enterprises (52%) have integrated GenAI solutions, many lack the foundational elements needed to manage associated risks.

The report notes that AI maturity extends beyond adoption and requires the integration of security and governance frameworks. Currently, only 20% of enterprises have achieved full AI maturity, defined as the deployment of AI in cybersecurity activities alongside risk assessments. Additionally, 43% of businesses have adopted a risk-based AI management strategy.

The findings also highlight gaps between AI deployment and the governance practices intended to support trust and compliance. According to the report, 79% of organisations have not yet reached full AI maturity in cybersecurity, and only 41% have implemented AI-specific data privacy policies.

The survey outlines several challenges reported by enterprises:
  • A majority (62%) report difficulties in mitigating model and bias risks, including ethical concerns during language model development.
  • 58% of respondents report challenges in minimising prompt or input risks, such as misleading responses.
  • 56% of participants report challenges in managing user risks, including the potential spread of misinformation.

The report also notes that trust and reliability remain areas of concern, with potential impacts on security effectiveness due to governance gaps. In terms of effectiveness:
  1. 51% of respondents say AI is effective in reducing the time to detect anomalies.
  2. 48% rate AI as effective in threat detection and hunting, with limitations linked to model biases and decision rule errors.
  3. Confidence in AI’s ability to operate autonomously remains limited, with only 47% stating their models can independently make sound decisions. The report indicates that human oversight continues to be required because threats adapt faster than models can be retrained.

To realise business value from AI, organisations are encouraged to build in transparency and control from the outset. The report identifies secure information management systems, governance frameworks, and continuous monitoring as key foundations. According to OpenText, aligning AI with existing data and security practices is a way to support both innovation and the operational use of AI.