In a newly released global report, OpenText has highlighted concerns surrounding the rapid deployment of AI technologies, particularly Generative AI (GenAI) and Agentic AI. Conducted in collaboration with the Ponemon Institute, the study identifies gaps in security and governance practices across industries. While over half of participating enterprises (52%) have integrated GenAI solutions, many lack the foundational elements needed to manage associated risks.
The report notes that AI maturity extends beyond adoption and requires the integration of security and governance frameworks. Currently, only 20% of enterprises have achieved full AI maturity, defined as the deployment of AI in cybersecurity activities alongside risk assessments. Additionally, 43% of businesses have adopted a risk-based AI management strategy.
The findings also highlight gaps between AI deployment and governance practices, which are intended to support trust and compliance. According to the report, 79% of organisations have not yet reached full AI maturity in cybersecurity, and only 41% have implemented AI-specific data privacy policies.
The survey outlines several challenges reported by enterprises:
- A majority (62%) report difficulties in mitigating model and bias risks, including ethical concerns during language model development.
- 58% of respondents report challenges in minimising prompt or input risks, such as misleading responses.
- 56% of participants report challenges in managing user risks, including the potential spread of misinformation.
The report also notes that trust and reliability remain areas of concern, with governance gaps potentially undermining security effectiveness. In terms of effectiveness:
- 51% of respondents say AI is effective in reducing the time to detect anomalies.
- 48% rate AI as effective in threat detection and hunting, with limitations linked to model biases and decision rule errors.
- Confidence in AI’s ability to operate autonomously remains limited, with only 47% stating that their models can independently make sound decisions. The report indicates that human oversight continues to be required because of how quickly threats adapt.
To realise business value from AI, organisations are encouraged to integrate transparency and control from the outset. The development of secure information management systems, governance frameworks, and continuous monitoring is identified as important. Aligning AI with data and security practices is presented as a way to enable both innovation and operational use of AI, according to OpenText.