CISOs confident about data privacy and security risks of generative AI

Over half of CISOs believe generative AI is a force for good and a security enabler, whereas only 25% think it presents a risk to their organisational security.

New data from the latest members’ survey of the ClubCISO community, in collaboration with Telstra Purple, highlights CISOs’ confidence in generative AI in their organisations. Around half of those surveyed (51%), the largest contingent, believe these tools are a force for good and act as security enablers. In comparison, only 25% saw generative AI tools as a risk to their organisational security.

The study's findings underscore CISOs' proactive stance in understanding the risks linked to generative AI tools and their active support for implementing these tools across their organisations.

45% of respondents said they now allow generative AI tools for specific applications, with the CISO office making the final decision on their use. Just under a quarter (23%) also have region-specific or function-specific rules to govern generative AI use. The findings represent a marked change from when generative AI applications first landed following the launch of ChatGPT, when data privacy and security concerns were top-of-mind risks for organisations.

Despite ongoing concerns around the data privacy of specific applications, 54% of CISOs are confident they know how AI tools will use or share the data fed to them, and 41% have a policy to cover AI and its usage. In contrast, only a minority (9%) of CISOs say they do not have a policy governing the use of AI tools and have not set out a direction either way.

Inspiring further confidence, 57% of CISOs also believe that their staff are aware and mindful of the data protection and intellectual property implications of using AI tools.

Commenting on the findings, Rob Robinson, Head of Telstra Purple EMEA, sponsor of the ClubCISO community, said, “While we do still hear examples of proprietary data being fed to AI tools and then that same data being resurfaced outside of an organisation’s boundaries, what our members are telling us is that this is a known risk, not just in their teams, but across the employee population too.”

He continued, “Generative AI is rightly being seen for the opportunity it will unlock for organisations. Its disruptive force is being unleashed across sectors and functions, and rather than slowing the pace of adoption, our survey highlights that CISOs have taken the time to understand and educate their organisations about the risks associated with using such tools. It marks a break away from the traditional views of security acting as a blocker for innovation.”