BlackFog has released research examining the use of unauthorised AI tools in the workplace, often referred to as "Shadow AI".
The study, which surveyed 2,000 individuals, found that 86% of respondents use AI tools for work tasks at least weekly. Around 34% reported using free versions of company-approved AI tools, raising questions about how corporate data is stored and managed.
Of those using AI tools not sanctioned by their employer, 58% rely on versions that may lack enterprise-grade security and data management features. The research also suggests a general acceptance of risk, with 63% indicating it is acceptable to use such tools without IT oversight if no company-approved option is available.
A "speed over security" mindset is evident, with 60% of respondents willing to accept potential security risks to improve efficiency or meet deadlines. Additionally, 21% believe their employer would overlook the use of unapproved AI tools provided work is completed on time.
Key findings include:
- Leadership and risk: 69% of senior executives prioritise speed over security, compared with lower proportions among junior staff.
- Data vulnerability: Around one-third of employees report sharing sensitive information, including corporate and financial data, through unauthorised AI tools.
- Integration risks: Approximately 51% of respondents connect AI tools to other systems without IT approval, increasing potential vulnerabilities.
The research indicates a growing need for IT oversight and management of AI tool usage. As AI becomes more integrated into workplace processes, organisations may need to address these risks to maintain data security and compliance.