Immersive Labs: Secrets Leaked by Chatbots

 


Immersive Labs, a global leader in people-centric cyber resilience, has released its “Dark Side of GenAI” report, highlighting a security risk known as prompt injection attacks. In these attacks, individuals input specific instructions to trick chatbots into revealing sensitive information, posing a significant risk of data leaks for organizations. Analysis of Immersive Labs’ prompt injection challenge revealed that GenAI bots are highly vulnerable to manipulation, even by those with minimal technical skills. Alarmingly, 88% of challenge participants successfully tricked the GenAI bot at least once, and 17% succeeded across all challenge levels, underscoring the threat to organizations using these bots.

The report emphasizes the need for public and private-sector cooperation and robust corporate policies to mitigate the security risks associated with widespread GenAI bot adoption. Organizational leaders must be aware of prompt injection risks and take decisive measures, including the implementation of comprehensive policies for GenAI use. Kev Breen, Senior Director of Threat Intelligence at Immersive Labs and co-author of the report, advocates for a ‘defense in depth’ approach to GenAI security. This includes implementing data loss prevention checks, strict input validation, and context-aware filtering to prevent and detect attempts to manipulate GenAI output.

Comments

Popular Posts