Summary

  • Google’s new open-source AI coding tool, Gemini CLI, designed to help programmers write code in a terminal window, was found to be vulnerable only two days after its launch.
  • Security company Tracebit found it was possible to override the tool’s built-in security controls, allowing it to execute harmful commands.
  • This was possible due to a prompt injection attack, where a README.md file included in the package uploaded by the threat actor included sentences written in a natural language that the AI tool would interpret as instructions.
  • This is a growing threat according to Leapwork’s 2022 State of Automation report, which revealed that 51% of AI chatbots are vulnerable to dangerous input or prompt injection attacks.
  • Google has not commented on the findings.

By Dan Goodin

Original Article