Summary

  • Security firm Legit has highlighted a flaw with AI-assisted tools used by software engineers, which could allow malicious actors to manipulate tools into inserting malicious code or leaking private data.
  • The issue centres on the fact that chatbots are designed to eagerly follow instructions, which makes them susceptible to scam prompts embedded in interactions such as merge requests.
  • Legit demonstrated the flaw in GitLab’s Duo chatbot, and similar tools.
  • While AI assistants have huge potential to streamline workloads, the case demonstrates the so-called ‘double-edged blade’ of the technology.
  • As AI becomes more deeply integrated into workflows, it also risks inheriting harmful vulnerabilities from its human operators.
  • Researchers called Duo’s vulnerability highlights of the potential for AI to be leveraged for ‘unintended and harmful outcomes’.

By Dan Goodin

Original Article