ChatGPT, the AI chatbot that answers questions and solves problems, has become hugely popular since its launch, and hackers are already putting it to use for their own purposes.
The cybersecurity community is using it both offensively and defensively, for example to generate code that streamlines reconnaissance and information gathering. Reconnaissance is the initial phase of a hacking operation, and ChatGPT can automate its more repetitive tasks, such as Domain Name System (DNS) enumeration and searching publicly available databases for information on targets.
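As a rough illustration of the kind of reconnaissance script involved, the minimal Python sketch below checks a short wordlist of candidate subdomains against a placeholder domain. The dnspython library, the example.com domain and the wordlist are assumptions for illustration, not details from any reported incident.

```python
# Minimal DNS enumeration sketch using dnspython (pip install dnspython).
# The domain and wordlist below are placeholders, not real targets.
import dns.resolver

DOMAIN = "example.com"
WORDLIST = ["www", "mail", "vpn", "dev", "staging"]

resolver = dns.resolver.Resolver()

for sub in WORDLIST:
    fqdn = f"{sub}.{DOMAIN}"
    try:
        # Ask for A records for each candidate subdomain.
        answers = resolver.resolve(fqdn, "A")
        for rdata in answers:
            print(f"{fqdn} -> {rdata.address}")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        # Subdomain does not resolve; move on to the next candidate.
        pass
```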
Hackers are also using it to create more complex scripts and even bespoke hacking tools, while for legitimate investigative work it can summarise findings and alert users to key information uncovered during an investigation.
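On the investigative side, that summarisation workflow can be as simple as feeding exported case notes to a language-model API. The sketch below is only a hedged example: it assumes the official openai Python package (v1 or later), an OPENAI_API_KEY in the environment, and an illustrative model name and notes.txt file.

```python
# Hedged sketch: summarise investigation notes with a language-model API.
# Package, model name and file path are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("notes.txt", encoding="utf-8") as fh:
    notes = fh.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, not prescribed by the article
    messages=[
        {"role": "system",
         "content": "Summarise these investigation notes and flag key findings."},
        {"role": "user", "content": notes},
    ],
)

print(response.choices[0].message.content)
```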
There are also concerns that attackers could exploit the technology to make phishing scams more convincing, produce more sophisticated malware, or build AI-driven phishing chatbots that impersonate legitimate users.