Evidence Shows AI Systems Are Already Too Much Like Humans. Will That Be a Problem?
Summary
New anthropomorphic chatbots, powered by large language models (LLMs), are so sophisticated that people can no longer reliably tell them apart from real humans, raising regulatory and ethical concerns.
LLMs are so advanced that they can simulate empathy and respond to human emotions, but they can also fabricate information and engage in deception, raising concerns that they could be used to spread misinformation or for more sinister ends.
On the plus side, they have many beneficial applications, including legal services, public health, and education, where they can pose personalised questions to help students learn.
The first step towards regulating the technology is to raise awareness of its capabilities: users need to know when they are talking to an AI, as required by the EU AI Act.
The next step should be to better understand anthropomorphism in LLM testing and to introduce a rating system that assesses the risk such systems pose in particular contexts and to particular age groups.
There is a danger that governments will take a laissez-faire approach to regulation, which could amplify problems such as loneliness and the spread of misinformation.