Summary

  • AI startup Anthropic has updated the terms of use for its Claude chatbot to prevent dangerous activities from being carried out via the AI assistant.
  • The firm has banned the use of Claude in the development of biological, chemical, radiological, and nuclear weapons.
  • The update also tightens restrictions around cybersecurity, addressing the risk of the chatbot being used to launch cyberattacks.
  • Anthropic also added extra safeguards when it launched the Claude Opus 4 model in May, making the system harder to ‘jailbreak’.
  • The company also revised its policies on political content, now prohibiting uses of Claude that disrupt democratic processes rather than banning political campaign and lobbying content wholesale.

By Emma Roth
