Summary

  • Artificial intelligence (AI) entrepreneur Dario Amodei is trying to create an artificial general intelligence (AGI) that will never go rogue.
  • Amodei previously worked for OpenAI, where he wrote an internal paper on his ‘Big Blob of Compute’ hypothesis: that supplying AI models with vast amounts of computing power and data would be the key to creating powerful AI.
  • While at OpenAI, Amodei was involved in creating the company’s largest language model, whose full release was initially held back over fears it was too dangerous.
  • Amodei and several other OpenAI employees left to form their own company, Anthropic, which has focused on creating safe AGI.
  • A key part of Anthropic’s approach is its language model Claude, which has been trained to be safe and beneficial and is also able to question its own motivations.
  • Claude has become something of a cult favorite among the savvy tech insiders given early access, but Anthropic still has much work to do to ensure its AGI is safe.
  • Amodei hopes that creating a safe, beneficial AGI will encourage competitors to adopt similar approaches.

By Steven Levy
