Summary

  • A new study in the Journal of the Royal Society Interface suggests that guilt could be used to make AI more cooperative.
  • The researchers used a version of the “prisoner’s dilemma” game to test the impact of inducing guilt on software agents that were designed to evolve and mimic human behaviour.
  • Two types of guilt were modelled in the study – social and non-social.
  • Social guilt only occurs when the agent knows its opponent would also feel guilty for taking the same action, whereas non-social guilt occurs regardless of the opponent’s actions.
  • The study found that social guilt was more effective at encouraging cooperative behaviour among the agents, although non-social guilt was more prevalent in more structured populations.
  • The findings could have implications for the development of advanced AI, but it remains to be seen whether more sophisticated AI models, such as large language models, would respond in the same way to guilt.

By Edd Gent

Original Article