Summary

  • A Dutch city’s effort to build a welfare fraud algorithm that was both fair and transparent highlights the difficulty of creating ethical AI that works in the real world.
  • Amsterdam developed Smart Check to assess the eligibility of welfare applicants, redirecting those flagged for potential fraud to further scrutiny.
  • The city consulted experts and ran bias tests, but it ultimately scrapped the model because bias persisted and because of questions over whether the system was proportionate.
  • Algorithmic systems for preventing welfare fraud often carry collateral consequences, such as systemic bias against poor people and minority groups.
  • While the trial was a failure, officials at least had the chance to learn from it, according to digital rights advocate Marietje Schaake.
  • The question remains whether a computer can ever make fair decisions about people’s lives, or whether humans are simply held to a lower standard.

By Eileen Guo, Gabriel Geiger, Justin-Casimir Braun
