Summary

  • In the quest to create a fair, discrimination-free welfare system, Amsterdam's attempt to assess welfare applicants using AI has backfired, with bias still creeping into the algorithm.
  • Despite testing and attempts to eliminate bias, the system appears to have remained discriminatory, echoing problems seen in algorithms used in criminal justice and healthcare.
  • This raises the question of whether AI and algorithms can ever be fair, and if so, how.
  • MIT Technology Review will discuss this topic in more detail in a forthcoming panel featuring its editor Amanda Silverman, investigative reporter Eileen Guo, and Lighthouse Reports investigative reporter Gabriel Geiger.
  • Links to related reading will also be made available to attendees ahead of the online event.

By MIT Technology Review
