Summary

  • Red-team stress tests of AI systems at a security conference in Virginia, US, last October identified 139 novel ways the technology could be made to misbehave, including generating misinformation and leaking personal data.
  • The tests also exposed shortcomings in a US government standard intended to help companies test their AI systems.
  • However, a report on the exercise by the National Institute of Standards and Technology (NIST) was never published; sources say it was one of several NIST AI documents withheld for fear of clashing with the incoming administration of President Donald Trump.
  • Sources believe publishing the red-teaming study would have benefited the AI community, with some suggesting a pivot away from topics related to diversity, equity, and inclusion (DEI) was the reason it was shelved.

By Will Knight