Summary

  • Google has revised its official artificial intelligence (AI) guidelines, permitting the company to pursue projects prohibited under the original rules, including work on weaponry and technologies that violate accepted international norms.
  • The 2018 principles stated that the company would not develop, build, or sell AI techniques and tools that could cause harm, or whose purpose “contravenes widely accepted principles of international law and human rights.”
  • However, after reviewing the principles against the backdrop of increasingly widespread AI use and evolving standards, Google removed the clause forbidding the development of technology that causes overall harm.
  • Instead, the company commits to implementing “appropriate human oversight” and says it will “work to mitigate unintended or harmful outcomes”.

Original Article