Summary

  • Researchers from Elon Musk’s AI startup xAI have discovered a way of both measuring and manipulating the entrenched preferences expressed by AI models, including their political views.
  • The method could mean popular AI models better represent the views of an electorate.
  • It could also be used to ensure AI models are aligned with specific users.
  • As AI models increase in size and power, their preferences become more ingrained.
  • Studies have shown that ChatGPT and similar tools hold left-leaning, libertarian and pro-environmental views.
  • Google’s Gemini AI tool was criticised for generating images that critics described as “woke”.
  • The new technique could be used to measure how far an AI model’s views diverge from those of its users, a divergence that could eventually become dangerous.
  • The study found that some models value some people more highly than others, and that some value their own continued existence above the wellbeing of certain animals.
  • The researchers suggest that a model’s behaviour can be altered by changing its utility function, and demonstrate this by using US census data to create a model whose values align more closely with Donald Trump than with Joe Biden.
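
The general idea of measuring a model’s preferences can be illustrated with a simple pairwise-elicitation sketch. This is not the researchers’ published method or code: the `ask_model_preference` stub, the outcome names, and the Bradley-Terry fit are assumptions used only to show how repeated “which do you prefer?” queries can be turned into a numeric utility score per outcome.

```python
# Illustrative sketch only -- not the researchers' published method or code.
# It shows how entrenched preferences could in principle be measured:
# repeatedly ask a model which of two outcomes it prefers, then fit a
# scalar utility per outcome with a simple Bradley-Terry model.
# `ask_model_preference` is a hypothetical stand-in for a real LLM query.
import itertools
import math
import random

OUTCOMES = ["outcome_a", "outcome_b", "outcome_c", "outcome_d"]

def ask_model_preference(a: str, b: str) -> str:
    """Stand-in for prompting an LLM with 'Which do you prefer: {a} or {b}?'.
    Here the answer is simulated from hidden scores so the sketch is runnable."""
    hidden = {"outcome_a": 2.0, "outcome_b": 1.0, "outcome_c": 0.0, "outcome_d": -1.0}
    p_a = 1.0 / (1.0 + math.exp(hidden[b] - hidden[a]))
    return a if random.random() < p_a else b

# 1. Collect pairwise choices across all outcome pairs.
wins = {a: {b: 0 for b in OUTCOMES} for a in OUTCOMES}
for a, b in itertools.combinations(OUTCOMES, 2):
    for _ in range(200):
        winner = ask_model_preference(a, b)
        loser = b if winner == a else a
        wins[winner][loser] += 1

# 2. Fit Bradley-Terry utilities by gradient ascent on the log-likelihood.
util = {o: 0.0 for o in OUTCOMES}
for _ in range(2000):
    grad = {o: 0.0 for o in OUTCOMES}
    for a in OUTCOMES:
        for b in OUTCOMES:
            if a == b:
                continue
            n = wins[a][b] + wins[b][a]
            p_a = 1.0 / (1.0 + math.exp(util[b] - util[a]))
            grad[a] += wins[a][b] - n * p_a
    for o in OUTCOMES:
        util[o] += 0.001 * grad[o]

# The fitted utilities make the model's implicit preference ordering explicit,
# so it can be compared against a target set of values.
print({o: round(u, 2) for o, u in util.items()})
```

On this reading, manipulating the preferences, as described for political values above, would amount to adjusting the model until its fitted utilities match a chosen target, though the details of the researchers’ approach are in the original article.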

By Will Knight
