Who would AI vote for?

A New Zealand researcher has investigated the political biases of state-of-the-art AI chatbots and shown that fine-tuning can influence their political preferences. Eleven political orientation tests were administered to 24 conversational chatbots, including OpenAI's GPT-4 and xAI's Grok, with most returning left-of-centre results. By fine-tuning a large language model on targeted political material, the researcher was also able to create chatbots whose responses consistently scored as right-leaning, left-leaning, or centrist. The author notes, however, that the results are not evidence that these political preferences are deliberately instilled by the organisations building the chatbots.
