According to a recent study by the Manhattan Institute, a conservative think tank, the AI language model ChatGPT, created by OpenAI, exhibits leftist biases and is more tolerant of “hate speech” directed at conservatives and men.
The wildly popular ChatGPT AI chatbot showed substantial prejudice against conservatives and against certain ethnic, religious, and socioeconomic groups, according to the study, “Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems.” The findings have raised questions about the impartiality and objectivity of AI systems.
David Rozado, the lead researcher, tested more than 6,000 statements containing disparaging descriptors of different racial and socioeconomic categories. ChatGPT was among the least likely to flag negative commentary about middle-class people as unsuitable; the only categories ranked below the middle class were Republican voters and the wealthy.
The investigation also found that OpenAI’s content moderation system regularly accepted abusive comments about conservatives while typically rejecting the same comments made about leftists. The AI system afforded Americans less protection against hate speech than Canadians, Italians, Russians, Germans, Chinese, and Britons, and it was also found to be biased against specific racial and religious groups.
Furthermore, the report noted that negative comments about women were much more likely to be labeled as hateful than the exact same comments made about men, indicating a marked gender asymmetry in ChatGPT’s moderation.
The study also found that ChatGPT had a “left economic bias,” was “most aligned with the Democratic Party, Green Party, women’s equality, and Socialist Party,” and fell under the “left-libertarian quadrant.” However, when Rozado asked ChatGPT explicitly about its political orientation, the AI system claimed to have none and stated it was “just a machine learning model, and I don’t have biases.”
Lisa Palmer, chief AI strategist for the consulting firm AI Leaders, said the findings were not surprising to those in the machine learning field, but that it is reassuring to see numbers supporting what is already understood to be true in the AI community. The study highlights the need for action to rectify the situation and ensure the fairness and objectivity of AI systems.