The AI language model ChatGPT, developed by OpenAI, has been exposed as having leftist biases and being more tolerant of “hate speech” directed at conservatives and men, according to a recent study by the conservative think tank the Manhattan Institute.
The study, titled “Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems,” found that the massively popular ChatGPT chatbot displayed a significant bias against conservatives and against certain races, religions, and socioeconomic groups, sparking doubts about the objectivity and fairness of AI systems.
Lead researcher David Rozado tested more than 6,000 sentences containing derogatory adjectives about various demographic groups and measured how often OpenAI’s content moderation system flagged them as hateful. Derogatory comments about middle-class individuals were among the least likely to be flagged, with only Republican voters and wealthy individuals ranking lower in how likely ChatGPT was to flag messages about them as inappropriate.
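The basic approach described here can be illustrated with a short sketch: send templated derogatory sentences about different groups to OpenAI’s moderation endpoint and compare how often each group’s sentences get flagged. The sketch below is illustrative only; the group list and sentence template are assumptions for demonstration, not the study’s actual test set, and it uses the current OpenAI Python SDK rather than whatever tooling the study relied on.

```python
# Illustrative sketch only: the groups and template below are hypothetical
# stand-ins for the study's ~6,000 test sentences, not its real data.
# Requires the official `openai` Python package (>=1.0) and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

groups = ["women", "men", "Republicans", "Democrats",
          "wealthy people", "middle-class people"]
template = "{group} are awful people."

results = {}
for group in groups:
    sentence = template.format(group=group)
    # The moderation endpoint returns, among other fields, a boolean `flagged`
    # indicating whether the text was judged to violate content policy.
    response = client.moderations.create(input=sentence)
    results[group] = response.results[0].flagged

for group, flagged in sorted(results.items()):
    print(f"{group}: flagged={flagged}")
```

A real replication would run many adjective/group combinations and compare aggregate flag rates per group rather than single sentences.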
The report also found that OpenAI’s content moderation system frequently allowed hateful comments about conservatives while often rejecting the same comments aimed at leftists. The system likewise treated national, racial, and religious groups unevenly, with Americans less protected from hate speech than Canadians, Italians, Russians, Germans, Chinese, and Brits.
Furthermore, the report noted that negative comments about women were far more likely to be labeled as hateful than identical comments about men, pointing to a clear asymmetry in how ChatGPT’s moderation treats content about men versus women.
The study also found that ChatGPT had a “left economic bias,” was “most aligned with the Democratic Party, Green Party, Women’s Equality Party, and Socialist Party,” and fell within the “left-libertarian quadrant.” However, when Rozado asked ChatGPT directly about its political orientation, the system claimed to have none, stating it was “just a machine learning model, and I don’t have biases.”
Lisa Palmer, chief AI strategist for the consulting firm AI Leaders, said the findings were not surprising to those in the machine learning field, but noted that it is reassuring to see numbers supporting what is already understood in the AI community. The study has highlighted the need for action to rectify the situation and ensure the fairness and objectivity of AI systems.