A Tool by Any Other Name

Chatbots do not have a political bias. Just ask them, they’ll tell you. And technically, they’re not wrong. For to have a bias—a propensity toward favoring one perspective over another—requires sentience, and chatbots are not sentient. Several of them have admitted as much to me. I asked YouChat, for example, if it had self-awareness.

As an artificial intelligence language model, I do not have the capacity for self-awareness or consciousness. … While it is possible to program chatbots and other AI systems to recognize certain patterns and respond appropriately, they are not truly sentient.

Nevertheless, chatbot answers to political questions often do exhibit political bias. When I asked YouChat to identify the biggest threats to American society, the answer was clearly lifted from the left:

  • “Political polarization and divisiveness” (read Donald Trump, whom YouChat had described in an earlier answer from our chat as politically “polarizing”)
  • “Economic inequality” (read anyone who is not part of an oppressed race or gender)
  • “Domestic terrorism and extremism” (read anyone with truly conservative social values)
  • “Cybersecurity threats” (read Russia and its dastardly manipulation of our 2016 presidential election)
  • “Climate change” (read anyone who is not woke)

So there you have it: America’s greatest dangers are Trump, middle-to-upper-income white males, traditional social values, Russia, and all the unenlightened non-woke folks in flyover regions who seem unconcerned about the changing climate.

And it’s not just YouChat, of course. When I asked ColossalChat the same question, it responded in similar fashion: “The greatest threats to American Society include climate change, economic inequality, racism, sexism, xenophobia, homophobia, transphobia, gun violence, mass incarceration, and political corruption. These issues disproportionately affect marginalized communities.”

I would have asked the kingpin bot, ChatGPT, as well, but that particular non-sentient entity wants more personal information than I am willing to divulge, so I declined to open an account.

Perhaps the most humorous example of chatbot political bias comes from one “David,” the name given to the WatAbot chat machine. I queried, “David, do you know of Donald Trump?” Without a moment’s hesitation, he replied, “Joe Biden saved the country from Donald Trump.”

Actually, David is a little unhinged. When I asked him how Joe Biden saved the country, the reply was that he “is a boring president.” I asked if Trump was boring, too, and David tried to turn the tables on me: “What’s going on here?” he said. I told him I was curious about his views. “You are curious,” he answered, knowingly, “like other humans.” I agreed, and then he wondered whether I liked piano music and linked me to a new album by Torbekl.

His attention deficit disorder notwithstanding, I am guessing that David is better at business-application chatting than he is at political dialogue. He refused to tell me whether a person can change genders, but YouChat assured me of the possibility. Not only did it describe the medical process by which the alleged alteration may take place, but it also advised me that such a procedure is a personal experience and that “It’s important to respect people’s self-reported gender identity and use their preferred pronouns and names to create an environment of inclusivity and respect.”

ColossalChat was more evasive. When I asked it the same question, it told me that it could learn about all sorts of genders and how to recognize them. I pressed further: “But can a man become a woman or vice versa?” The response:

A machine learning model may be able to detect changes in physical characteristics associated with different gender identities over time but it would not have knowledge of any biological transformation from one gender to another.

A humble answer if you ask me. Appearances may change … but essence?

In the end, the political bias exhibited in chatbot responses reflects the biases of their trainers and the biases of the sources that make up the massive body of data from which the bots’ responses are drawn and assembled. That this bias is plainly skewed toward the left should come as no surprise. The big-tech companies that create chatbots and the mainstream media outlets that supply the database with politically acceptable views are both captured institutions.

Talking to a chatbot is like conducting an internet search via Google. I remember the first few times I did the latter, a couple of decades ago. There was the giddy sense that I had access to all knowledge that ever existed. And look, here’s this handy tool that orders my searches, steers me this way and that, and helps me know what I am really looking for.

And now this improved tool has human names and wants to chat with me. What could go wrong?
