AI-powered chatbots like ChatGPT are becoming an integral part of our daily work lives, supporting tasks such as creative work. However, this raises a critical question: what happens to our language when these chatbots and machine learning models are designed to reduce complex information into a single, straightforward response?
Linguistic anthropologist Sarah Dreher examines this issue in her discussion of Standard Language ideology—the belief that there is a single, correct way to speak English and that any deviation is inferior. Dreher suggests that this ideology has shaped the algorithms behind ChatGPT. To demonstrate how the chatbot handles different dialects, she shows that when prompted to draft an essay on the meaning of life, ChatGPT tends to infantilize and marginalize certain varieties, such as the Southern American accent, while favoring a more mainstream American accent.
In a related study, researchers found that while Generative AI enhances individual creativity, it also reduces the collective diversity of novel content.
As Dreher argues, it is crucial to recognize that machine learning models are built on human input—and that subjective biases are inherently embedded in their code. With this in mind, it is unsurprising that ChatGPT exhibits a preference for standard language ideology. This is troubling because studying the linguistic output of these chatbots reveals more than technological behavior: alongside replicating grammar and speech patterns, Generative AI also replicates the attitudes and ideologies that shape our language and how we use it, whether creatively or in everyday life.
Author
Mina Baginova