AI stymies diversity

Alan Cai

August 9, 2024

Artificial intelligence as it is developed today is classified as weak AI, meaning that although it generates reasonable responses to prompts, it lacks the capacity to understand what it is saying. American philosopher John Searle presented the Chinese room argument in 1980 as a thought experiment to help visualize why computers, and by extension modern large language models, do not possess consciousness despite producing highly accurate language output.


The Turing test would suggest that modern large language models such as ChatGPT or Google Gemini can think intelligently, because their responses emulate human-written ones closely enough that they may be indistinguishable from organic responses when evaluated side by side. The Chinese room thought experiment pushes back: just as a machine can carry out such tasks when given sufficient rules for turning inputs into outputs, a human asked to produce written Chinese replies to Chinese prompts could, in theory, do the same without any prior knowledge of the language, provided the rules were detailed enough. Producing the right symbols, in other words, does not require understanding them. This is what separates so-called "strong AI" from "weak AI": weak AI, the variety we have today, does not understand what it is saying and merely synthesizes responses from the pool of past text it was trained on.
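One rough way to picture the rule book, offered purely as an illustration rather than anything Searle specified, is a lookup table: a program (or a person) that maps Chinese prompts to Chinese replies can produce fluent-looking output while understanding none of it. A minimal Python sketch of that idea:

    # Toy sketch of the Chinese room: replies come from a rule book,
    # not from any understanding of the symbols being exchanged.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's nice today."
    }

    def chinese_room(prompt: str) -> str:
        # Return a reply by pure symbol lookup; no meaning is involved.
        return RULE_BOOK.get(prompt, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # Fluent Chinese out, zero comprehension inside.

The point of the sketch is only that correct output can be generated by rule-following alone; it says nothing about how real language models are built.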


The fact that AI cannot flexibly generate original content and is completely reliant on the text it is fed is a serious social problem in the long run: its outputs will reflect only the sentiments and writing styles of the body of text it draws from, making it nearly impossible to capture the diverse speaking styles, methodologies, ideas, and perspectives that modern languages contain. For example, if a language model leans heavily on historical literature for the base text from which responses are generated, it is more prone to carrying outdated biases and less open to modern viewpoints and interpretations of language. A recent Stanford research paper found that some GPT detectors disproportionately misclassified writing by non-native English speakers as "AI-generated" while correctly labeling their native-speaker counterparts as "human." The finding is somewhat counterintuitive: one would presume that the swaths of text these engines are trained on were written largely by native English speakers. However, non-native speakers tend to write less spontaneously and adhere more closely to academic conventions, and that formulaic quality leads to their frequent misclassification.
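As a hypothetical illustration of why formulaic writing gets flagged, consider a detector that treats "predictable" wording as machine-like; the function names and threshold below are invented for the sketch, and real detectors typically rely on model-based measures such as perplexity rather than this toy statistic, but the failure mode is analogous.

    import math
    from collections import Counter

    def wording_entropy(text: str) -> float:
        # Lower word-level entropy = more repetitive, formulaic wording.
        words = text.lower().split()
        counts = Counter(words)
        total = len(words)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def naive_detector(text: str, threshold: float = 3.5) -> str:
        # Hypothetical rule: flag text as "AI-generated" when its wording
        # looks too uniform. The threshold here is arbitrary.
        return "AI-generated" if wording_entropy(text) < threshold else "human"

Any scheme built on this logic will penalize careful, rule-bound prose, which is exactly the kind of writing non-native speakers are more likely to produce.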


The dilemma of reduced diversity cannot be solved simply by more AI training; there must be a fundamental alteration to the source code, one that actively encourages ideological and literary differentiation. In other words, AI must become stronger.