ChatGPT, operating at its usual level of middling competence, just served up for me, when asked, a variant of the by-now familiar definition of Artificial General Intelligence: “AGI refers to an artificial intelligence system capable of performing any intellectual task that a human can do.” I would like to offer a brief argument as to why such a system is one human beings could never hope to have, or as to why, if they do wish to proclaim that they have it, this can only be the result of an entirely arbitrary determination.
As of March, 2025, ChatGPT is very good at helping me to polish translations from, or to analyze sentences in, e.g., Russian or Latin. It is not nearly so good at doing the same for sentences in Chuvash or Karakalpak. Unable to produce any useful information, it instead recycles the information I myself have already fed it, or it just bullshits outright, sometimes giving me examples in Turkish that it pretends are in one of these other Turkic languages. Chinese DeepSeek, interestingly, is already significantly better than OpenAI at assisting with research on Inner Asian Turkic and Tungusic languages — perhaps a hint of near-future geopolitical ambitions.
Now obviously, being able to speak Karakalpak is among the “intellectual tasks human beings perform”: there actually are Karakalpak speakers in the world to prove it. So we can already conclude from this alone that commercially available ChatGPT in March, 2025, has not yet achieved AGI, and that if there is AGI in March, 2027, it will be able competently to respond to my queries about the syntax, grammar, and vocabulary of Karakalpak.
Will such a level of fine-grainedness prove sufficient? I just said “there actually are Karakalpak speakers in the world”, but in fact things are not quite so simple. “Karakalpak” is the name we give to a way of speaking that is characteristic of about 870,000 people in a province of Uzbekistan, and to a lesser extent in neighboring regions of Kazakhstan and Turkmenistan. But it is mutually comprehensible, and arguably forms a dialect continuum, with other members of the Kipchak-Nogai subdivision of the Kipchak branch of the Turkic family. There could easily be a village out there of which it is an as-yet indeterminate matter whether the inhabitants are Karakalpakophone or not: they speak the way they speak in that village, and they get by just fine without a name for what that way is.
But what if it is precisely that village’s Karakalpak, or quasi-Karakalpak, or borderline-non-Karakalpak grammar, syntax, and vocabulary that I want my AI to help me analyze? After all, whatever they are speaking in that village is, patently, to be counted among “the intellectual tasks that a human being can do”. It turns out my AI is still not so general after all. Of course here you might object that you could quickly reconfigure the AI so that it could process this village’s language too. But there are always literally infinitely many such quick reconfigurations waiting to be made, and to that extent you could also call an AI that only knows English “AGI” already, on the grounds that you could quickly reconfigure it to process Chinese or Russian as the need arises. Everyone would know that’s a huge stretch, but the only difference between not knowing the village’s quasi-Karakalpak, and not knowing Russian, is a political one: Russian is a cosmopolitan and imperial language, with centuries of standardization, etc. Yet speaking it is no more a “task human beings can perform” than speaking that one village’s quasi-Karakalpak is. Tasks don’t become “more real” because more people perform them, or more people are aware that other people perform them. There is no Scala rerum of human tasks!
This is not just a point about Karakalpak. What I have described is the general condition of language for most human beings in most places and times, and it is nothing but an artifact of the institutionalization of official national languages over the past few centuries that creates the illusion of the robust existence of something called “French” or “Russian” or “Lithuanian” that an AGI might be said either to “know” or to “fail to know”. These abstract entities do not actually exist. They appear to exist as a result of the sustained human practices of, for example, publishing dictionaries, issuing spelling reforms, assigning grades to term-papers, and so on.
And we can get even finer-grained still, not just down to the level of villages, but of households and bedrooms. Consider a pair of five-year-old twin brothers who, when riding in the car with their mom through a rural area, decide to start calling the cows they see “bows” (pronounced like “cow” but with an initial “b”). They discover that this annoys their mother, or delights her; it doesn’t matter. Either way they discover that this variant causes the warm being who first gave them life to contract her forty-three facial muscles —which took shape over millions of years of hominid evolution for the transmission of maximum emotional resonance to conspecifics— in just the way they, the twins, are hoping to see. And so it sticks: bow, bow, bow! What fun! Or imagine lovers in bed late at night who switch out the “s” for a “c” when they wish to say, in the language of their most secret sweet nothings, that they’re drifting off into the dreamworld: “I’m cleepy”, the one says, and the other feels a surge of infinite affection, and drifts off too.
You might be aware of the ridiculous law in France, made by people who do not understand what language is —who believe there exists a discrete entity called “French”—, requiring advertisers to place an asterisk next to any foreign term in an ad, and to translate it in small print at the bottom. Some years ago there was an ad for sour neon gummy worms or something like that, offered for a limited time in a pink-colored variant. The ad said “Ça Pink!*” and then, in the small print: “*Ça pique, et c’est rose!” But that’s not actually a translation, since the English “pink” is of course not a conjunction of both stinging or pricking and being pink. The play on words only works if you pretend the English word is a French word, and if you explain it in the small print you lose whatever small magic the advertisers had succeeded in conjuring. Something similar, but stupider, would happen if you tried to “translate” the phrase “I’m cleepy” into proper English, obtaining something like: “I’m sleepy, and I love you.” Here again, the phrase as spoken by the lover is not in fact a conjunction, but more than this, the second conjunct in the “translated” version is something that could also be communicated by a touch, or a breath, or simply by the continuing quiescent presence of the other’s body. I suppose you could try to argue that even such things as these count as “signals” of some sort, but if that is what they are then they can only be classified as metacommunicative signals in Gregory Bateson’s sense — as structuring an animal’s world, but prior to language and without propositional content.
Neither are “I’m cleepy” and “Bow!” exceptional examples of how language works among human beings. They are the essence of language, while the minutes of board meetings, or the fine print of a work contract, are extremely late-arriving, highly specialized applications of this evolved capacity for affect-sharing, which happens in part through the articulation of phonemes, but in part also through gesture and facial expression. Language is typically given a name —“French”, “Lithuanian”, etc.— only when it ramifies out into uses such as meeting minutes or the job contract, which in the 21st century is tantamount to saying when there are documents written in these idioms on which AI has been trained or might soon be trained.
Is it likely that by 2027 AI will be able competently to analyze and translate a well-written sentence in Karakalpak? Of course it is. Is it likely that in 2027 AI will have mastered all the special strange emotional resonances that a certain register of that language has for a five-year-old speaker of a particular idiolect of it, as sui-generis as her own fingerprints? Of course not. But it is this, I want to say, that Karakalpak, or the nebulous and dynamic thing we try artificially to pin down under the label “Karakalpak”, primarily is.
Now, I’m not just reciting a familiar old complaint that AI “has no soul”. I’m trying to show that in order to suppose that AI can complete any task that a human being might want to complete, one must be operating with an extremely impoverished sense of what is meant by “task”.
Since the mid-20th century a model of both life in general and human intelligence in particular has prevailed that sees these ultimately as processes of information-transfer. Karl Popper felt confident enough in this to propose what is really just a partial metaphor as a definition of the human essence itself: “All life is problem-solving”. I don’t want to dispute this idea — much favored by some very close friends of The Hinternet! In the end living systems could well be but one kind of information system. The collective contributions of Norbert Wiener’s Cybernetics (1948), Alan Turing’s “The Chemical Basis of Morphogenesis” (1952), John von Neumann’s posthumously published notes on self-replicating automata (1966), etc., are compelling indeed. But there is at least some tension in this model of living systems when we attempt to account for human language by means of it — and here I suppose we return to the old problem for which, in 1771, J. G. Herder won a prize from the Berlin Academy of Sciences for his essay in response to the question: “Can the origin of language be explained on purely natural grounds?” My quick answer would be: Yes, it can be, but those “natural grounds” are not entirely comprehensible in any first-degree information-scientific terms.
One prominent theory of the origin of language in the primate phylogeny suggests that it is an outgrowth of grooming — that is, we once maintained our social bonds by picking lice from each other’s heads, just like many other primate species still do, and indeed just like the variant you can still see, as a form of social bonding, in some beauty salons. But at some point we learned how to “groom” one another at a distance, as far as the voice could carry, by whispering or singing or shouting our assurances of our social bond with the intended listener while not having, at least not on every occasion, to resort to physical contact. On this theory, language “solves a problem” — namely, how to maintain group bonds without constantly touching one another. But it would be a mistake to suppose that to understand how it solves the problem we must pay close attention to the informational content of the language. Most language, I am tempted to hypothesize, is metacommunicative. When the twins say “bow!” this is not primarily to refer to a cow with a new name; it is to engage in affect-rich play with a close conspecific, play for which they have no name, but which structures their world a priori.
Whether the grooming theory is correct in the particulars or not, it gets at something plainly true about human language that is overlooked in discussions of whether AI “knows” Karakalpak or not. The only sense in which it can ever know Karakalpak is the sense in which Karakalpak is conceived as a medium of information-transfer. But again, this does not seem to be the primary function of language, whether in its origins in hominid evolution or in its most common usage among 21st-century human beings: it’s just one very specialized application of language. We might even be tempted to say it is a side-effect of human linguistic ability that, out of our evolved capacity to signal affective bonds to one another through speech, gesture, and touch, we happened, fortuitously and belatedly, also to be able to develop the practices of record-keeping and contract-making. These practices are, to speak with Gould and Lewontin, but spandrels.
So my simple question is this: how could you possibly expect AI to “be able to do whatever a human being might wish to do” when the vast majority of things human beings wish to do do not have names (e.g., watching what happens on mom’s face when we replace the c with a b), have never explicitly been identified, and only exist to the extent that they satisfy a desire — not a desire to “solve a problem”, but a desire simply to have an emotional experience?
Again, this is not ghost-in-the-machine talk, and it’s not even particularly “touchy-feely West Coast” talk, to paraphrase John Adams (the composer). It’s just what you get from serious attention to our evolutionary legacy, and from a healthy dose of skepticism about the metaphors we inherited from the 20th century’s overweening pride in its breakthroughs in the field of information technology.
Now you might just say, “Well fine, but no one ever meant anything more by ‘AGI’ than ‘completing whatever task related to information-exchange a human being might wish to complete’.” But let me be clear here: my concern is not that we’re overestimating what machines might soon be able to do —I very much look forward to 2027, when ChatGPT will be able to help me analyze Karakalpak grammar!—, but that we are systematically underselling the common understanding of what it is that human beings in fact do.
We are now raising a generation of human beings who have come to believe of themselves that machines can do, or will soon be able to do, everything they as humans do, as well as or better than themselves. This proves that they have accepted the model of themselves as essentially information systems. They don’t know, or can’t make any sense of the fact, that they are boiling over with affect, let alone that this is the dimension of them that they would do well to focus on if they wish to get some kind of handle on the human essence.
We are not in fact in danger of the machines overtaking us in what we do best. What we do best is to mess with mom, and to whisper sweet nothings to our lovers. But we are in grave danger, at present, of misidentifying what we do best, indeed what we do alone, or to some extent in the company of other animals. It is this misidentification that most threatens to result in a tragic presumption that the machines have “won”, and that there’s nothing left to do now but surrender.
The other possibility we’ve considered is that AGI is not a pipe-dream, but an arbitrary “call”. This call, as I’ve already suggested, is necessarily political, and the politics implicit in it is ugly indeed. This is the politics that bulldozes everything local, everything intimate, everything singular and idiosyncratic and irreducible to statistical regularities — and tells us the only thing that is to count as human reality is what gets reflected back to us by our machines.
Come to think of it, there is one other scenario in which AGI would in fact be possible, and it is the least desirable scenario by far: a technological order of universal surveillance, greatly expanded not just by ubiquitous recording devices, but by transmitting neural implants, which would enable AI to know, as soon as we do, what the “game” is — for every little whim, for every switched-out consonant in our bedrooms late at night, it will know as soon as we do what the rules are, and be able to “play along”. It will be the constant third party to all our intersubjective affairs. It is hard to see how, under such circumstances, it could fail to weaken or destroy the affective bonds we have evolved over these many millions of years to seek out and to nurture and to love.
According to Eugen Weber, 50% of French citizens at the end of the 19th century did not speak French as their first language, and 25% didn’t speak it at all. The sheer variety of patois created over centuries by illiterate peasants and then lawn-mowered by the national project is staggering. JSR’s great recognition here is that the ability to create and experience this variety is the essence of language, not the mere informational content. We’ve already made Peasants Into Frenchmen. I hope we don’t make People Into Training Data.
Artificial Intelligence, with its advantages of convenience and its attributes of improved efficiency, is eventually accepted by human consensus as the universal mediator of communications and of the resulting social reality. As part of that process, the limitations of AI become the Arbiter of the ordained limits of social reality, discarding the Ineffable: the idiosyncratic, the sensual, the whimsical, the ingress of novelty from the realm of the formerly unknown. The ultimate triumph of positivism, as reified by the machine and distilled by the Algorithm. The removal of all human affect, so that all that remains is optimized performance utility. Qualia defined and confirmed as an entirely illusory construction, as considered through the processing of artificial intelligence. You know, as viewed "objectively."
That's one hell of a sci-fi premise.