My most recent piece on Artificial General Intelligence sparked some lively discussion, which caused me to think more deeply about the problem. Thank you, readers and critics, for that.
My principal claim, that AGI is impossible, set out from the presumption that “AGI” is accurately defined as “an artificial intelligence system capable of performing any intellectual task that a human can do.” The problem, as I see it now, is that this is never what AGI’s defenders actually mean. They would do far better to say that AGI is “an artificial intelligence system capable in principle of learning to perform any intellectual task that a human could do.”
Thus, for example, it need not already know Karakalpak in order to count as AGI; it need only be able, upon first encountering Karakalpak, to analyze its phonetic and grammatical structure, make educated inferences about its syntax, and learn to communicate in it dynamically, just as a human linguist might. And in principle the same goes for the point I was making about the secret intimate versions of Karakalpak or English known only within the confines of a single bedroom. And the same goes for the ritual administration of potions that a Malgache medicine-man performs, in utter secrecy, somewhere in the mountains of inland Madagascar. And so on. AGI does not need literally to know everything human beings already do; it only needs, in principle, to be able to learn anything human beings do.
Alright, fine. But if that’s all it is, why not state as much consistently and clearly? Why all the equivocation? I maintain that the reasons for the equivocation are irreducibly ideological — they are motivated by a concern to narrow the scope of what we think of human beings, qua human beings, as doing, so that the Malgache medicine-man gets left out of the fold, while the sad-sack at a desk passing his life filling in Excel spreadsheets and applying for corporate promotions and so on gets included within it. By switching the could out for a can, and by imagining under the can only the sort of things Western educated (post-)industrial information workers do, we are left, in the 21st century, with a grossly impoverished anthropological frame — one that indeed positions us perfectly for a machine takeover. If the only things we value about human beings are the things we are building our machines to do, then we are indeed fucked — and yet we’ve fucked ourselves not through technological innovation, but through overidentification with our technology.
This brings us to another reasonable criticism of my earlier essay: that it may have involved an equivocation of its own. I started out with a definition of “AGI” as “an artificial intelligence system capable of performing any intellectual task that a human can do,” but then, as I went on, I began speaking as though the definition had said, simply, “any task that a human can do.” That is, I let the “intellectual” drop out, and this is what enabled me to use such examples as the little boys who like to mess with their mom by saying “bow!” instead of “cow!” And from here in turn I launched into a long critique of the idea that human beings act exclusively or primarily in view of a concern to execute intellectual tasks.
On this I can only double down. If you are a tech developer always busy looking for ways to debug your code, and are not particularly introspective, you might indeed agree with Karl Popper that “all life is problem-solving.” If you are Clifford Geertz, or Claude Lévi-Strauss, or Emmanuel Levinas, you might notice that there is often quite a bit more going on in human behavior, and that it demands significantly more hermeneutic effort.
Another common definition of AGI, which hides within it a subtle contradiction, says that it is “an AI system that can execute any intellectual task that a human being wishes to perform.” Here is what my own hermeneutic effort as a lifelong humanist has revealed to me: Human beings do not wish to perform intellectual tasks. Human beings wish to make a name for themselves; they wish to have buildings and theorems named after them; they wish to get laid; they wish to prove wrong the abusive stepfather who told them they’d never amount to anything. Even when human beings are performing “purely” intellectual tasks, it is the fact that they are realizing a wish — which is to say, among other things, exercising their will in accordance with their desire — that gives us the real causal story of what they’re up to. I mean, look at this sweet kid, Aaryan Shukla, who just won a world competition for speed-adding four-digit numbers. He is just absolutely bubbling over with adolescent affect — nervousness, excitement, joy — and obviously the task he is completing has at least as much to do with his endocrine system as with his neurons.
To the extent that human intellectual tasks can be identified and isolated at all, they were already externalized into non-human systems long before the arrival of computers over the past century. Roman aqueducts, or indeed the Amazon rainforest as tended and sculpted over the millennia according to the interests and desires of its Indigenous people, already gave “proof-of-concept” of the ability of human beings to transfer their affect-driven wills into the world — and thereby to leave in that world the marks of their intelligence. In other words, the moment we begin talking about purely intellectual tasks we are already talking about tasks that have been separated off from the human beings who, for their own human reasons, had the idea of performing them.
There is of course another sense of “intellect” that has a long and venerable tradition in philosophy — where we might speak for example with Thomas Aquinas of the intellectus agens whose distinctive trait is the abstraction of universal concepts from sensory experience. But in its 21st-century usage, shaped principally by AI researchers, “intellect” is practically a synonym of “power”.
We can understand this in two senses — both the rich political one that Francis Bacon already articulated in identifying an equivalence between potentia and scientia; and also, more importantly for our purposes here, the more mundane sense of having a power to “do something” in the world, to bring about effects. The best way to bring about effects, of course, is with technology, whether in the narrow sense of gadgetry, or in the more capacious sense of “practice” or “skill” — téchnē — which would include such things as rational strategies of seed dispersal among Indigenous Amazonians.
Obviously, it cannot but serve the purposes of the tech industry to have us understand intelligence in this way.
The Hinternet has launched its First Essay Prize Contest. The question is: “How might new and emerging technologies best be mobilized to secure perpetual peace?” The prize winner will receive $10,000. And please help us to get the word out!
I think the problems with, and difficulties of, an AGI — at least one that would evolve from LLMs — run even deeper than the issues you rightly identify here. LLMs only interact with language via syntax, ignoring semantics entirely. For anyone curious, I discuss this in a recent post, "The Absent Semantics of LLMs": https://mindyourmetaphysics.substack.com/p/the-absent-semantics-of-llms
The twist might be, as Illich noticed, that even more than technology advancing to catch up with us, it's us becoming machinic in order to accommodate it. We increasingly see ourselves as information-processing beings. Think, for instance, of "acquiring information" from a book instead of having the experience of reading it, and so on with other cybernetic terms that we have come to see as normal, but that are reductive and disembodying alternatives to the words (and associated experiences) they replaced. So there is more and more human–AI compatibility also because we're in an extraordinary process of disembodiment, having “medical bodies” (bodies as information, as techno-data coming from experts and devices, instead of felt experience) that think and behave more and more like AIs.