Discussion about this post

Scott Lipscomb:

I think the problems with, and difficulties of, an AGI (at least one that would evolve from LLMs) run even deeper than the issues you rightly identify here. LLMs interact with language only via syntax, ignoring semantics entirely. For anyone curious, I discuss this in a recent post, "The Absent Semantics of LLMs": https://mindyourmetaphysics.substack.com/p/the-absent-semantics-of-llms

Florin Flueras:

The twist might be, as Illich noticed, that even more than technology advancing to catch up with us, it is us becoming machinic in order to accommodate it. We increasingly see ourselves as information-processing beings: we think in terms of acquiring information from a book instead of having the experience of reading it, and so on with other cybernetic terms that have come to seem normal but are reductive, disembodying substitutes for the words (and associated experiences) they replaced. So there is more and more human–AI compatibility also because we are in an extraordinary process of disembodiment, having "medical bodies" (bodies as information, as techno-data coming from experts and devices, instead of felt experience) that think and behave more and more like AIs.
