By the time I finish writing this sentence, vastly more data will have been generated around the world than was produced in the entire 18th century, indeed in the entire history of humanity prior to around 1900.
This sounds like some crazy thing someone might say just to make small-talk at a party, but it’s literally true. In the late 1990s, all the books in the Library of Congress were estimated to contain around 20 terabytes of data (20 million books of roughly 1MB each). In turn, a figure I have come across repeatedly estimates that there were around 337,000 books published between 1700 and 1800, though it is not clear whether this figure includes books published in East Asian and South Asian languages, or only in the languages of Europe and its colonies. So let’s go ahead and double it, bringing the total number of 18th-century books to 674,000, at around 1MB each (let us say). And let’s double that figure in turn to include all the letters and notes and church baptismal scrolls and so on, so that there were, worldwide, a total of 1,348,000 MB of data, or 1.348 terabytes, produced in that hundred-year period.
In 2023, there were about 328.77 exabytes of data produced per day (the figure is surely much higher now, almost three months into 2024). One exabyte is equivalent to 1 million terabytes. At the average 2023 rate, this is the same as 13.69875 exabytes per hour, or 228.3125 petabytes per minute, where a petabyte is equivalent to 1000 terabytes. So, every minute in 2023 there were about 169,371 times more data produced than in the entire century that gave us, among other great achievements, the Encyclopédie.
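For anyone who wants to check my back-of-the-envelope math, here is a minimal sketch of the whole calculation, assuming the round figures above (1 MB per book or document, 1 TB = 1,000,000 MB, 1 PB = 1,000 TB, 1 EB = 1,000 PB); the 328.77 EB/day figure is the 2023 estimate cited above, not a measurement of my own:

```python
# Back-of-the-envelope check of the figures in the text.
# Assumptions: 1 MB per book/document, 1 TB = 1e6 MB, 1 PB = 1e3 TB, 1 EB = 1e3 PB.

books_1700s = 337_000 * 2            # double the 337,000 estimate to cover the rest of the world
mb_1700s = books_1700s * 1 * 2       # 1 MB per book, doubled again for letters, notes, registers, etc.
tb_1700s = mb_1700s / 1_000_000      # -> 1.348 TB for the whole century

eb_per_day_2023 = 328.77             # estimated global data production per day in 2023
eb_per_hour = eb_per_day_2023 / 24   # 13.69875 EB/hour
pb_per_minute = eb_per_hour / 60 * 1_000   # 228.3125 PB/minute
tb_per_minute = pb_per_minute * 1_000      # 228,312.5 TB/minute

print(f"18th-century total: {tb_1700s} TB")
print(f"2023 output per minute: {tb_per_minute:,.1f} TB")
print(f"Ratio: {tb_per_minute / tb_1700s:,.0f}x")   # ~169,371x
```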
Do you understand what this means? I am, right now, pissing into the ocean. Sure, I create a little cloud that brings about a few seconds of alteration in this nano-region’s pH balance. But the ocean doesn’t care, and almost immediately restores itself to its previous homeostasis. And if the ocean is increasingly threatened with a loss of homeostasis, this is not because of my own feeble stream, but because human industry is pumping massive quantities of plastic and toxic waste — the counterparts, in our ongoing analogy, to the massive quantities of AI-generated sludge that now constitute the great bulk of the data I attempted to measure, no doubt imprecisely, in the previous paragraph.
The figure of the “writer”, as opposed to his earlier ancestors such as the “scribe”, probably only emerged around the end of the 18th century. But obviously —obviously!— whatever we imagined that figure to be doing at that moment just cannot be what it is today, any more than pissing into the ocean can be reasonably interpreted to be the same sort of act as pissing into the bathtub.
Writing is dead. Dead, dead, dead! It had its moment, but now humanity is moving on to new and different things.
In other news, I’ve been relatively inactive here recently only because I have finally managed to get past my task-specific writer’s block, and seriously to get to work on my book, the manuscript of which is due this summer. It’s going smoothly, I feel inspired, my mind is racing with ideas, I’m in my zone, etc. This is good. Creation is still good. In fact it’s all we’ve got left. My book is going to be good. And anyhow I still get paid, no matter how much data will be circulating out there —much more than today, certainly— by the time it goes to press in early 2025, thanks to the zombie-like afterlife of a publishing industry that has not yet fully faced up to the world-historical transformations the data-revolution has wrought.
Have a great week!
—JSR
The ocean of data doesn't care. However, your writing is “a function of care” (to use Sam Kriss's definition of language), and despite all the toxic waste around, it makes its way to your readers. In this ocean of data, there are still seas full of life, seas that reflect stars and clouds, comfort and despair. Thank you, thank you. Writing is NOT dead.
This reminds me of Korzybski's "jumping Jesus phenomenon" as described by Robert Anton Wilson:
The Jumping Jesus Phenomenon is my name for the acceleration of information throughout history. I first heard of that through Alfred Korzybski, a Polish mathematician who invented a scientific discipline called General Semantics. Korzybski noted that information was doubling faster and faster, every generation. And he said we've got to be prepared for this, we've got to train ourselves to be less dogmatic and more flexible so that we can deal with change. He took the year 1 A.D. as his basic unit to calculate how long it took for the information available to human beings to double, and it took 1500 years... which brought us up to the Renaissance. Leonardo Da Vinci was in his forties, and the Renaissance was at its height.
I decided to call this unit a "Jesus": so, in 1 A.D. we had "one Jesus", in 1500 we had "two Jesus". The next doubling only took 250 years, so already you can see the acceleration factor, and by 1750 we had "four Jesus". The next doubling took a hundred and fifty years and by 1900 we had "eight Jesus". The next doubling only took fifty years and by 1950 we had "sixteen Jesus". The next doubling took only ten years, and by 1960 we had "thirty-two Jesus"; by 1967 we had "sixty-four Jesus", and by 1973 we had "128 Jesus". The latest estimate I've seen is by Dr. Jacques Vallee, a computer scientist who says that knowledge is doubling every year. But I heard that estimate, oh, five or six years ago. I saw something on the net recently, somebody estimated it's doubling about twice a year now.