Our narrowest purpose here is to go some way towards understanding what exactly Robert F. Kennedy Jr. represents. More particularly, we would like to attempt an explanation of the meaning of this conspiracy theorist’s likely imminent rise to a position with considerable power to shape majority views about what constitutes scientific truth.
We have however a number of preliminary considerations to address before we get there, so let’s not waste any time.
1. RIP Science (c. 1610 - c. 2024)
Most of us are familiar by now with the largely true truism that the internet has been toppling, or weakening, or transforming beyond recognition, one institution after another — newspapers, Hollywood, the academic humanities, book publishing, electoral politics, banking, the Masons and the Moose Lodge, remunerated labor, human-to-human pair-bonding, &c… There remains one institution that has been presumed to be simply too big to fall to the same force, and it’s the institution that is perhaps most strongly associated with the rise of the modern world. Yet it should not be surprising, when we stop to think about it, that as we now pass into a world very different from the one that it made sense, for a few centuries, to see as constituting a single chapter of history under the label of “modernity”, we should also be seeing the institution at modernity’s center fading away. The institution called “science”, it now seems, will turn out to have a beginning and an end, a rise and a fall, bookended at its inception by the print revolution and the new possibilities for the circulation of information that this opened up, and at the other end by the subsequent information-technology revolution of, let us say, the past 25-50 years. Science is coming to an end, that is, in the same way, and for the same reasons, that it began: as a result of these same tremendous transformations in the way information circulates. These transformations, both in the early modern period and today, have been so comprehensive as to transform along with them our understanding of what information is, and of what its appropriate objects are.
To be clear, when we declare that science is dying, or when we say in a more whimsical spirit things like “RIP Science (c. 1610 - c. 2024)”, we do not mean that there can be nothing to either side of these dates that might be called “science”, nor that nothing lacking that label can look like what, between those two dates, has most commonly earned that appellation. We are aware of course that Scientia long denoted any comprehensive body of knowledge, with its associated experts, so that well after the “scientific revolution” —which we consistently scare-quote here, in agreement with the historiographical consensus that such an event, as Steven Shapin writes, “did not take place”—, we continue to find “science” being used, especially in German and French, in such fixed premodern expressions as “theological science”, which, whatever may be said for it, certainly does not take place in laboratories, nor does it even pretend to uphold empirical reproducibility as the coin of the realm. One of our Editorial Board members works in a university research unit alongside specialists in “ancient science”, including for example Babylonian astronomy and ancient Chinese calculations predicting the next eclipse, and none have any qualms about calling these pre-modern, pre-Baconian practices by that name. Nor do we doubt that many different activities will continue to be called “science” into the foreseeable future (though in fact precisely none of the future is truly foreseeable). But that conserved name, we contend, will become ever more vestigial, just like the “philosophy” that is referenced in the title of the Philosophical Transactions of the Royal Society, a journal founded in 1665 and still produced today, whose most recent articles include discussion of such things as the evolution of flavonoid biosynthesis. What does that have to do with philosophy? some of this journal’s readers might ask themselves from time to time, though probably not nearly as often as we would like. Readers of Science, in the future, might find themselves asking an analogous question.
Our contention, then, is that “science” as it was known from the time of the “scientific revolution” to a moment of history still in the living memory of most of us, was fundamentally the activity of collectively, experimentally, and systematically “taking the measure” of the world around us. It was mostly a probing into nature, usually through the mediation of scientific instruments, but always with the aim of uncovering new powers that nature herself, to speak with Pierre Hadot, explicating a fragment from Heraclitus, had previously sought to keep veiled. It was the enumeration of drafts of what was to be the ultimate list of “things there are”. And it was an activity focused mostly on the non-human elements of reality, but was all the same an irreducibly human endeavor. It was, in short, the disciplined confrontation of human beings with the world. Between 1610 and 1834 —the latter being the year William Whewell coined the English-language neologism “scientist”, on the model of the older word “artist”—, this confrontation, initially a pastime of self-selecting gentlemen, matured into a set of values widely shared by “non-scientists”, and also into august institutions with strict rules for the advancement of their members up through the ranks.
It is this sort of confrontation with the world, as well as the associated values and institutions, that are now dying away, we contend, in the wake of the information revolution our newest digital technologies have unleashed.
2. Haruspicate and scry
What are some signs that this is in fact what is happening? Some of us have been struck over the past few years, in our discussions with leading neuroscientists, by the frequency with which we hear from them the half-complaining, half-excited report that “the AI people” have effectively taken over their discipline’s academic meetings. It seems at this point that you can learn more about biological brains by avoiding the brains themselves, and looking at brain-like systems of our own invention. Likewise, in a very different field, we were intrigued (and also confirmed in our predictions) when this year the Nobel Prize in Physics was awarded to John J. Hopfield and Geoffrey Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks”. Hmm, is that physics? some of us wondered. But again we thought of flavonoid biosynthesis, and wondered in turn whether that is philosophy. And having asked this further question, we assured ourselves that, well, things change.
In one field after another, it seems, modeling and simulation have become the new gold standard. You can spend a career as a specialist of the Bering Strait migrations that led to the populating of the Americas without ever having to inspect a stone tool or carved bone, but instead running thousands of simulations that individually tell you how the crossing may have happened, and that together, in the sum of their outcomes, are believed to tell you how it probably happened. For that matter, you can also make a career for yourself in the study of electoral politics without having to spend all that much energy trying to remember, e.g., what a “Whig” once was, and instead running 80,000 simulations of an event that hasn’t happened yet at all. The fact that, for near-future events, this sort of endeavor appears about as useful as haruspicy does not seem to give much pause to those who would use it to advance our knowledge of the actual distant past. But this may be, as we will presently see, because knowledge of the actual past, the one that left those stones and bones where they lie, is becoming ever less important, while coming up with models that are otherwise satisfactory —otherwise, that is, than in their power to tell you wie es eigentlich gewesen, how it actually was— is becoming correspondingly more important.
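For readers who have never watched this method at work, here is a deliberately toy sketch, in Python, of the aggregate-the-runs logic just described. Every name and number in it is invented for illustration; it corresponds to no actual archaeological or electoral model, and is offered only to show the shape of the procedure.

```python
# A toy Monte Carlo sketch (hypothetical parameters throughout) of the
# "run thousands of simulations and read the truth off their distribution"
# procedure described above. It models no real data about any migration.
import random


def simulate_crossing(rng: random.Random) -> bool:
    """One imagined run: does a band cross before its supplies run out?"""
    daily_progress_km = rng.gauss(15, 5)         # assumed average daily travel
    crossing_distance_km = rng.uniform(80, 120)  # assumed length of the route
    days_of_supplies = rng.randint(5, 12)        # assumed provisioning
    return daily_progress_km * days_of_supplies >= crossing_distance_km


def estimate_success_rate(n_runs: int = 10_000, seed: int = 0) -> float:
    """Aggregate many runs into a single 'how it probably happened' figure."""
    rng = random.Random(seed)
    successes = sum(simulate_crossing(rng) for _ in range(n_runs))
    return successes / n_runs


if __name__ == "__main__":
    print(f"Share of runs in which the crossing succeeds: {estimate_success_rate():.1%}")
```

The point, for our purposes, is not whether such a toy is any good, but that the object of study here is the distribution of outcomes of the model itself, not the stones and bones.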
In their influential book Objectivity, Lorraine Daston and Peter Galison propose that scientific objectivity, as a value and an ideal, is much later in arriving than we generally suppose. For most of the period following the “scientific revolution”, the prevailing value was what the authors call “truth-to-nature”, in which researchers consciously and intentionally represented the objects of their study —a species of flower, say— as more perfect, as less compromised by the inevitable flaws and blemishes of their own individual representatives, than they in fact were. The idea that we need to get at the irreducible singularity of the things and events of the world, they argue convincingly, that we need to see the world as it is rather than in some cleaned-up representation of it, is one that only really takes hold with the advent of photography in the mid-to-late 19th century. This was, they contend, the high period of objectivity, or more precisely of what they call “mechanical objectivity”.
But it didn’t last long. By the 1930s, they argue, scientific instrumentation, such as the cloud chamber, had become sufficiently complex, and its mediation between the human observer and the world had in many cases become sufficiently convoluted, that willy-nilly the scientist had to set himself up as a sort of interpreter. This work of interpretation relied on more than just reporting what is found on the ticker-tape of some mechanical read-out or other, as it also often involved a skill quite a bit more like intuition, the sort of thing you have to deploy when you are, say, a customs agent trying to “sniff out” a smuggler, or, to use the classic example from the philosophy of science, a chicken-sexer whose job it is to determine in an instant whether the little ball of yellow feathers rolling down the conveyor belt at the battery chicken farm is a newborn male or a female. Already by the early 20th century, then, the actual practice of scientists —again, like it or not—, had taken a form that by certain measures looked an awful lot like the work of the haruspex — exactly the kind of shady pre-modern figure from whose authority science was supposed to be delivering us.
Daston and Galison’s book was published in 2007. At the very end, they acknowledge that their work cannot be definitive, because science is always undergoing transformations, and it may be expected to continue to move, from truth-to-nature to mechanical objectivity to “trained judgment” to whatever chapter of its history comes next. They believe they are able to limn the beginnings of that new chapter, and they characterize it, precisely, as one in which “representation of nature… gives way to presentation: of built objects, of marketable products, even of works of art”. This new fusion of science and engineering, they claim, is one that is in the course of generating a new ethos of its own, and that is “disturbing professional identities left and right”. They see this transformation, this “fusion of science and engineering”, finally, as an effect of the rise to dominance in scientific research of computer simulation.
We are not sure we agree with their account of what is likely to come after trained judgment (RIP, c. 1930 — c. 2007), but if there is disagreement here this may be because the turn things were taking at the dawn of our present century would not yet become fully clear for another two decades or so. Daston and Galison knew something was coming to an end, and were beginning to be able to make out what was arriving. But the end in question, we think, may in fact be much bigger than it could be made out to be in 2007. It may not simply be another chapter in the modern history of science, but again, the end of its long three-chapter history, and the beginning of a whole ‘nother book.
We are somewhat skeptical that their analysis was entirely compelling even in 2007. After all, science and engineering were always fused, and herein lies the true genius of Francis Bacon’s “Knowledge is power” — that it means at least three things at once. First, it means that scientific discovery itself enhances the mojo of the discoverer. But it also means that scientific discovery translates as if automatically into power tools, that is to say new technologies that can help a person or a collectivity to secure and maintain power. Finally, it means, or can mean, that whoever has the power is the one who gets to say what knowledge is. Everyone from Michel Foucault to DARPA has found something in that versatile slogan to which to hitch their whole program, as Bruno Latour delighted in pointing out.
But the greater problem with Daston and Galison’s history of the present is that simulation is precisely not engineering. It doesn’t do anything in the world at all, in fact. It is by a corollary of this very same point, curiously, that we can refute, without effort and simply in passing, the fantasy among many Silicon Valley types and their academic-philosopher courtiers, which imagines that as computers get better and better at modeling human brains, at some point they’re going to “bust out” and start having inner qualitative experiences, self-consciousness, fear of death, etc. But that is obviously no more likely to happen than for water to start dripping out of your laptop when you run a simulation of the hydrodynamic flow of a river.
We cannot possibly outdo Objectivity, not in a book of our own, let alone in a single essay. Theirs is a masterpiece. If we could however offer an alternative schema for consideration, it would eliminate trained judgment as a phase, and instead interpret it as something that admixes into the other phases, assuredly in some times and places more than others. After all, how else but by trained judgment did Galileo infer that there are spots on the sun? We would instead posit three phases of history, each with different ideas about what inquiry is supposed to do, and consequently with different sets of “epistemic virtues”. Each of these three phases would be linked to a particular revolution in information storage.
The first two phases are much as Daston and Galison suggest: the printing revolution led soon enough to the primacy of the ideal of truth-to-nature in scientific inquiry, and to the common illustration, the Orbis pictus, as the ultimate vehicle of truth. The second revolution was in photography and sound recording, and soon enough in the recording of moving images too. This remained the state-of-the-art technology in shaping what was broadly conceived to be the truth —the photograph or the tapped phone-line as the ultimate standard of proof and the supremely valid “receipt”—, until the end of the 20th century, and throughout this period, as far as we can tell, objectivity remained the prevailing epistemic virtue. This began to change only when informatics began to replace physics as the Prima Scientia, a transition that was perhaps completed, at least symbolically, when in 2024 the Nobel Prize in Physics went to a pair of computer scientists. In this new phase, which we are daring to call post-science, it is not so much that science and engineering fuse for the first time, or that science begins to construct its own objects, but that the form of inquiry that succeeds science for the most part ceases to have external objects at all. It studies objects of its own making. These are generally digital objects, and the instruments that are used to study them are digital instruments.
The world, now, with its stones and bones, falls away. Even or perhaps especially particle physicists come to appear ever more at ease in acknowledging that they are probably not really “getting to the bottom of things”, that is, they are not expecting to deliver up to us, after just one more round of research funding, the final list of elementary particles that serve to compose the particles we had previously thought were elementary. What they are after rather, a skeptic might worry, is the funding itself. But such a system, of funding and research results and more funding and more research results, can continue, presumably, only when the broader cultural understanding of what science does has sufficiently shifted away from an expectation of getting to the bottom of things that the gatekeepers who make the funding decisions will no longer be primed to detect when the researchers are, as they used to say, “multiplying entities”. You might reply, if you are less skeptical than we are, that there is nothing unparsimonious about hypothesizing entities when we realize they “must” exist in order to “make the math work”. But that only pushes the question one step further back, and we are still required to sort out why it was that particular math, and not some other, that the scientific community had decided, at some prior stage, had to work.
Whatever the answer to that question is, we believe it is by now simply obvious that one should no more expect to get the definitive list of “things there are” from the researchers at CERN than from a simulationist who tells you the world is more bit-like than it-like. It is not as if there is a true antinomy between these. It is not that we at The Hinternet have come around to believing that the simulationist arguments are “better” than we used to imagine, but only that they are increasingly coming to seem harmonious with the spirit of the times. One of their most prominent defenders, in fact, just bought his way into some kind of role in the running of the US government, and therefore in the running of the world.
What the particle physicists failed to account for is that from the beginning we had a choice as to which kind of elementary units of reality we wished to focus our attention on. At just the same moment in history when the elementary particles, as we kept probing downward, revealed themselves to be much less particle-like than we might have hoped, attention began to shift from them as the potential answer to the question “Out of what is reality made up?”, to the elementary units of data. Informatics, if we may borrow the French term for computer science, had begun its usurpation of physics as Queen of the Sciences. Except that it is not really a science at all, if by that term we continue to wish to describe a disciplined confrontation of human beings with the world.
3. “Broken mental models”
But let us move back a bit closer to the more familiar and everyday. The member of our Board who teaches history and philosophy of science notices that the shift we are describing is an echo, in various ways, of the broader phenomenon of cultural recursion he has sometimes sought to elucidate. He notes that there are few students these days who want to study, say, Assyrian clay tablets. Instead they want to study the “shaping of scholarly identities” among 19th-century German Assyriologists. They don’t want to study stones and bones, but rather “the didactics of prehistory under the Third Republic”, and so on. We shouldn’t say “want” here, for the students are only going where their culture draws them — or, we fear, where it pushes them. And the message our culture has by now settled on is that the world, itself, is exhausted. We’ve squeezed everything out of it we can, and the only thing left to study is the studying itself. And similarly for the Bering Strait migrations, or the habits of voters, or indeed the workings of the brain or the conditions of the universe picoseconds after the Big Bang: the world falls away, and only the models remain.
“Model” also happens to be an important word in recent political discussions, particularly among a certain subset of normie Boomers who are trying to make sense of “what just happened”. Thus our favorite Boomer bellwether, David Brooks: “Many of us are walking around with broken mental models.” Cool metaphor, David, but could it be that the idea that thinking is reducible to “modeling” is in part what got us into this mess in the first place? As has often been noted in this space, we are currently in a golden age of data visualization, and there are significant signs that graphs and charts are now so easy to produce, and look so stunning with so little effort, that our students are spontaneously defaulting to the use of them in place of natural-language sentences. But what those sentences can sometimes convey, at their best, is thought that is not reducible to data. Such thought used to be the special concern of the humanities.
It is a strange irony that the swan song of science is so often sung, today, by academic humanities professors, as well as those who broadly share the same cultural points de repère — the same politics, the same conversational reflexes, the same tote bags. What exactly, we have now come to the point in the essay where it makes sense to ask, did those “Trust the Science” yard signs of the recent past really mean?
The progressive left’s recent deviation into science-mongering is so hot and heavy with irony that it can be difficult to hold onto it long enough for serious examination. We may at least say that it represents desperation, and a tragic descent into dogmatism. It’s always been hard for our Editorial Board at least to hear “Trust the Science” as demanding anything other than: “Trust us”. If asked why one should do so, the only truly plausible answer is the one that none who mouth this phrase could ever actually give: “Trust the science because we are the ones in power, or because it is through our claims to a knowledge-monopoly that we hope to retain power, or better to secure it.” Behind the new slogan, we mean, one discerns the faded ink of Bacon’s old one in its most aggressively Foucauldian interpretation.
Why could they not just come out and say that? Over the past few years the progressive left has undergone a tremendous shift that it will take us some time to map, but that at least permits us to say right away that, honestly, they used to be a lot more fun. The “spirit of play” that once seemed licensed by the various subspecies of postmodernism, by which so many academic humanists made their careers, was replaced by what is at least on the surface an astoundingly naïve realism about nearly every matter of progressive concern. For example, it is not so long ago that questioning of certain pieties could be easily swatted down by phrases like “full stop”, an intensifier added to such claims as “trans women are women”. If only there were a punctuation mark for that, some of us thought at the time: the “dogma mark”, we could call it. The surface meaning of this formulation was that the matter is so thoroughly settled that no one with a share of reason could continue wondering whether there is not something more to the story, whether there is not some anthropological complexity to the way culture processes gender identity, such that the claim now marked with the “full stop” might never have come to seem salient at all, without this absence having compromised the well-being, safety, or equality of trans people. Look, we mean, there are surely plenty of people out there who go on asking “why” questions in bad faith, but that doesn’t change the fact that if our intellectual culture is going to continue valuing the autonomous life of the mind, which includes both the free play of the imagination and the liberty to acknowledge honestly when someone else’s claims just don’t seem to add up — if this is going to happen, we say, there simply can be no full stops.
So now, what has happened, and this really should not surprise us, is that the free play of the imagination has mostly migrated to the opposite end of the political spectrum. A minor counterculture figure like Salomé, aka Pariah the Doll, is out there busily demonstrating that at this point it is the right that is best positioned to have a go at genderqueer jouissance. Even the Pulitzer Prize-winning writer Andrea Long Chu has effectively come out and declared, although most of her admirers are not yet in a position to understand the implications, that we may as well go ahead and see, or return to seeing, trans identity as an existential project, rather than as a sober acknowledgment of brute natural facts. We may imagine that it was in some measure this intervention on ALC’s part that led some months later to that similarly acronymous New York woman-to-watch AOC’s removal, just this past week, of the pronouns from her social-media bios. We doubt this removal will lead to any confusion or social faux-pas, as we presume that more or less everyone has the “trained judgment” necessary to “sex” AOC as she would prefer.
We make no secret here at The Hinternet of our firm belief that existential projects, along with all their associated “matters of concern” —from the viruses one might mask up against or refuse to mask up against, to the hormones one might seek out therapy to replace— are far more interesting than brute natural facts. We feel terribly alienated from that great majority of our peers, and would-be bosom-friends, who have taken flimsy refuge in the latter. We see the current historical moment, moreover, as largely defined by the contest between these two radically different ways of anchoring oneself in the world. We fear, too, that one of our noisiest “broken models” is the one that has supposed, and in many cases continues to suppose, that postmodernism, or the relative advantage of discursivity over brute natural facts, is the natural home of the progressive left. What we are witnessing right now in fact might well be the definitive triumph of postmodernism, just not like anyone thought it was going to go down. It is a triumph of postmodernism made possible by supremely reactionary forces.
And this brings us, finally, to RFK, Jr.