The emerging consensus among commentators I take seriously is that the actions of the Trump regime so far mark a massive structural shift in how our world is ordered, one that has been inevitable for some time, one whose current reactionary inflection is significant but far from the full story of what is happening, and one in which Trump himself by now has more the character of a weaponized meme than of a properly historical actor.
Adding to John Rudisill's comment: I've been exceedingly dismayed by the way naive technophilia has led so many faculty colleagues in the sciences to embrace AI. I actually suspect it's the ultimate motive behind the hostile takeover of the US government, and universities should take heed. Several observations:
1) Back in 2023, the hype surrounding AI exploded just a few months after the cryptocurrency bubble burst. That seemed very suspect to me, since AI has been around for decades.
2) Biden-era regulations on speculative investments and lack of public enthusiasm mean that the tech bros, who invested billions in crypto, will lose their money without deregulation and a big push from the government. Similarly for AI: corporations have invested heavily, which has led to a speculative bubble, and Chinese technology is now a threat, which makes sense of Trump's protectionism and cozying up to Russia.
3) The data centers that enable AI are a horrible energy sink. According to the IMF (International Monetary Fund), in 2022 these data centers were already responsible for 2% of the world's energy consumption and 1% of global emissions, and their usage is expected to double by 2026. I bet it already has—such predictions are inevitably optimistic, and Musk rushed to build the world's largest data center, which he calls Colossus, in 2024 before the election. Trump/Musk's rush to decimate climate policy makes sense in this context, along with their flip-flop on Ukraine (they need to corner the market on minerals).
4) The attack on universities makes ideological sense but Trump is not an ideologue, so what's going on? What is the end-game of cutting funding to the NIH and NSF? Universities serve as economic, cultural, and social anchors in American cities. In the area I live in now, if the universities were to, say, lay off half their staff, the private sector wouldn't be able to take up the slack and you'd wind up with a massive economic depression and people not being able to pay bills. I don't think Trump and Musk are that dumb. My bet—and John Rudisill's note provides indirect support for the idea—is that the federal administration will back off the draconian cuts under some pressure from the courts but say "we'll let you keep your funding on the condition that you develop AI programs/rely on Starlink/sign an exclusive license to use xAI" or some such.
What Musk wants is data and money. Exactly why remains a puzzle to me. I don't think he wants to go to Mars himself...I think him more likely to fancy himself one of the Eloi and the rest of us as Morlocks.
As I read this I am in the midst of a struggle against the administration and some colleagues (outside of my department) to stop, before it starts, a new “academic minor” in “computing (read: A.I.) for the Arts and Humanities”. We are told “computing is just a tool for solving problems” and that students in the humanities can benefit from the power of this tool to “solve their discipline’s problems”. I wish I could imagine a scenario where this goes through and the result is that, once it does, we can finally turn to the more important human endeavor, having fully turned the utilitarian bullshit over to the coders and AI prompt authors. My worry is that the acceleration is away from the capacity to even recognize the immense intrinsic value of that more important stuff we have to do.
What a relief to finally read something that captures what I've been feeling these past couple of years intensifying these past few weeks. (And I have not been for years and years an academic in the scientized humanities but my earliest work was on Diderot under the kind guidance of France's post-structuralist philosophers, so I profoundly "get" what you are digging at.) I desperately hope you are right about something human surviving the techno-oligarchy, like an unexpected baby squirming all messy and squalling out of a blindly metastasizing machine. It is our only hope.
A great piece as always. And thanks to the massage therapist line, I'm currently adding "Dr JSR thinks I should" to the pros column of switching careers to personal training.
This post made me giggle a little due to my personal experience. I liked your sweeping approach - it's scalable, as my tech colleagues say; now to consider interoperability and integration. Integration and interoperability are always unique to the context and individuals involved. They can be impactful, but stories about how integration and interoperability happen are usually compacted. As your work often points out, historical narratives smooth the rough edges to create a more digestible story with centralised lessons and themes.
My brain is trained as someone who usually worked on the post-scaling integration side of tech - my bosses sold the software and I, like a less-expensive accessory, arrived to cheerfully persuade the end users, who were not always aware of or happy about the sale, to use the software effectively. Now called 'change management', this is a never-ending process full of compacted 'success' stories that create the illusion of an ending. Even fairly proficient users of Microsoft, one of the world's most widely adopted software systems, rarely use the Microsoft suite to its full and ever-'evolving' capacity.

Your post also made me think of COBOL debates I've heard. COBOL is still essential to the US banking system while also, according to some, holding back a more updated system like the one we see in the UK. I don't know enough about this system to weigh in on any debate, but working in Europe, I see similar issues. In Belgium, for example, the heart of Europe, a separate app exists for every individual transportation system, which can be frustrating if you take buses across the linguistic border, which I do almost daily. You need the Flemish bus app (De Lijn), the Walloon bus app (TEC), the train app (SNCB), and the Brussels app for the intra-city bus/tram/metro. You see similar interoperability issues across other countries and regions despite many incentives from users and the EU to address this. Many of these systems are hybrid, backed by government and private funds, and government technology is notoriously complicated to develop for reasons both good and bad, depending on the agency, constituency, and context.
One of the main advantages of private US-developed systems is that, like VISA, they are 'everywhere you want to be,' they 'work' in that expectations are clear and mostly consistent across different contexts, making these tech platforms convenient in ways EU tech struggles to be sometimes. Again, I want to stress that this is not because the European tech doesn't work or isn't impressive within its ecosystem but because it is regional and focused on supporting a specific infrastructure, and this can be infuriating for those new to a region and unfamiliar with the underlying thought process embedded in the tools and their uses.
In my limited experience, France is an exception, possibly because the country is fairly centralised. But that's not the case for several EU countries. In this way, technology is more reminiscent of language for me—people adopt US tech platforms the way they adopt English: to communicate across borders in a way that can weaken and strengthen the borders simultaneously.
AI is being used to address this in some areas, but regional and linguistic solidarity can complicate this, and users default to one platform or another depending on their preferences and objectives. There's excellent research on AI-assisted federated learning platforms from Belgian universities like KU Leuven, UCLouvain, and Ghent's famous IDLab. Again, I'm not an engineer. I'm the person engineers hire to explain their genius to external practitioners and organisations. Still, the Belgian teams building these tools do seem to be inherently aware of the complications that arise when systems built to perpetuate one ideology meet systems meant to perpetuate a different ideology, and of how this can complicate, stagnate, perpetuate, etc., any 'Great Updates' over time.
My experience in the robotics & automation sector always makes me laugh about the idea of perfect interoperability and integration of all these systems. Because the tech-bros want to privatize everything, there will inevitably be multiple platforms and a lack of standardization, so that nothing works well together. People like us will have to be there to handle the 'change management'. ;-)
LOL. I know. It's such a rare thing to run into a researcher-engineer who recognises that, but they are structurally motivated to pursue technological 'solutions' at the expense of human interaction. One great 'gentle giant' robotics researcher in human-computer interaction with whom I've had the pleasure of working is Jean Vanderdonckt. Maybe it's because he's Belgian, has the right research area, is generally more a listener than a talker, or all of the above. Still, he does consider all aspects of change management as he builds. That said, his focus on this comes at the expense of personal promotion, which he shies away from. Being a practical academic and engineer in the current funding system is sometimes challenging. Balancing career, what gets funded vs what needs funding, and staying open to the unknown unknowns is why I think AI sounds so attractive to many, but I think adapting ourselves to reality will require shifting focus. Sometimes it reminds me of working with medical personnel in the US system - what will the insurance pay for, what does the patient probably need, and what is available/accessible and best suited to the individual? It's not always the same thing or very clear.
Well-observed essay. In some ways, I wish it were simply a clear opposition between "ordinary human corruption [vs] antihuman automated perfection."
On the one hand, we risk diminishing how exhausting, numbing, degrading, and dangerous "ordinary human corruption" can be--think of the miserable situation indicated by your interlocutor from India, or just any old account of suffocatingly incompetent, cronyism-ridden, and otherwise morally compromised bureaucracy, for example in the Soviet Union--how thoroughly human venality can ruin lives! But on the other hand, the "indifferent clerks who give in to your unconventional request after your third desperate plea" are a very real thing too.
The problem is that our well-known and immemorial human problems won't actually be "solved," and certainly not by some new antihuman remedy--anyone who claims otherwise (often tech bros, execs, investors) is willfully-optimistically delusional, outright cynical-deceitful, or (often a member of the general public wanting to believe in the promise of an AI-dominated future) a vulnerable pollyanna. Though today it might seem increasingly viable, I don't think there will ever be "automated perfection," only the false appearance of it. All code, all the new uses to which AI gets put, will continue to bear the imprint of the (few) human actors directing these things and accumulating wealth and power from them. The hope in India for judges to be replaced by genuinely neutral, disinterested, reasoning computer arbiters is a desperate fantasy. The idea of a smoothly functioning automated federal government that does things "better" or more "efficiently," of AI-run educational institutions responsive to the personal learning needs and tendencies of individual students, or of people "freed" by machines to do more "creative" work--all these are utopian thoughts (even if soullessly utopian), and impossible as realities in our world.
Eric can be mean! A tiny thread I want to pull on: Estrada: is it still a thing in Russia? In Bulgaria it’s one of the few transgenerational bits of popular culture that ties any kind of society together. It’s weird to see even post-90s kids belting out classic songs from 59 years prior. https://youtube.com/playlist?list=PLYM8HerqanORVr07HDlaiFDtHnes45nd8&si=10g-DqGRGHgfNWZa
Oh excellent, thanks. I think in Russia and Romania, the two Eastern Bloc countries I know best, Estrada-like entertainments are always on TV, and even young people know the old tunes from the communist era, but it no longer seems to have the quality of a proper civic institution.
A genuinely great thing about Eric is that he always tries to remain in dialogue with everyone and always to interpret other people's views as generously as possible -- a European Will Rogers!
European Will Rogers is a pretty good band name! Yes, Eric is a good interlocutor and I learn a lot from reading him even though our politics are so different. On Estrada, I’m fascinated by its popularity or lack thereof vis-à-vis post-socialism, and if I thought the NSF would still exist in the coming years I would apply for a grant about this. I know no fewer than three DJs who spin Bulgarian Estrada and Turkish funk music, which is the closest to rapprochement between the center and periphery of the Ottoman Empire I know of. My in-laws still blame things on the Turks but will listen to Bulgarian Turkish Estrada 🤷🏻♂️
Enjoying your political writing J, you’re one of the few who has internalized the lesson that broad comparative anthropology is necessary to get a grip on our own culture and selves. I’ve had to rethink my own categories as of late and your conservatism essay truly made me think.
Thanks for this. I hadn't heard of you before but am following based on this wide-ranging rumination.
One small thing that jumped out at me highlights the distinction between a predicament and a problem. I suggest that the real choice is not between an iris scan and an HR meeting, it's between an iris scanner looking for "microtraces of any prohibited affect or longing" and an iris scanner looking for microtraces of irritation toward a corporate indoctrination program.
The medium is the message, in other words. What they are both scanning for is compliance, and how compliance is defined is very much a secondary notion.
I used "is" even though this doesn't exist yet. But having spent some years in corporate bureaucracy, I can't imagine global capital not rolling out a program that can easily police compliance with corporatism.
Very thoughtful essay. Look forward to reading more.
Personally I am (I believe!) in deep alignment with many – certainly including you Justin.
In a Daoist 'way' my life feels like it has been and is about a 'response' to where we are and are going now. I wrote on a "Way Off Autopilot" back in 2008 – but held back from putting things together. Jane Roberts and Seth, the Law of One and similar forms say in various of those works what I now see:
We have had a collective blindspot which we called malignant narcissism and other things like psychopathy et al. None of those terms catch T and Co... This is Malignant Intentionality (which in prior times would have been simply called the work of the devil). They are determined never to bow to Integral Intentionality... Best ; > } Barry
there's a lot i loved about this (hence the multiple restacks) but i must say, this takes a pretty bleak/alienated/uninspired view of "knowledge production."
I don't disagree that the social sciences have lost their way, but your line of thinking here seems much more reminiscent of the "fuck the baby AND its bathwater" style of the wrecking-ball politics that are now ascendant. Humanistic inquiry cannot be neatly split between that which has been touched by institutional perversion and that juicy stuff you imagine we could NOW maybe be free to do, if we handle the machine god arc just right.
Handling things is hard!! that's how we got into this mess in the first place.
I suspect that the way out will involve tapping into a lot of the knowledge that has been "produced" because of, or perhaps even in spite of, the infrastructure that's now close to burning down. Tapping into the currents and creativity and human capital that has *currently* wound up in such siloes (and that which never made it in/out), *because it had nowhere else to go.*
What if we could give it a new place to go?
Engaging with that question seriously would probably mean acknowledging what does work about it (like in the very generous treatment you gave bureaucracy, DEI, and the like). And taking "knowledge production" more seriously, not less.
Many of the tools and concepts that have given me clarity through all the nonsense these past few years were a result of encountering the fruits of such knowledge production, and my extraordinary luck in encountering people and places who pointed my attention in the right direction.
I think creating better ways to share and build context might sort of be the path to creating a better endgame. Pro-knowledge production, pro-vibes
Lovely; but the surveillance, the iris scanners, are you sure this is part of the Upgrade, rather than an association that naturally arises in the mind based on many analogous past sci-fi threads? I'm really not convinced that Elon wants to surveil me or my opinions, but maybe I'm just missing something; the desire to algorithmize the operation of the state seems somewhat orthogonal to the surveillance line.
If some techie manages to sell the state of Nebraska "anonymized genital scanners" for its public restrooms, now, that will really be something to note.
And if anyone needed any further persuasion that AI is being put to evil uses, check out this piece at the LRB on how senior staff at Google, Microsoft, and Amazon in Israel empowered the destruction of Gaza by greatly expanding the Israeli military's access to cloud computing and AI tools: https://www.lrb.co.uk/blog/2025/january/militarised-ai
Keep fighting the good fight (or just leave your colleagues to their poor decisions and go do something new and wonderful on your own)!
Great piece of writing! Nothing to add. Just wanted to drop in to say hello and salams. So: hello and salams!
Well written. I shall follow with interest
Thanks so much Benjamin, I’m happy to hear it!
This is great work, Justin!
Thanks for this. I hadn't heard of you before but am following based on this wide-ranging rumination.
One small thing that jumped out at me highlights the distinction between a predicament and a problem. I suggest that the real choice is not between an iris scan and an HR meeting, it's between an iris scanner looking for "microtraces of any prohibited affect or longing" and an iris scanner looking for microtraces of irritation toward a corporate indoctrination program.
The medium is the message, in other words. What they are both scanning for is compliance, and how compliance is defined is very much a secondary notion.
I used "is" even though this doesn't exist yet. But having spent some years in corporate bureaucracy, I can't imagine global capital not rolling out a program that can easily police compliance with corporatism.
Very thoughtful essay. Look forward to reading more.
Personally I am (I believe!) in deep alignment with many – certainly including you Justin.
In a Daoist 'way' my life feels like it has been and is about a 'response' to where we are and are going now. I wrote on a "Way Off Autopilot" back in 2008 – but held back from putting things together. Jane Roberts and Seth, the Law of One and similar forms say in various of those works what I now see:
We have had a collective blindspot which we called malignant narcissism and other things like psychopath et al. None of those terms catch T and Co... This is Malignant Intentionality (which in prior times would have simply been called the work of the devil). They are determined never to bow to Integral Intentionality... Best ; > } Barry
there's a lot i loved about this (hence the multiple restacks) but i must say, this takes a pretty bleak/alienated/uninspired view of "knowledge production."
I don't disagree that the social sciences have lost their way, but your line of thinking here seems much more reminiscent of the "fuck the baby AND its bathwater" style of the wrecking-ball politics that are now ascendant. Humanistic inquiry cannot be neatly split between that which has been touched by institutional perversion and that juicy stuff you imagine we could NOW maybe be free to do, if we handle the machine god arc just right.
Handling things is hard!! that's how we got into this mess in the first place.
I suspect that the way out will involve tapping into a lot of the knowledge that has been ""produced"" because of, or perhaps even in spite of, the infrastructure that's now close to burning down. Tapping into the currents and creativity and human capital that have *currently* wound up in such siloes (and that which never made it in/out), *because it had nowhere else to go.*
What if we could give it a new place to go?
Engaging with that question seriously would probably mean acknowledging what does work about it (like in the very generous treatment you gave bureaucracy, DEI, and the like). And taking "knowledge production" more seriously, not less.
Many of the tools and concepts that have given me clarity through all the nonsense these past few years were a result of encountering the fruits of such knowledge production, and my extraordinary luck in encountering people and places who pointed my attention in the right direction.
I think creating better ways to share and build context might sort of be the path to creating a better endgame. Pro-knowledge production, pro-vibes
Excellent. Please keep writing about politics.
Lovely; but the surveillance, the iris scanners, are you sure this is part of the Upgrade, rather than an association that naturally arises in the mind based on many analogous past sci-fi threads? I'm really not convinced that Elon wants to surveil me or my opinions, but maybe I'm just missing something; the desire to algorithmize the operation of the state seems somewhat orthogonal to the surveillance line.
If some techie manages to sell the state of Nebraska "anonymized genital scanners" for its public restrooms, now, that will really be something to note.