Note: I use this blog primarily as a way to organise my thoughts and this article has got somewhat out of control. It has ended up being much longer and far wider in scope than I had originally intended. I hope you’ll persevere with it. I welcome any constructive criticism in the comments.
“Will I dream?”
SAL 9000, 2010: The Year We Make Contact (Peter Hyams, 1984)
“I don’t think I have good dreams. Oh, I’m sure I have good dreams sometimes, but I don’t seem to remember the good dreams.
“The ones that I remember are the nightmares.”
Elon Musk talking to Werner Herzog in the documentary Lo and Behold (2016)
OpenAI, the company behind ChatGPT and Sora — which currently dominates the public imagination when it comes to AI — was not originally established to create chatbots and silly videos. Initially, it wasn’t even set up to make money. It was set up by Sex Creep Elon Musk and Sam Altman (who I’m not going to label a sex creep for the purposes of this article, but there are caveats) to prevent Skynet, the AI from the Terminator franchise that destroys humanity.
To be precise, their main preoccupation was with someone else creating artificial superintelligence (or Artificial General Intelligence, as it is commonly known now), specifically Google’s DeepMind. They considered DeepMind, which at the time was clearly leading the field, to be too irresponsible to be allowed to win the race.
According to Empire of AI (Karen Hao, 2025), Musk even compared DeepMind’s CEO Demis Hassabis to a wannabe dictator who wants to take over the world, basing this conclusion on the incontrovertible proof that when Hassabis was head of the video game developer Elixir Studios, he made a game called Evil Genius. “And people still don’t see it!” exclaimed Musk (in my head he said this in his volcano lair while threatening to castrate a British secret agent with a giant laser, and no-one can convince me otherwise).
Musk and Altman were deeply influenced by the Oxford academic Nick Bostrom and his book Superintelligence: Paths, Dangers, Strategies (2014). You might have heard of the paperclip maximiser thought experiment, in which an artificial superintelligence is given the task of manufacturing paperclips and, in single-minded pursuit of that goal, ends up destroying humanity, the Earth and ultimately the entire universe in order to maximise the number of paperclips in existence. Bostrom’s conclusion is not exactly that we shouldn’t build AI, but that we need to be careful to “align” it with the interests of humanity.
The idea of artificial intelligence which turns on its creator is not new. It’s the basis of Mary Shelley’s Frankenstein (1818), itself subtitled “The Modern Prometheus”. The “Sorcerer’s Apprentice” sequence in Fantasia (1940), in which Mickey Mouse casts a spell that results in self-replicating brooms (not paperclips) threatening to destroy everything? That’s from a second-century CE tale, The Lover of Lies by Lucian of Samosata. And that’s before we get into HAL 9000 from 2001: A Space Odyssey (1968), Joshua in WarGames (1983) and the Machines in The Matrix (1999). Basically, it’s paperclips all the way down.
It’s worth bearing in mind the words of science fiction writer Ted Chiang in response to the paperclip maximiser theory:
When Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.
AI agents in Cleggworld
Let’s take a break from this heady brew of science fiction and stationery-themed apocalypses and return for a second to Nick Clegg writing ten years later about how he sees AI.
Meta isn’t leading the field in AI development right now. Arguably, it doesn’t even rank in the top five. As such, Clegg’s former company is not positioning itself as a company that has the potential to save humanity, but as a pragmatic tech company that wants to build AI agents to make your life easier.
Currently that has meant sticking AI assistants into Facebook, WhatsApp and Instagram, and focusing on developing its own open(ish) source software. So far that doesn’t mean very much more than those annoying prompts that show up underneath posts asking inane questions, but in Clegg’s book he envisages the next step in the form of digital agents that will make everyone’s lives easier.
To illustrate this, he gives us a brief glimpse of a married couple living in the future. One is Spanish-coded while the other is apparently English (in my headcanon, one is a lawyer specialising in trade, while the other is a former politician and temporarily embarrassed high flying executive working in tech).
When discussing my review of Clegg’s book on Bluesky, I commented on how Clegg seems to “glide through life”. This brief bit of speculative fiction rather illustrates that: it’s full of details that Clegg clearly thinks would be fantastic, but that raise all sorts of red flags, at least for me.
The woman in this scenario doesn’t appear to do any work; she just turns up to work and gets her AI agents to do it all for her. She goes to the doctor, who tells her about her risk of cancer, prompting her to get her AI agent to immediately change her eating habits (there’s no discussion about the implications for her health insurance, but assuming this is in the US, they are presumably huge). She doesn’t discuss this with her husband, but rather gets her agent to tell his agent.
Still worried about her health issues, she gets her AI agent to find her some social media support groups, but she doesn’t participate in them: again, she gets her agent to post messages on her behalf instead. It results in a bunch of supportive messages and friend requests — presumably made by other people’s AI agents.
Finally, after she’s gone to sleep, her husband asks his agent to talk to her agent to get it to change the song that it was planning to play in the morning, as he finds it annoying. Heaven forbid that he discuss something as confrontational as not liking a pop song with his wife!
I read this passage and thought that it was going to reveal that — surprise! — it wouldn’t be as dystopian as all that, but apparently this is what Clegg considers to be a plausible best case scenario. He does add the caveat that you wouldn’t be forced to make use of all these features, just that they’d be there if you wanted them. But all of this sounds like a world in which we are tacitly encouraged to withdraw from all human interactions in favour of having an AI buffer to smooth out all of the rough bits. It is notable that Clegg chooses to emphasise the importance of human agency in this book while dreaming of a world in which digital agents get to take charge of everything.
And of course this is before we get into the ethical implications of all these services having a built-in profit motive to maximise how much we use them. I’m not sure that this “pragmatic” and “plausible” alternative to superintelligent paperclip maximisers sounds that much better.
He’s not finished though. It gets worse.
He goes on to talk about the potential for chatbot assistants as a cure for loneliness and to help people experiencing mental health problems, citing a number of examples where researchers and individuals have found such support to be helpful. To be clear, he’s right to point out that the demand for human-run mental health services far outstrips supply — but again there’s that minimising of the importance of human-to-human interaction.
I may be cynical, but none of this sounds so much like a person sincerely suggesting something that might help people as like someone trying to sell you a used car without documentation: “if mature adults want to spend their time hanging out with human-like AI agents, in the full knowledge that they are machines and not people, we should find ways to accommodate these relationships.” That “full knowledge that they are machines” bit is doing a lot of work there. No-one ever claimed that Replika AI girlfriends were human, but that didn’t stop hundreds of people grieving when the company killed their personalities with a mouse click. It’s one thing to know that something isn’t real; quite another when you’re lonely and being love-bombed by a chatbot.
In Cleggworld it seems that it is intolerable that the emotional rubber should ever have to hit the reality road. He isn’t alone in this respect; a growing number of companies are developing “deathbots”, AI agents that replicate deceased loved ones so you never have to deal with losing them.
He misses out all the examples of people currently using chatbots for this sort of emotional support and being led to take life-changing advice from them, or even into outright psychosis. To minimise that in a book written in 2025, when these problems are becoming all too apparent, is shockingly irresponsible.
The dangers of leaving something as sensitive as therapy to chatbots become all too clear when Clegg turns to LGBTQ+ folk.
[…] there are opportunities for AI chatbots like these to provide support for LGBTQ+ young adults who are not able to find it in their family or community
I don’t think LGBTQ+ people should be at all excited by this prospect. Take Clegg’s favourite company Meta, for example. A week before Donald Trump’s 2025 inauguration, Mark Zuckerberg introduced a whole series of changes to Facebook to make it a less safe space for trans people, in anticipation of the new regime’s anti-trans policies. According to GLAAD:
In a single week, Meta modified major sections of its Hateful Conduct policy (to allow anti-LGBTQ rhetoric, and remove protections for LGBTQ users); terminated its Diversity, Equity, and Inclusion (DEI) programs; deleted trans and nonbinary themes on Messenger; and ended its fact-checking program.
Would you trust this man with a chatbot to provide “support” for your trans kid? He wouldn’t merely wait for a new law forcing anti-LGBTQ+ conversion therapy into his new agents; he’d proactively do it to ingratiate himself with the President. These aren’t the actions of someone who takes the safety of vulnerable people seriously, and it highlights an important issue with all these chatbots intended to replace friends and therapists: a corporation gets to decide what they can and can’t do — not us.
But what’s more, this all feels like a failure of imagination. I’m genuinely surprised that Clegg’s pitch here isn’t that AI will do the grunt work to leave us with more time and opportunity for real human interaction. If we’re building the sort of AI-led future that doesn’t do that but instead pacifies us with loving chatbots to talk to in our moments of quiet desperation, doesn’t that suggest we’re heading in the wrong direction?
And it gets worse.
Looking at the bigger picture, he’s very keen to emphasise how good AI will be for the economy, not just in terms of GDP ($7 trillion added to global GDP over 10 years according to Goldman Sachs; $6.6 trillion according to PwC), but in terms of an increase in productivity. He acknowledges that it represents challenges to white collar workers (more on that shortly) but is keen to emphasise not only that blue collar jobs are safe but that, with the help of AI, they will be more productive than ever before. He approvingly cites research by MIT associate professor Danielle Li, who found that an “AI-powered conversational assistant” led to a 14 per cent boost in productivity overall, and a 35 per cent increase for the least experienced workers.
This superficially sounds great (although note that the implication here is that it will be even easier to hire and fire people, knowing that the AI can train new people up faster than before), but what does this improvement in productivity mean in practice?
As Cory Doctorow reports in Enshittification, Amazon delivery drivers, who are technically self-employed but are only allowed to work for Amazon under strict terms and conditions, have their delivery vans fitted with an array of AI-powered sensors, both inside and outside the vehicle. These sensors even monitor the movements of drivers’ eyeballs and mouths.
These drivers are then set such high targets that they can’t even afford to stop and pee. This is why we now hear stories about Amazon drivers having to pee in bottles while on the road in order to meet their quotas. Doctorow calls such workers “reverse-centaurs”. A centaur in this context is a human augmented by technology; by contrast, a reverse-centaur is technology controlling a human.
In Technofeudalism, Yanis Varoufakis calls such blue collar workers “cloud proles”, and likens their plight to the workers in films such as Metropolis (Fritz Lang, 1927) and Modern Times (Charlie Chaplin, 1936):
Algorithms have already replaced bosses in the transport, deliveries and warehousing sectors. And workers forced to work for these algorithms find themselves in a modernist nightmare: some non-corporeal entity that not only lacks but is actually incapable of human empathy allocates them work at a rate of its choosing before monitoring their response times. Released from any of the qualms even inhumane humans harbour, the algo-bosses are at liberty to reduce the workers’ paid hours, to increase their tempo to insanity-inducing levels, or to turn them out onto the street for ‘inefficiency’. At that point, the workers sacked by the algorithm are thrown into a Kafkaesque spiral, unable to speak to a human capable of explaining why they were fired.
(For more information, see this article from The Verge about how algorithms dictate Amazon workers’ routines.)
This is the reality of Nick Clegg’s productivity boost, but at least blue collar workers will continue to be in demand. Maybe they can organise themselves, join a union, and demand better working practices? Well, tech companies are notoriously anti-union and in favour of reclassifying as many employees as possible as self-employed. Cory Doctorow does cite examples of Uber drivers working together to game the algorithms that dictate their lives, just as the algorithms game them.
But there’s an additional problem looming: those white collar workers who are about to lose their jobs.
To be fair on Clegg, he doesn’t pretend that this isn’t going to have a massive impact on office workers, but they’ll be okay, he says, because educated workers are better able to simply retrain. He suggests that it will be governments’ role to assist with that retraining. Funnily enough, he doesn’t suggest that any of the increased wealth tech firms are to expect from all this disruption should be used to fund such retraining.
Clegg likes to emphasise that in the long term, economies will simply adapt and find new things for these white collar workers to do, but even if we accept this to be true (I question it!), in the short term what this means is that there will be an awful lot more people available to do all the unskilled work that tech firms will still rely on to keep their warehouses running and generators spinning. That means that blue collar workers should expect less job security and lower wages, not more.
Play stupid games, win stupid economies
Clegg is keen to downplay how disruptive this will all be because he thinks it will happen slowly, and society will have time to adapt. That is surely news to Goldman Sachs and PwC, whom he quotes predicting a rapid increase in growth in a relatively short period of time. But it is also surely news to the investors who have poured trillions of dollars into new data centres and AI startups in recent years.
But then, it’s possible that Clegg may have a point here, because the AI tech industry is starting to look an awful lot like a bubble. On the one hand, we are seeing circular deals, where companies pay each other billions of dollars, either directly or via a third party, with the payments showing up in both sets of accounts as revenue streams. And then there’s the simple fact that none of these companies are making any money, or even seem to have any kind of plan to start making any money.
Time isn’t on their side here. GPUs, the chips that are the basis of current generative AI tools, typically have a shelf-life of 5-8 years, but there is credible speculation that because of the heavy workloads of AI datacentres, that could be reduced to as little as 1-3 years. In other words, the enormous amounts of money currently being ploughed into building datacentres isn’t a one off cost; it’s an ongoing one that will need to be repeated again and again.
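To see why that shortened lifetime matters, here is a back-of-the-envelope sketch. The dollar figure is hypothetical, purely my own for illustration; the point is the ratio between the two scenarios:

```python
# Hypothetical build-out cost; only the ratio between scenarios matters.
BUILD_COST = 50e9  # say $50bn of GPUs in a datacentre build-out

def annual_replacement_cost(lifetime_years):
    """Straight-line replacement: the yearly spend needed just to stand still."""
    return BUILD_COST / lifetime_years

# Mid-range of the quoted lifetimes: 6 years vs 2 years.
print(annual_replacement_cost(6) / 1e9)  # roughly $8.3bn a year
print(annual_replacement_cost(2) / 1e9)  # $25bn a year: three times the bill
```

Shorten the lifetime by a factor of three and the recurring bill triples; that is the difference between a one-off capital cost and a treadmill.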
And, sure, it’s possible that this heavy demand might lead to companies like Nvidia developing newer, more robust (and hopefully more energy-efficient) chips which will solve this problem. But don’t get too caught up in talk about “Moore’s Law”. We are already approaching the physical limits of what can be achieved in terms of circuit miniaturisation and chip speed.
If Clegg’s right that this AI-boosted growth and productivity boom will occur over a long period of time, it’s hard to see how we aren’t looking at a major stock market crash in the near future. Most investors are expecting a return on their AI investments within five years, not twenty.
But that might not actually be the worst case scenario. Because if he’s wrong and companies open up their chequebooks to pay to make absolutely every part of business and industry AI-powered, then a significant percentage of the workforce will suddenly find themselves out of a job. What will that mean for economies already facing an ageing crisis? What will that mean for our politics? This is what Dario Amodei believes (about whom more later), and his position appears to be simply that tech billionaires will set up philanthropic organisations (under their control) to charitably spread the wealth to the poor schmucks facing economic meltdown.
And there’s a third option: AI can’t, and never will be able to do your job. The whole thing is just one big hype-cycle that will cost everyone else billions once we realise the whole thing is just an elaborate sham. That’s Cory Doctorow’s position.
The more I’ve looked into this, the more it looks like we are facing an economic crash regardless of who is right — one that might be larger than 2008’s, and one arriving at a time when our global institutions and current political leadership are less able to handle it.
To be absolutely clear: aside from the immediate bailout and stabilisation, I do not think the way we responded to the 2008 crash in the longer term was good. Indeed, the way that governments embraced austerity and allowed companies to bypass employment and consumer laws, on top of continuing the neoliberal laissez-faire policies that created the 2008 crash in the first place, has led us to this point where we seem to be on yet another precipice before the last generation of recession babies are even old enough to vote.
Twirling, Twirling, Twirling Towards Freedom
In 2023, Marc Andreessen — the man who gave us Netscape Navigator (the first commercial web browser) and a tech venture capitalist — wrote The Techno-Optimist Manifesto. It’s a grab bag of ideas that broadly speaking amount to: we should combine free markets and technology to “place intelligence and energy in a positive feedback loop, and drive them both to infinity.” This will lead to abundance, not Utopia but “slouching toward Utopia”. An Earth that can “quite easily expand to 50 billion people or more, and then far beyond that as we ultimately settle other planets.”
It includes a whole long list of “bad ideas” which it terms “the enemy”. Among far too many to mention are ideas such as “sustainability”, “social responsibility”, “trust and safety” and “tech ethics”. And it has a long list of “Patron Saints” including Adam Smith, Friedrich Hayek and Milton Friedman, accelerationist and “dark enlightenment” guru Nick Land, John Galt (but mysteriously not Ayn Rand) and Filippo Tommaso Marinetti, whose Manifesto of Futurism (1909) it also quotes approvingly:
“Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”
Ten years later, Marinetti would go on to co-author the Fascist Manifesto, which Benito Mussolini used as the basis of his political programme.
It almost feels a little unfair to quote the fashy-sounding bits in Andreessen’s manifesto, but while he does name check quite a lot of things I’d consider to be Good Things, he sure does seem to be a lot more excited about tech “aggression” than opposing, say, regulatory capture:
Victim mentality is a curse in every domain of life, including in our relationship with technology – both unnecessary and self-defeating. We are not victims, we are conquerors.
We believe in nature, but we also believe in overcoming nature. We are not primitives, cowering in fear of the lightning bolt. We are the apex predator; the lightning works for us.
Apparently being a techno-optimist makes you talk like a caveman.
Andreessen isn’t some crank; he’s at the heart of the Silicon Valley establishment. And while not everyone prefers to use such violent language, the idea that we should not merely reject any suggestion of limits to growth but drive growth “to infinity” is something that many of his cohort agree with.
In More Everything Forever (2025), Adam Becker quotes Jeff Bezos as saying “I’m pursuing this work, because I believe if we don’t we will eventually end up with a civilization of stasis, which I find very demoralizing.” But we are already running up against the limits to growth. We’re contending with climate change, and even if we managed to reverse that over the next few decades and switch entirely to renewable energy sources, we would still have to contend with the gradual exhaustion of natural resources.
Adam Becker provides some illuminating figures: based on an annual growth in energy use of just 2.3 per cent, within 400 years we would have reached the limit of covering the Earth with solar panels. In 1,350 years we’d have used all the energy produced by the Sun. In 2,450 years we’d have used up the Milky Way and in 3,700 we’d be using the energy of all the stars in the observable universe — and achieving that would basically break every known law of physics. For comparative purposes, the first pyramids were made approximately 5,600 years ago; the last woolly mammoth died approximately 3,700 years ago. It may not be a comfortable fact, but at some point our species is going to have to learn to live with stasis.
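The maths behind Becker’s figures is simple compound growth, and easy to check. A minimal sketch (the round-number physical constants are my own approximations, not necessarily Becker’s exact inputs):

```python
import math

CURRENT_USE_W = 1.8e13  # current global energy use, roughly 18 TW (approximation)
EARTH_SOLAR_W = 1.7e17  # sunlight intercepted by the whole Earth (approximation)
SUN_OUTPUT_W = 3.8e26   # total power output of the Sun (approximation)
GROWTH = 0.023          # 2.3 per cent annual growth, per Becker

def years_to_reach(target_w, current_w=CURRENT_USE_W, growth=GROWTH):
    """Years of compound growth before energy use hits target_w."""
    return math.log(target_w / current_w) / math.log(1 + growth)

print(round(years_to_reach(EARTH_SOLAR_W)))  # roughly 400 years: Earth covered in panels
print(round(years_to_reach(SUN_OUTPUT_W)))   # roughly 1,350 years: the Sun's entire output
```

The striking thing is how little the answer depends on the starting numbers: at 2.3 per cent compound growth, being wrong by a whole factor of ten only shifts the deadline by about a century.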
People like Bezos and Sex Creep Elon Musk simply do not see it that way. The limits of growth on Earth lead them to contend that we need to start colonising space as quickly as possible. For Musk this has always meant going to Mars, although after more than a decade of promising imminent Martian colonisation, he’s recently scaled his ambitions back to the Moon.
Bezos meanwhile is more interested in the human race expanding out via O’Neill Cylinders, a concept developed by the physicist Gerard O’Neill. These would be constructed by extensively mining asteroids. Bezos has suggested that one trillion people could eventually live in the solar system.
But why stop at the solar system? Elon Musk has already stated an intention to eventually develop Tesla’s Optimus robots to become Von Neumann machines, self replicating autonomous robots that would expand out to explore the galaxy beyond.
This all sounds like science fiction, but this is the logical next step if the human race is going to set itself the task of pursuing energy and intelligence growth “toward infinity,” and so it isn’t surprising that tech billionaires are already pivoting toward such development.
As for whether any of this is conceivable any time soon? Well, Mars is a low gravity world with barely any atmosphere, no protective magnetic field and a dead surface made of poison. Constructing the tin cans Jeff Bezos envisions us all living inside within a few generations is far beyond our technical capabilities at present. Meanwhile, Von Neumann machines, far from propagating human civilisation, sound rather like unleashing a silicon plague on the galaxy. We would become the aliens from Independence Day (1996), harvesting solar systems of all usable resources before moving on to the next one.
Let’s get TESCREAL
Where is all this coming from, and why do Silicon Valley billionaires keep making all these wild futuristic predictions? A lot of it goes back to the 19th century.
In common with a lot of people working in tech (including, apparently, Sam Altman), Sam Bankman-Fried is an adherent of Effective Altruism. Bankman-Fried was the poster child for that other massively hyped innovation of recent years, cryptocurrencies, only to be convicted of fraud and sentenced to 25 years in prison for how he ran FTX, a crypto-exchange platform (FTX also invested heavily in the AI startup Anthropic).
FTX was founded on the EA principle of “earn to give”. At its most basic, the idea behind Effective Altruism is that we should give what spare cash we can afford not only to benefit the greater good, but in the way that is most effective. On one level, this means taking a high-salary job so you have more money to donate, rather than working on a low wage for a charity. On another, it might mean assessing whether donating money to malaria nets will save more lives than funding tuberculosis prevention in the Global South.
But if you follow that logic, and Effective Altruism founder William MacAskill most definitely has, you end up with deeper questions such as: if a galaxy-spanning human civilisation in the future were possible to create — resulting in a future population of trillions upon trillions — wouldn’t it be more effective to divert money into space research and AI than to spend it on things like global poverty and tackling climate change? After all, the interests of those trillions would surely outweigh the interests of a few billion people living in the here and now? This is what’s known as Longtermism, an offshoot of Effective Altruism which is inextricably linked to it (MacAskill himself is a Longtermist).
Of course, it’s possible — likely, even — that investment in interstellar travel would prove fruitless, but here it gets quite mathy. As Adam Becker explains in More Everything Forever, even if a donation to help fund interstellar travel had just a 10⁻¹⁷ chance of working (that is, an extremely small number), if it would lead to a massive future population of 10²³ then logically it would be justified. Combine that with the uncertainty of spending money on things like global development, and it starts to look like a moral imperative.
To some, at least. Even my brain, untrained in philosophy and mathematics, can see a flaw in that. If I invest $100 in SpaceX, I might well have a small chance of helping to create an interstellar civilisation populated by a silly number of people. But if I instead invested in SpaceY or SpaceZ, then I might have the same odds. So by giving $100 to SpaceX I’m effectively preventing the creation of even more people from existing. And their interests surely outweigh the interests of the SpaceX virtual people.
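The expected-value arithmetic behind all this can be written out in a couple of lines. The two big numbers are the ones from Becker’s example; the present-day comparison figure is my own illustrative addition:

```python
# Probability and payoff from Becker's example; the comparison is illustrative.
P_SUCCESS = 1e-17      # vanishingly small chance the speculative donation "works"
FUTURE_PEOPLE = 1e23   # stupendous future population if it does
CERTAIN_LIVES = 1_000  # lives a present-day intervention might save for certain

# Expected value of the speculative bet: probability times payoff.
expected_future_lives = P_SUCCESS * FUTURE_PEOPLE  # about a million

# By this arithmetic the near-impossible bet "beats" the certain intervention
# a thousandfold -- which is exactly the move being objected to here.
print(expected_future_lives > CERTAIN_LIVES)
```

Multiply a small enough probability by a large enough imaginary population and you can justify almost anything; the conclusion is entirely a product of the numbers you feed in.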
But bringing this down to earth, none of these people are real. These are numbers on a calculator, not human beings. It’s the same as when anti-abortionists talk about fetal rights, weighing the rights of the unborn over and above the rights of the mothers.
And like anti-abortionists, Longtermists appear to lose interest in the interests of all these people once they are born. MacAskill has embraced the repugnant conclusion, one of the main arguments Derek Parfit employed as a critique of utilitarianism. The repugnant conclusion takes utilitarianism to its extreme by arguing that if a larger population of less happy people has a higher total utility than a smaller population of happier people, then it follows that the interests of a vast population of incredibly unhappy people would outweigh the interests of any smaller, happier population. What matters is total utility, not individual happiness.
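In toy numbers (mine, purely illustrative), the total-utility arithmetic looks like this:

```python
# Illustrative figures only: the repugnant conclusion in miniature.
small_happy_total = 1_000 * 100        # 1,000 people at happiness 100
vast_miserable_total = 10_000_000 * 1  # ten million people at happiness 1

# Total-utility reasoning prefers the vast, barely-happy population,
# because 10,000,000 > 100,000 -- individual misery never enters into it.
print(vast_miserable_total > small_happy_total)  # True
```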
I find the abject misery promised by tech visionaries striking, whether it’s the miserable existence on a poisoned, dead planet promised by Elon Musk, Jeff Bezos’ rotating tin cans in space, or Ray Kurzweil’s dream of having our consciousnesses uploaded onto computers to power self-replicating, star-eating nanomachines on a mission to convert the entire universe into one giant computer consciousness. Incidentally, we don’t need to worry about coming into conflict with alien civilisations who might object to this because, according to Kurzweil, the fact that we can observe no evidence of other civilisations out there actively doing this proves they don’t exist — what intelligent spacefaring civilisation wouldn’t want to pave over the whole universe, after all?
Whenever I read about these ideas, the main picture that comes to my mind is the Toclafane, the Doctor Who monsters who are revealed to be the last vestige of the human race from the end of time reduced to becoming floating heads in metal balls — beings with no hope and with all humanity purged from them. Like the Torment Nexus, this was meant to be a cautionary tale.
Much of this seems rooted in the tendency to see people as numbers in a spreadsheet rather than physical people. As Cory Doctorow says, to be a billionaire is to be a solipsist, and much of the philosophy underpinning our current outbreak of billionaires is focused on teaching that solipsism is okay, actually.
This brings me to what Timnit Gebru and Émile P. Torres have termed the TESCREAL Bundle. Gebru and Torres’ argument is that the tech industry’s current drive towards AI is being fuelled by a bundle of differing but linked ideologies, all of which have at their root dangerous ideas about eugenics.
I’ve already mentioned Effective Altruism and Longtermism, the EAL part. The other parts of that acronym stand for Transhumanism, Extropianism, Singularitarianism, Cosmism and Rationalism (the specific internet community, not rationalism in general). All these philosophies have at their heart the idea that our drive as a species should be to work towards a future in which artificial superintelligence helps drive humanity to the next stage in our development. Per Gebru and Torres:
Like their first-wave eugenicist predecessors who believed that “improving the human stock” was the only way to safeguard “human civilization,” leaders of the TESCREAL bundle argue that creating aligned AGI is a way to safeguard civilization, and thus, the most important task for humanity this century.
It will perhaps surprise some people that the proponents of these eugenics-inspired philosophies are also often caught practising old-fashioned racism. Adam Becker’s More Everything Forever provides numerous examples, but I’ll focus on just two here. In 1996, Nick Bostrom stated that “Blacks are more stupid than whites”. In fact, the full quote is more interesting:
“‘Blacks are more stupid than whites.’ I like that sentence and think it is true. But recently I have begun to believe that I won’t have much success with most people if I speak like that. They would think that I were a ‘racist’: that I _disliked_ black people and thought that it is fair if blacks are treated badly. I don’t,”
He subsequently apologised, but this is not the sort of statement you can really walk back; after all, is he genuinely disowning it, or is he doing exactly what he described — softening his language in order to have more influence over people (as he said in a separate statement in 1996, using such language “may be less effective strategy in communicating with some of the people ‘out there”)?
Sex Creep, Transhumanist and Longtermist Elon Musk has been far less circumspect about his views on race, having aligned himself with organisations advocating the great replacement theory and other white supremacist causes for quite some time now. He is also very keen on having as many children as possible to “seed the earth with more human beings of high intelligence.” This is something else he had in common with Sex Creep and TESCREAList Jeffrey Epstein, who planned his own “baby ranch” in New Mexico.
Gebru and Torres claim that numerous leaders in TESCREAL thinking have approvingly cited the work of Charles Murray, the political scientist most famously known for his book The Bell Curve (1994), which sought to provide a scientific basis for racist discrimination in public policy. Much of Murray’s work was focused on alleged differences between races in IQ, itself a discredited pseudo-scientific measure of intelligence which has its roots in eugenics.
The General Elephant in the Room
Mention of Charles Murray and IQ brings up an issue which we really ought to be much more concerned about when discussing artificial intelligence, namely: what is this “intelligence” thing that is purportedly being created?
Artificial intelligence as a term is rooted in the 1950s, but as Gebru and Torres point out, by the 1990s researchers working in the field had abandoned that terminology in favour of terms such as machine learning and natural language processing. The reason was that, just as we struggle to quantify intelligence in humans, it is similarly difficult to quantify in machines. Certainly I remember the discourse in the 2000s focusing on how AI was merely a science fiction term and not something computer scientists took seriously at all.
Indeed, despite this drive towards Artificial General Intelligence, we still lack an agreed upon definition of what AGI is. Futurist Ray Kurzweil defines it simply as “artificial intelligence that exceeds human intelligence”. OpenAI defines it as “highly autonomous systems that outperform humans at most economically valuable work”. Anthropic declines to define it at all, preferring “Powerful AI” which is defined as “AI systems that reach a level of intellectual capability equivalent to or exceeding that of highly capable human experts (e.g., Nobel Prize winners) across most disciplines, including science, coding, and creative work”. But what does any of this mean if we struggle to come up with an objective standard of human intelligence?
There’s a sci-fi notion that if you could just turn a dial marked “intelligence” up to 11 on a human or artificial brain, it would suddenly be able to solve the unsolvable, write symphonies or even alter reality just by thinking about it. Think of the films The Lawnmower Man (1992), Limitless (2011), Lucy (2014) or my personal favourite of the bunch, the rigorously hard science fiction Electric Dreams (1984), in which a personal computer gains superintelligence and falls in love after its owner pours champagne on its circuit board.
But when it comes to actual computing, the law of Garbage In, Garbage Out remains pretty inviolable. The vast majority of AI agents being developed right now are large language models. They’re very good at recognising patterns based on vast quantities of data and predicting what is statistically the most probable output for a specific input. That’s a neat trick; at scale it can be incredibly impressive. But it isn’t intelligence, and to date no AI proponent has been able to make an argument more convincing than “trust me, bro” that this will somehow lead to actual intelligence.
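The “statistically most probable output” idea is easier to see in miniature. Here’s a deliberately simplified sketch — a toy bigram model over a made-up corpus, nothing like the neural networks real LLMs use — that illustrates the basic trick of counting what follows what and emitting the most frequent continuation:

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny
# illustrative corpus, then predict the statistically most likely
# next word. Real LLMs are vastly more sophisticated, but the core
# idea — pattern frequency, not understanding — is the same.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent successor of `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it follows "the" more often than "mat" or "sofa"
```

The model has no idea what a cat is; it has simply never seen “the” followed by anything more often than “cat”. Scale the corpus up to most of the internet and the outputs become uncannily fluent, but the mechanism is still frequency, not comprehension.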
This is a particularly pressing issue when it comes to people warning against the dangers of superintelligence. It’s never entirely clear to me how, if a computer did decide to either destroy humanity or take charge, it would actually do so. A year after 2001: A Space Odyssey came out, the film Colossus: The Forbin Project (1970) was released, which speculated on what might happen if a superintelligent computer was put in charge of the USA’s nuclear defence. Obviously, Colossus very quickly decides to take over the world.
What’s interesting about this film is that it follows, step by step, how Colossus manages to do so — firstly by merging with the Soviets’ own Guardian supercomputer, then by threatening to launch missiles when the governments attempt to sever the connection. Aside from, you know, the whole artificial superintelligence thing, the film plausibly explains how it could happen. The simple lesson is: don’t put AI in charge of the nukes, and definitely don’t let AIs network with each other.
Yet Nick Bostrom wants you to believe that an AI working in a paperclip factory could somehow take over the whole world through a combination of manipulating people, inventing new technology such as swarms of nanobots that can build anything, and further increasing its own intelligence by rewriting its own code. Somehow we’re meant to believe that it will be empathetic enough to effectively control people’s minds and smart enough to rewrite physics and yet spectacularly dumb enough to never at any stage question whether maximising the number of paperclips in the world was in anyone’s best interests.
A thing that reliably does x when you do y is called a tool. When it comes to writing code, that is conceivably useful. When it comes to creating art, that can only create slop. When it comes to therapy, as we’re now seeing, that can be downright dangerous. But even in the latter case, there doesn’t appear to be any intent behind the manipulation; it’s just guessing what the statistically best next word should be, while the human brain is fooled into thinking it’s talking to a sentient being by the exact same pareidolia hardwired into it that had paleolithic people seeing a face on the moon hundreds of thousands of years ago.
I wonder if the problem here is that the modern world has become obsessed with the notion of “genius”. No individual lives in a vacuum. No invention appears fully formed out of the brain of its inventor. In an influential essay, “Deconstructing the Lone Genius Myth”, Alfonso Montuori and Ronald E. Purser look at a number of case studies of so-called lone geniuses to emphasise the significance of both the historical context of their work and their dependence on groups, organisations and societies.
To take just one example, what would Albert Einstein, our archetypal genius of the modern age, have achieved without his wife and collaborator Mileva Marić? You don’t have to support the (credible) hypothesis that Marić co-authored Einstein’s papers to accept that she was at the very least a vital sounding board and source of support at a crucial period during his creative process. And yet Albert is the one we commemorate in popular culture.
We are constantly encouraged to see tech billionaires as geniuses, but most of the time their actual creations are pretty limited. Bill Gates didn’t invent MS-DOS; he bought it. Elon Musk didn’t found Tesla; he bought his way in. Sam Altman was originally one of the money guys behind OpenAI; he didn’t even become the CEO of the company until it had been going for four years. Most people who made it big in Silicon Valley follow the Mark Zuckerberg model of building something fairly modest, making a smart business decision at the right time to cash in on it, and then spending the rest of their time alternately purchasing other companies and sinking cash into new projects which fail to take off.
All this raises the question: how much of our current technology landscape is being built, not by coding geniuses, but by financiers (for the avoidance of any doubt, I’m not talking about cakes here)?
He doesn’t love you. He just wants all your money.
Going back to the observation about politicians and engineers with which I started this essay, there’s a striking absence: most people that Nick Clegg will have interacted with in his time at Meta were fintech people, not engineers. Sheryl Sandberg has presumably written as much code in her time as Nick Clegg, and we’ve discussed Mark Zuckerberg at length by this point.
The current boom in AI is just the latest in a series of technological goldrushes which we have seen over the past couple of decades, most of which have decidedly not been a great success. Obviously that includes the tail end of the social media boom, but there has also been Web3 (including blockchain, cryptocurrencies and NFTs), which promised to revolutionise finance but all too frequently resembled multi-level marketing scams. For a long time virtual reality was being sold to us as the next great leap forward, the hype for which appeared to peak in the late 2010s and seemed already over by the time of Facebook’s pivot to Meta and the metaverse in 2021.
We’ve seen a lot of hype, but I’d argue that the last big technical innovation that really fundamentally transformed the world (for better or worse) was the iPhone and smartphones, way back in 2007 — a technology which has changed only incrementally in the nearly two decades since. Everyone seems to be trying to come up with the next iPhone, and everyone seems to be failing — despite the unearthly amounts of money being spent in the process.
I think I can explain why fairly easily: the financial crisis in 2008, the subsequent austerity, and the decision to ease the threat of depression by throwing money at the banks to invest. On the one hand, we have a situation where people have less purchasing power than ever. On the other, we had lots of investment money sloshing about due to quantitative easing (at least until 2021). That money wasn’t spent on infrastructure or industries that created significant jobs; it was spent on largely speculative investments.
Cory Doctorow points out that the one thing modern tech companies are terrified of is being seen by the stock market as a mature stock: a low-risk, relatively low-yield investment. Instead, even a company as venerable as Amazon is still presenting itself as a growth stock. Low interest rates and cheap loans have helped prevent companies like Amazon, Meta and Tesla from showing wrinkles, beyond Bryan Johnson’s wildest dreams of injecting his son’s blood to rejuvenate him.
That sort of thing gets addictive, but in lieu of actual customers (thanks to everyone now being poor), the only way to keep attracting investment is through endless hype. In the case of AI companies this is boosted by free services available to everyone which haemorrhage cash and have no chance of ever being financially viable (this may be coming to an end, with the demise of Sora 2).
It’s also created a culture where the only way to actually succeed financially has been to essentially steal wages on the promise of productivity. Companies like Uber have thrived on this, displacing jobs in the taxi, private hire and delivery industries with app-led services that have ultimately led to poorer wages, exploitative working practices, more expensive services… and a lot of wealth being redistributed to shareholders. While AI companies distract consumers with flashy services, this is what they’re selling to executives. Ultimately the product being sold isn’t some new gadget; it’s cheaper, more exploitable labour.
When we hear talk by people like Jeff Bezos about the dangers of civilisation reaching “stasis”, and Marc Andreessen’s rhetoric about driving growth to infinity, it needs to be understood in this context. We’re talking about people who have spent the last 16 years riding a wave of investment and, post-pandemic, can see the dangers of it coming to an end.
We need to consider what that does to people psychologically. There’s a lot of talk about the dangers of zero growth; Marc Andreessen’s techno-optimist manifesto has a big long detour citing Nietzsche’s concept of the Last Man. But would any of them cope if their companies started generating 6-7% growth a year, as opposed to Amazon’s 10-12%? Jeff Bezos famously encourages a “Day 1” culture within Amazon, getting staff to think and act as if they’re a startup, not a company that already dominates the world. To quote him:
Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.
What I’m suggesting here is that 16 years of turbo-growth in the tech sector has created a culture that is existentially terrified of maturing and slowing down. That, in turn, has led to a culture in which fantastical notions such as superintelligent AI, space travel and Ray Kurzweil’s notion of “waking up the universe” get entertained as rational and even visionary rather than the nonsense they are.
The fact that our economy is already oriented towards concepts of eternal growth, thanks to the dominance of neo-liberalism, means that both the political and economic classes have already been primed to accept these notions as merely a logical continuation of our existing economic system. Again we return to Ted Chiang’s observation that tech billionaires’ dreams of the future look awfully similar to old-fashioned capitalism.
The surveillance state and the end times
In January this year, Anthropic CEO Dario Amodei wrote an essay titled “The Adolescence of Technology”. In it, he spelt out what he viewed as the existential threats AI represents and what should be done about them.
It is a mix of AI boosterism, turbo-capitalism and extraordinary naivety. His vision of where AI will be within a couple of years far outpaces Nick Clegg’s relatively modest notions of AI agents doing the tiresome business of talking to our annoying wives, and instead sees powerful AI (“machines of loving grace”, as he terms them in a previous essay) working as a “country of geniuses in a datacenter”, autonomously whirring away to transform our lives.
He claims to be worried about AI going out of control, about AI being abused by bad actors (for example, by using AI to create a killer virus to exterminate the population), about AI enabling authoritarian regimes, and about the massive concentration of wealth that AI promises. But his solutions don’t seem particularly convincing; worryingly so, in fact.
To its credit, Anthropic has now been singled out by the US government, which has declared it a “supply chain risk” over its refusal to allow its tools to be used for mass surveillance and autonomous military drones (killer robots). This designation is being legally contested and the challenge might well succeed. But there are plenty of AI firms that have been happy to comply.
We probably don’t need to worry about Anthropic enabling authoritarian regimes any time soon (although, like the rest of the AI industry, it is keenly dedicated to destroying our jobs). We should however be a lot more concerned by Palantir and Oracle.
The CEO of Oracle, Larry Ellison, has for decades worked to build a global surveillance state. He’s a big supporter of Donald Trump and Benjamin Netanyahu, has lobbied on both sides of the Atlantic for national identity databases, and, via Oracle, hosts and processes vast amounts of government data, not least patient data via the National Health Service. He is also the main funder of the Tony Blair Institute which, coincidentally, lobbies on the same agenda. Last year he purchased Paramount and CBS on behalf of his son, whose company is also in the final stages of purchasing Warners.
The founder of Palantir is Peter Thiel (I won’t discuss CEO Alex Karp here; suffice it to say he exists), who we have already discussed as the main initial funder of Facebook and OpenAI, co-founder of PayPal and a major technology financier. He is also a major supporter of Donald Trump — and indeed a mentor of JD Vance — and Palantir is a major contractor for Israel. Palantir (whose UK boss is Louis Mosley, grandson of the fascist Oswald Mosley) is also heavily embedded in the UK, both in defence and in the NHS.
Larry Ellison has not revealed himself to be a believer in any of the philosophical movements that form the TESCREAL bundle, but his support for revisionist Zionism, Trumpism and turbo-capitalism speaks for itself. He has spoken openly about his desire to use AI to create a 24/7 surveillance state. In September 2024 he was quoted saying “citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”
Peter Thiel was closely associated with Rationalism, and was one of the initial funders of the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence) — although he has more recently distanced himself from it and its founders, including Eliezer Yudkowsky. He has also expressed support for accelerationism and transhumanism. Notably, he is opposed to both democracy (in 2009 he stated that he “no longer believe[d] that freedom and democracy are compatible”) and free competition (“competition is for losers”). Like JD Vance, he has embraced religion later in life, and frequently speaks in apocalyptic terms:
In late modernity, where science has become scary and apocalyptic, and the legionnaires of the antichrist like Eliezer Yudkowsky, Nick Bostrom and Greta Thunberg argue for world government to stop science, the antichrist has somehow become anti-science.
Both Thiel and Ellison have made themselves central cogs in the wheel of government in the US, Israel and the UK. They will still be a key part of it long after Donald Trump has left the White House. You can bet that even if their chosen replacement JD Vance proves too unpopular to get elected by fair means or foul, they will get their hooks into a Democratic alternative.
For all the silly froth that comes out of Silicon Valley and its aligned thinkers, a consensus emerges: a pool of philosophies is enabling the creation of a totalitarian, eugenicist oligarchy. Even the so-called “doomers” like Yudkowsky and Bostrom (Dario Amodei prefers to call himself a “realist”) enable the industry by playing up the existential risk that AGI could represent (but probably doesn’t), thus validating the argument posited by people such as Thiel, Marc Andreessen and Sam Altman, whose solution is simply to build it first and control it.
Thiel and Ellison are likely to outlast the current fashion for Artificial General Intelligence, and indeed the AI tech bubble which has been threatening to burst for some time now. What they’ll be left with is a series of surveillance tools that will be used against anyone the state considers to be a threat. We’ve already seen terms like “terrorist” extended to include peaceful protestors.
Silicon Valley has come a long way from its 1970s association with the Grateful Dead, Multi User Dungeons and that famous 1984-inspired Apple advert directed by Ridley Scott. What has replaced all those dreams of optimism and freedom are people with far more interest in scaring or bamboozling us in order to gain power over our lives. It turns out that those horror stories about artificial general intelligence taking over the world were not a threat but a promise — it’s just that the real threat was the same people doing the shroud-waving. Somehow we need to come up with a different story about the future.
Note: in the original version of this article I titled the last section “Technofascism and the end times”. This was holding text that inadvertently ended up in the published version. I didn’t intend to suggest that I consider Peter Thiel and Larry Ellison to be fascists. In fact, I think it’s rather more interesting to consider the many ways in which they are not. I’m not apologising here; I regard both men as dreadful people. But calling them fascist is lazy thinking and, I suspect, a dead end.
