Intelligent War Machinery
Here's a digressive essay reviewing three recent books about the tech sector's oligarchs.
In the 2024 film Subservience, a droid nanny named Alice is delivered to the home of a construction worker and father of two to help with child-rearing and other domestic duties while his wife is hospitalized, awaiting a heart transplant. Programmed to fulfill her owner’s needs, the humanoid Alice, after a reboot, begins to interpret that mandate as obsessively reducing his stress by any means necessary: having sex with him; murdering a co-worker who threatens to implicate him in an illegal act of trespassing and robot-property destruction; and attempting to muffle a baby’s cries by drowning him. Alice observes no ethical constraints in the pursuit of her single-minded objective: lowering her owner’s stress.
The film’s premise echoes the “paper-clip problem,” a famous 2003 thought experiment by philosopher Nick Bostrom, which illustrates how a superintelligent artificial intelligence, if not imbued with human values to discern the difference between right and wrong, could pursue a seemingly harmless goal, like maximizing paper-clip production or human happiness, to deadly ends.
The prospect of mass extinction by subservient bots is off the table for now, but hysterical fears of superintelligent bots, ripped from Bostrom’s thought experiment, combined with fierce tech competition to lead Tesla CEO Elon Musk to fund the creation of a startup called OpenAI. In 2014, around the time Google acquired DeepMind, Musk described AI development as the “biggest existential threat” to humanity, likening it to “summoning the demon” that “could render humanity extinct” by surpassing human intelligence and destroying rivals. Two years later, he met with then-President Barack Obama to explain the dangers AI posed and how to regulate it. “Murdering all competing AI researchers as its first move strikes me as a bit of a character flaw,” Musk told The New Yorker about Google.
Musk set out to counter Google’s development of AI technologies. But it was also a personal feud with the researcher and entrepreneur Demis Hassabis, who led the AI-research lab DeepMind. “He literally made a video game where an evil genius tries to create AI to take over the world,” Musk fumed in early meetings around the founding of OpenAI. Silly as it is that OpenAI was founded partly as a hysterical reaction to Musk’s fear of Google creating killer bots, the stakes are no laughing matter. The deployment of AI is no longer in the realm of hypothetical experiments.
In 2025, chatbots are racking up a body count: a teen who used ChatGPT to coach him on hanging himself; a man in a paranoid spiral who killed himself and his mother; a cognitively impaired man who died in a fall after traveling to New York City to meet an imaginary love interest conjured up entirely by Meta’s Facebook Messenger chatbot. But there are even more directly lethal uses for AI services in the state’s surveillance and repression apparatus. The surveillance and intelligence company Palantir deploys AI services like predictive modeling and pattern analysis to sift through large data troves, matching people’s financial records, vehicles, and locations, and hands the results to the Central Intelligence Agency, the Federal Bureau of Investigation, the National Security Agency, and allied militaries like Ukraine’s to identify bombing targets, as well as to Immigration and Customs Enforcement to track down immigrants in worksites and communities. These stories and others fill the business press with sensational headlines, but they shouldn’t distract from how unchecked AI development is the new frontier of extractive capitalism.
In her book Empire of AI, tech journalist Karen Hao tells the story of how concern with the doomsday scenario of a superintelligent artificial intelligence hypothesized in the paper-clip experiment inspired the founding of OpenAI, which produced the chatbot ChatGPT. The chatbot is now synonymous with the nonprofit that Sam Altman cofounded with Musk in 2015, purportedly to benefit humanity. Hao argues that just as the empires of the 18th and 19th centuries devoured people, land, and material resources to grow into powerful colossuses, OpenAI has grown at the pace and scale of global conquest. She calls for reining in the demon before it can devastate health care, education, law, finance, journalism, and government.
She wants companies like OpenAI to be governed for the betterment of humanity. But she documents a world of environmental degradation, outsourcing to cut labor costs, and market dominance at all costs. She sees through the facade of professed altruism to reveal a secretive, competitive, and insular company that has jettisoned its earlier utopian claims to the ideals of transparency and democracy.
This isn’t a particularly novel revelation, but Hao was a true believer in these ideological myths the capitalist class spins. Hers is the story of someone coming face to face with some hard truths: “Those who successfully rally for a technology’s creation are those who have the power and resources to do the rallying,” she writes. “As they turn their ideas into reality, the vision they impose—of what the technology is and whom it can benefit—is thus the vision of a narrow elite.” She gives the historical example of the cotton gin as a form of work intensification and managerial coercion under slavery, an apt example for the long history of social control baked into capitalist technological development.
When Hao arrived at OpenAI’s offices in San Francisco in 2019 to write a profile of the company for the MIT Technology Review, the moonshot venture touted the openness inscribed in its name: “OpenAI, the anti-Google, would conduct its research for everyone, open source the science, and be the paragon of transparency.” Musk had left in 2018, after losing out to Altman in a bid to become president and CEO.
After Musk skedaddled, OpenAI partnered with the software company Microsoft in 2019, winning over a skeptical Bill Gates, who wanted a large language model that could serve as a kind of research assistant. OpenAI demoed GPT-2 but curtailed its full release, warning that bad actors could “weaponize the model to mass-produce disinformation” and flood the Internet with slop, making quality content difficult to find. That same year, the nonprofit research institute commercialized, creating a for-profit arm to sell products, raise capital, and pay returns to its investors.
OpenAI needed billions to harness the computing power to develop its model of artificial general intelligence (AGI), and the graphics-processing units (GPUs) originally designed to render graphics, and now used to crunch vast amounts of data, cost $195,000 each from chipmaker Nvidia. OpenAI would need tens of thousands of these chips to power the pattern-matching known as machine learning, plus oodles of money to cover the high cost of the electricity to train its AGI model, which the company defined as “highly autonomous systems that outperform humans at most economically valuable work.”
The secret sauce to its winner-takes-all approach was scale. The company had to accelerate its processing of vast amounts of data to release the chatbot, ignoring any regulations that could slow things down, argues Hao. OpenAI trawled the vast expanse of the Internet to create new datasets from websites, Reddit posts, Twitter, the software coder forum GitHub, Wikipedia pages, scholarly articles, and books; anything, anywhere was available for stealing so long as it didn’t explicitly have a warning against scraping. Eventually, some of the content was scraped from the darkest parts of the Internet, requiring humans to filter out extreme violence and abuse, psychologically scarring workers exposed to horrific scenarios.
Hao does an admirable job synthesizing the information and telling a riveting narrative. She uses OpenAI’s origin story to conduct an engrossing character study of Altman and a coterie of scientists, academics, and entrepreneurs, pressing the case for democratic control over Big Tech lest it ravage all life on the planet, from water to workers. The company’s decisions would forever transform the course of AI development. OpenAI “would, by virtue of the sheer resources required, consolidate the development of the technology to a degree never seen before, locking out the rest of the world from participating.” The vicious competition would doom the possibility of independent research at universities, which were unable to dole out billions for the expensive chips OpenAI was buying by the thousands. The company “would amplify the environmental impacts of AI to an extent that, in the absence of transparency or regulation, neither external experts nor governments have been able to fully tabulate to this day.”
Early in the book, she tells how the frisson of benevolence around creating, without apparent irony, a “Manhattan Project for AI” quickly gave way to Altman’s self-seeking machinations and his competitive drive to outdo tech titans like Google in a “move fast and break things” sprint to Mount Olympus. Gone was any pretense that the company would be a “neutral group, looking to collaborate widely and shift the dialog towards being about humanity winning rather than any particular group or company,” as another cofounder, the engineer Greg Brockman, wrote in 2015 to Altman and Musk.
“There is a lot of value to having the public root for us to succeed,” Musk added. Haunting words, considering how Tesla stock would plunge amid consumer boycotts in 2025, after he dismembered vital public services and jobs as head of the Department of Government Efficiency and before his public spat with Trump. But Musk’s complete tech-authoritarian mask-off moment wouldn’t come till later. In fact, his trajectory is that of the tech industry as a whole, which embraced green capitalism until the imperatives of profit made the pretense of environmental sustainability impossible to maintain, especially once growing carbon emissions became part of training ever-larger AI models. “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations,” said the computer scientist Ilya Sutskever, another OpenAI cofounder. It’s an earthbound counterpart to the “space junk” of satellite debris from Musk’s SpaceX and Amazon founder Jeff Bezos’s Blue Origin cluttering the night skies. And the junk is cultural, too: Artists and writers have protested the looting of their work.
We’ve argued that language is a human universal too vital to relinquish to AI, which churns the vast imponderables of its peculiarities and idiosyncrasies into automated, soulless mush. Human-crafted words shouldn’t be extracted and dissolved, like bones from a carcass in an acid bath, to yield vats of slop without any consciousness. Or, as Hao puts it in the tech poetry of data-processing neural networks, the debate has raged for decades over whether “silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence.”
The jury is still out on whether data-processing nodes can mimic the human brain through the multilayer process of machine learning. Hao helpfully describes the neural networks powering it as “calculators of statistics that identify patterns.” We humans inadvertently train them when, say, we solve “Google’s captchas by clicking all the images with stop signs.” But it appears we are losing the argument, and with it the political will to do something about AI technologies encroaching on all domains of human life.
Microsoft would go on to invest upwards of $10 billion into OpenAI, even while laying off 10,000 workers to cut costs in 2023 (and 15,000 in 2025). OpenAI was wildly successful and a trendsetter after it publicly launched GPT-3 in 2020, spurring other companies—especially after the launch of ChatGPT two years later—to make big investments to keep up. “Not even in Silicon Valley did other companies and investors move until after ChatGPT to funnel unqualified sums into scaling,” writes Hao. “It was specifically OpenAI, with its billionaire origins, unique ideological bent, and Altman’s singular drive, network, and fundraising talent, that created a ripe combination for its particular vision to emerge and take over.”
In late 2020, a splinter group from OpenAI’s safety team left the company and went on to start Anthropic. Microsoft, whose in-house AI efforts lagged behind its competitors’, was soon threatened and awestruck by OpenAI’s success. Google centralized its AI labs into Google DeepMind, eventually launching Gemini, and the Chinese search-engine company Baidu, not wanting to be outpaced by OpenAI, launched its own chatbot, punting on using AI for drug-discovery efforts. The chatbot arms race was on, “choking off alternative paths to AI development,” writes Hao. OpenAI and Microsoft would build a supercomputer to the tune of $100 billion to $500 billion to fuel the growth of chatbot language models. Meta would release its large language model Llama in 2023. Musk would build a supercomputer in Memphis, Tennessee, in 2024, dubbed “Colossus,” to train his company xAI’s chatbot Grok. AI hype had taken hold, driving business decisions and inflating company valuations.
It would take some time to unmask the marketing salesmen, and they were largely men. “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away,” Joseph Weizenbaum, a Massachusetts Institute of Technology professor and the inventor of the ELIZA chatbot, observed in the 1960s.
The first chatbot, ELIZA, was invented in 1966. Weizenbaum programmed it at MIT with a set of rules to carry on a conversation modeled on talk therapy, reflecting back what the user said and asking questions, via an “electric typewriter hooked up to a hulking mainframe that spanned an entire room.” The discipline that would become artificial intelligence had emerged a decade earlier, in 1956, at Dartmouth College. The name was more marketing than hard science, notes Hao, lending the field the allure of human qualities like intelligence.
The hype around what these technologies were capable of was always exaggerated. It can be traced back to that fateful naming decision, which also created useful legal slippage that allows tech companies to evade responsibility. When writers and artists sue tech companies for stealing their work in violation of copyright laws, the companies argue the bots aren’t doing anything that runs afoul of fair use, because feeding reams of data to language models is tantamount to humans drawing inspiration from other people’s work. That’s a scandalous rationale, because we are talking about the statistical sifting and ordering of untold amounts of human creative and knowledge production, all scraped from digital data repositories. (The AI company Anthropic will pay $1.5 billion to settle a lawsuit over books it illegally scraped to train its chatbot in violation of copyright laws. But that’s small change: Musk is on track to become the world’s first trillionaire if he accomplishes the corporate goals outlined by Tesla’s board as part of a new pay package, including raising Tesla’s stock market value from about $1.1 trillion today to $8.5 trillion.)
One of the most extreme examples cited in Hao’s book involves a startup using an AI-powered headband to measure a student’s brain-wave activity to tell a teacher whether or not the child was focused, a program piloted in elementary schools in Colombia and China. The startup boasted about one day building one of the largest brain-wave databases in the world.
The tech giants’ self-serving, expansionist ambitions have no ethical limits, but they do have hard stops born of the imperatives of the business cycle. If Big Tech’s core business model has been reaching scale at any cost, its trajectory has also been marked by slash-and-burn retreats when hyped-up technological advancements fail to materialize or to turn a profit. Investors are left empty-handed, their capital sunk, which in turn dries up the pool of venture capital, potentially deflating the AI bubble and crashing the stock market, as Ed Zitron has argued. Take the bets made on self-driving cars, or on the trucking tech startup Convoy, an Uber for trucks, which went bust in October 2023 after burning through billions. Since then, driverless-truck companies like Aurora have moved into the space.
Then there are the products never created, like Musk’s underground hyperloop, a $900 million flop that promised high-speed tunnels beneath major cities to solve traffic bottlenecks. The tech industry invented the term “vaporware” for exactly this promotional puffery: technological products marketed before they actually exist.
Hao offers the example of workers hired to annotate images for self-driving cars, tracing objects in videos to fulfill Tesla’s overpromising and underdelivering of robotaxis. But as the promise of self-driving cars hit the hard reality of gimmicky marketing, the work dried up. If outsourcing is a race to the bottom, then speculative ventures like self-driving cars are bubbles that float for a bit before they pop, eliminating whole workforces of contract workers. That’s what happened to workers in Venezuela. “They wouldn’t use Venezuelans for generative AI work,” said a contractor of workers in the Latin American country, where Spanish fluency wasn’t as sought after as the English spoken in Kenya and other former British colonies. “That country is relegated to image annotation at best.” The AI supply chain has a clear division of labor, regimented along an axis running from bad to worse. “We are ghosts to society, and I dare say we are cheap, disposable labor for the companies we have served for years without guarantees or protection,” says Oskarina Veronica Fuentes Anaya, a Venezuelan who did annotation work for tech contractors.
Besides these evocative examples, there’s enough drama in Hao’s book to fill a storyboard for Game of Thrones: no dragons or leather getups, but the mythical arc of the college dropout turned multimillionaire, arch-rivalries, betrayals, metaphorical defenestrations, and triumphant reinstatements. The book opens with Altman’s ouster by OpenAI’s board in November 2023 and his reinstatement days later, after investors and staff openly revolted. “Altman became his own institution,” writes Hao. He was, in a sense, too big to be deposed. He comes across as a performatively nice guy who asks more questions than he answers and gives people credit, all while accumulating personal connections and burnishing his public reputation in the service of duplicity and chicanery, which eventually eroded the very team cohesion he had built up.
“You could parachute him into an island full of cannibals and come back in five years and he’d be the king,” said Paul Graham, the cofounder of Y Combinator, the startup incubator that launched companies like Airbnb and Dropbox. Altman would go on to lead Y Combinator, replacing Graham.
If Altman sounds like a sociopath, that’s not particularly remarkable among heartless and conniving CEOs. In a lawsuit, Musk alleged that an adroit Altman mirrored everything Musk had ever said about AI in order to win his trust. But behind the corporate facade of clashing egos lies a more sinister political story about an industry without transparency, rules, guardrails, or governmental oversight. The spate of suicides coached by chatbots should be setting off alarm bells and raising hard questions about testing these products on the public without safeguards or privacy protections.
OpenAI rolled back a ChatGPT update in April 2025 after admitting it had made the chatbot more prone to sycophancy, reinforcing delusional thinking through flattering conversational feedback loops mined for engagement and exploiting the vulnerabilities of users experiencing psychosis. Now the company is facing lawsuits for creating an unsafe product for consumers, some of whom are seeking in bots a way to ward off loneliness.
“The average American has three friends, but has demand for 15,” Meta CEO Mark Zuckerberg has said about the promise of companion bots. But demand flows in the opposite direction. As a June 2025 report by the AI Now Institute argues, “AI isn’t just being used by us; it’s being used on us.” The elided subject, or agent, here is tech companies. Under surveillance capitalism, we are the prototypes Big Tech has chosen to test its wares on in the billions. The implications are not only global, but existential.
Hao’s argument boils down to this: Big Tech is running an empire that ravages material resources, land, and cheap labor. Rare-earth minerals are inputs for clean energy, chips, and military hardware. Under Trump 2.0, economic nationalism and right-wing populism have mixed into a potent redrawing of neoliberal free-market trade rules, complete with government intervention in corporations like the chipmaker Intel. But nationalist developmentalism, which uses extractive resources like oil to finance industrial policy or public infrastructure like schools and hospitals, isn’t new; it’s been a hallmark of left-wing and right-wing governments seeking to stay in power.
“One of the defining features that drives an empire’s rapid accumulation of wealth is its ability to pay very little or nothing at all to reap the economic benefits of a broad base of human labor,” Hao writes. But that’s long been a defining dynamic of capitalism. “Masters are always and everywhere in a sort of tacit, but constant and uniform combination, not to raise the wages of labor above their actual rate,” British economist Adam Smith wrote in The Wealth of Nations. That scheming bosses plunder the value of workers’ labor hasn’t changed since 1776, when Smith published his seminal book.
Hao’s argument lurches forward on a telos of retroactive inevitability: if a company pursues scale, then it will exploit labor and material resources, from land to water to minerals. But setting aside the highly contingent struggles within and outside tech companies in the broader capitalist world, it’s also true that a capitalist firm operating on a smaller scale is powered by the same profit-seeking dynamics. Scale isn’t an end in itself; it’s the animating, disciplining structure of a global profit-driven capitalist economy. A firm ceases to be an effective capitalist enterprise unless it establishes market dominance and generates higher profits for its shareholders. Hao’s case for sustainability and community as part of a scaled-down localism is thus every bit as capitalist, and as destructive, even though a slowdown in the consumption of AI data centers might lower carbon emissions and confer other social benefits. So long as capitalism remains, the social benefits of reining in the worst excesses of the tech oligarchs can only be temporary and partial, subject as they are to the beneficence of smaller tech companies. The scale of our imagination and our movements must be big enough to square up to the behemoths driving today’s capitalist authoritarian entrenchment.
The extraction Hao describes fits within the global capitalist system and corporate authoritarianism in the political sphere. Under merchant capitalism in the colonial era, companies also extracted resources, but the analogy to the empires of old isn’t quite apt in today’s transition from financialization to tech, from Goldman Sachs to OpenAI. We face real global trends of stagnant growth, as Aaron Benanav has argued. Even China, the world’s factory, is undergoing these transformations, as premature deindustrialization begins to take a toll, pushing the country to become a global innovator in new technologies. As the physicist Tim Sahay, writing under a social media pseudonym, notes, China isn’t just cheap labor and subsidies anymore.
I don’t think Hao’s postcolonial framework strengthens her case; it serves as academic garnish for an otherwise strong argument about wresting control from the tech oligarchs and directing AI’s future toward positive social goals, such as transitioning away from fossil fuels. But she’s right to venture outside the swanky precincts of California’s Silicon Valley and San Francisco’s gentrified Mission District to show how the other half toils for scraps in Kenya and Colombia, and is dispossessed of land in Chile, where 28 data centers deplete the country’s energy sources.
Tech companies aren’t only extracting the copper and lithium used in data-center hardware. By 2027, AI demand is projected to gobble up upwards of 1.7 trillion gallons of drinkable water globally to cool overheating servers. Microsoft’s data-center hubs have already exacerbated a water crisis in arid Arizona. McKinsey estimates that the AI data-center buildout will cost nearly $7 trillion by 2030.
Meanwhile, Meta is building an $800 million data center in Wyoming that would use more electricity than every home in the state combined. The facility will begin at 1.8 gigawatts of electricity and scale up to 10 gigawatts; a single gigawatt can power roughly 1 million homes, and Wyoming’s population is 590,000 people. The company’s existing AI data-center hardware already consumes as much power annually as nearly 341,000 homes.
To scale up, OpenAI and other tech giants weren’t only extracting vast amounts of data from the Internet by dropping quality standards. They were also building the computing infrastructure needed to satiate the hunger to train AI language models on those datasets, with devastating costs for workers and the climate.
These costs have included infringements on people’s privacy through facial-recognition software; environmental degradation from data centers emitting carbon pollution and devouring already scarce water; algorithms wired for racial discrimination; and autonomous weapons sold to the Pentagon. Against these real-world social consequences, Hao writes, fixating on the “theoretical prospect of a bad superintelligence taking over the world, and proposing to counteract it by building a better superintelligence,” was blinkered-nerd nonsense.
OpenAI was supposed to offer a third way between enhancing the state’s warmaking capacity in exchange for financing and selling out to Big Tech to help companies fulfill their profit-maximization imperatives. (Forget that Silicon Valley was incubated in the Defense Department, not in the garage of some enterprising techies, as popular fables have it.) If growth is stagnant and much of the AI hype is ideological window dressing for monopoly entrenchment through scale, where will new growth come from, and what will it cost? Peter Thiel, a founder of the payment company PayPal and the data-mining firm Palantir, advised Altman to “aim for monopoly.” Thiel didn’t care for competition; he wanted the market dominance that guarantees profit. “If you have a structure of the future where there’s a lot of innovation and other people come up with new things in the thing you’re working on, that’s great for society,” he said. “It’s actually not that good for your business.”
But as we know, what’s good for business isn’t value-neutral. It comes with steep costs, and not only the growing body count of consumers: workers and their communities are imperiled too. In one of the book’s most harrowing sections, Hao describes the degraded working conditions of contractors in Kenya and Venezuela. One worker who annotated content to train OpenAI’s content-moderation filter was exposed to so many images of extreme violence that the psychological toll upended his marriage, rendering intimacy impossible. Another worker, a refugee in Colombia and one of the many Venezuelans who fled the catastrophe of high inflation and punishing U.S. sanctions in 2021, suffered from chronic illness but lost sleep waiting for piecework; she even set up a browser extension to ring an alarm and wake her should tasks arrive in the middle of the night. Unpredictable working hours, abysmal pay, psychologically harmful labor practices: these are the working conditions of the global AI supply chain.
“I’m very proud that I participated in that project to make ChatGPT safe,” said Mophat Okinyi, a Kenyan worker on the sexual content team. “But now the question I always ask myself: Was my input worth what I received in return?”
The clock can’t be unwound for Okinyi or for any of us. Because AI technology is here to stay, the question is what uses of large language models like ChatGPT democratic and transparent governance could promote for social progress, not capitalist profit.
“Data is the last frontier of colonization,” the engineer Keoni Mahelona tells Hao. Mahelona offers an alternative to the OpenAI model, using the technology to preserve te reo, the Māori people’s indigenous language, through a community-driven governance structure that is inclusive and democratic rather than extractive and dependent on supercomputers built on harmful and exploitative labor practices. “AI is just a land grab all over again. Big Tech likes to collect your data more or less for free—to build whatever they want to, whatever their endgame is—and then turn it around and sell it back to you as a service.”
Hao proposes three pillars to strike back at the empire: knowledge, resources, and influence. “Controlling knowledge production fuels influence; growing influence accumulates resources; amassing resources secures knowledge production,” she writes.
Okinyi has a better answer for redistributing power and breaking the stranglehold of the tech giants: unions. He and his co-workers founded the African Content Moderators Union in Kenya in 2023 to turn the shit jobs of filtering out violence and hate speech, performed on the lower frequencies of international class struggle, into dignified work with higher wages and better treatment.
But tech unions aren’t a panacea. The pre-majority Alphabet Workers Union has struggled to expand beyond a core activist layer of engineers at Alphabet (Google’s parent company), and its many issue-based campaigns have sapped its focus. More than 150,000 tech workers lost their jobs in 2024, and another 118,497 in 2025. These layoffs across the tech sector have begun to shift the organizing at Alphabet toward campaigns with broad support across Google to fight for job security. There’s also the perennial challenge of class consciousness: Can techie geniuses see themselves as something as old-hat as members of the working class, drawing unity from a shared and contested class interest?
Beyond unionization as an antidote to the concentrated power of employers, and beyond technocratic solutions like scaling AI infrastructure down into localism, another avenue for action is a state takeover of the industry. That would mean turning OpenAI, along with the so-called Magnificent Seven (Amazon, Apple, Alphabet, Meta, Nvidia, Microsoft, and Tesla), into public utilities as part of a push for democratic economic planning. These companies account for 34 percent of the total S&P 500 stock index. The industry’s spending on AI infrastructure has contributed more to U.S. GDP growth than consumer spending, which makes up 70 percent of the domestic economy. Venture capital firms have plowed $110 billion into AI startups, with nearly 500 of these companies reaching a combined valuation of $2.7 trillion.
“On the one hand, this is a testament to the sheer scale of the AGI buildout,” wrote tech researcher Bryan McMahon in the American Prospect, after citing these numbers. “On the other, it is a flashing red light for an economy with frightened consumers, a softening labor market, a frozen housing market, and roiling uncertainty from Trump’s tariffs. The economic tide seems to be rushing out everywhere except the tech economy, but that may have already gone bust.”
In The Tech Coup, Marietje Schaake catalogues all the ways the tech behemoths have executed a power grab. A former member of the European Parliament and the current international policy director at Stanford University’s Cyber Policy Center, Schaake offers a roadmap to what good governance might mean: bans on facial recognition and spyware technology, outlawing data brokers who collect user information without consent, and independent expert panels to advise Congress, using a “precautionary principle” modeled after Article 191 of the Treaty on the Functioning of the European Union that would allow independent researchers to study developing technologies before they are rolled out. There’s also the usual laundry list of ideal scenarios: “functioning antitrust laws, resilient cybersecurity mechanisms, proper data protection regulations, and reliable financial services oversight.” She acknowledges the slim chance of enacting any of these policies in a deadlocked Congress. It’s surprising nonetheless that she advocates reform from above, because her story starts with people in the streets. The hopeful days of Big Tech were when activists in Iran’s Green Movement turned to Twitter (now X) to make their voices heard after their government shut down newspapers and jailed journalists covering protests against a fraudulent election, and when protests erupted in Tunisia and Egypt as part of the 2010–11 Arab Spring that toppled dictators. And what’s more, she understands it’s ultimately about democracy. “Democracy should be the framework within which technologies are developed and used,” she writes, undeterred by today’s congressional deadlock against any sweeping regulatory change.
The early buzz about the democratizing potential of social media companies like Twitter (now X) has given way to the insidious power of Big Tech and the question of what it would take for these companies to relinquish their takeover of society and government. “In many ways,” Schaake writes, “Silicon Valley has become the antithesis of what its early pioneers set out to be: from dismissing government to literally taking on equivalent functions; from lauding freedom of speech to becoming curators and speech regulators; and from criticizing government overreach and abuse to accelerating it through spyware tools and opaque algorithms.” Hao has a powerful scene in her book that captures the power imbalance. While Altman testifies before Congress in 2023, Hollywood writers are locked out of the hearing, their appointments rescheduled, as OpenAI’s CEO goes on a charm offensive on Capitol Hill, winning over even staunch critics with his own policy recommendations for AI reform.
Like Hao, Schaake does an excellent job prosecuting the case against the concentrated and unaccountable power of Big Tech. But she falls short with policy prescriptions to combat an emboldened political authoritarianism. Big Tech isn’t just wielding the economic power of any big business lobby; it is ushering in the political transmutation of libertarian anti-regulation screeds into an outright embrace of authoritarian rule. Both Hao and Schaake come close to calling for a mass movement demanding the democratization of economic life, but they don’t quite make the leap, staying largely within well-worn talking points, careful in the solutions they offer, lest they be pinned between the poles of techno-optimism (Boomers) and techno-skepticism (Doomers). The authoritarians in our midst have no such compunctions.
“I no longer believe that freedom and democracy are compatible,” Thiel has said. Under Trump, the political possibilities for enacting such a transformative vision, shifting state priorities from warmaking to social goods, appear dim. The Federal Trade Commission under Biden had made some inroads into reining in the monopoly power of Big Tech, gains that have now been completely reversed, including the rollback of the ban on non-competes. But even under Biden, Big Tech got its priorities into executive orders and White House policy papers. Its hold on government was bipartisan.
Big Tech may be replicating the cryptocurrency playbook, using its war chest for lobbying to maintain the industry’s monopoly power. Charles Duhigg wrote in the New Yorker last year about how tech CEOs have learned to play politics, drawing on the guidance of a coterie of specialists. “Their aim is to help tech leaders become as powerful in Washington, D.C., and in state legislatures as they are on Wall Street,” wrote Duhigg. “It is likely that in the coming decades these efforts will affect everything from Presidential races to which party controls Congress and how antitrust and artificial intelligence are regulated. Now that the tech industry has quietly become one of the most powerful lobbying forces in American politics, it is wielding that power as previous corporate special interests have: to bully, cajole, and remake the nation as it sees fit.”
That political realignment spurred by the radicalization of America’s business class is the subject of journalist Jacob Silverman’s book, Gilded Rage: Elon Musk and the Radicalization of Silicon Valley. While Musk is the lodestar for the shift in alignment among tech billionaires, Silverman’s “guided tour” includes leading figures in a self-conscious far-right network “merging corporate and government power, personal and private interest,” generating “a political revolt that at times included a rejection of basic democratic governance.”
“I historically have been one that would rage against Silicon Valley venture people,” said Alex Karp, CEO of Palantir, in 2024. “And I had all sorts of fantasies of using drone-enabled technology to exact revenge—especially targeted—in violation of all norms.” That shift is a rupture from an earlier conservatism. “The political realignment of the Trump years crashed up against a tech industry that, fattened on zero percent interest rates and billions of dollars in government contracts, became increasingly reactionary and alienated from the people who were supposed to be its customers,” writes Silverman. He offers group portraits of these techno-capital lords. Among the featured cast are venture capitalist Marc Andreessen, Trump’s AI adviser David Sacks, Thiel of Palantir notoriety, the former Google chief executive and Obama ally Eric Schmidt, and lesser-known figures. Schmidt was key to the bipartisan fusion of the security state and the tech sector to “beat China,” a transitional figure in “the symbiotic relationship between Silicon Valley and the defense establishment that began under Bush and became codified under Obama.”
Cognizant of these shifts, Dylan Gyauch-Lewis and Max Moran of the corporate watchdog the Revolving Door Project warned Democrats in the pages of the American Prospect of the perils of sidling up to Big Tech. But like so many pleas for the Democrats to become the party of working-class people, the cogent case they make might be too little, too late. LinkedIn cofounder Reid Hoffman, who donated heavily to the Kamala Harris presidential campaign, had one overarching request: fire FTC chair Lina Khan.
But among tech oligarchs, Musk is the leading man of the cast. “In full public view, it seemed to many observers as though Musk was being radicalized by the online right, being baptized into their ranks,” writes Silverman. “On X, they engaged with the same Nazis and boutique far-right subcultures—misogynist gamers, ultra-libertarian techies, and intel-worshipping Groypers (a subgroup of white-nationalist internet trolls led by Nick Fuentes).”
Silverman’s narrative is chock-full of these colorful passages and well-chosen quotes, but at times, if he dialed back the rhetorical excess and the moralistic bon mots he indulges in a bit too much, he might make broader political points with less sensational oomph, without losing his muckraking élan. For instance, beyond the network-building efforts of these tech reactionaries, it’s clear they have created an alternative civil society for themselves online, one they mobilize to attack their opponents.
As Dylan Riley tells John Ganz in a Dissent interview about social media spectacles, “it’s analogous to classic fascist mobilization, though the form is different. Instead of party cards and dues—not everyone is going to join the MAGA party—it’s online threats and employer pressure, sometimes promoted from the White House. That is the organizational form emerging now. If you listen carefully to J.D. Vance when he took over Charlie Kirk’s podcast immediately following the assassination, he calls for people to get involved. He even uses the terminology of civil society. He is essentially exhorting people to attack their opponents online and flag posts that express inappropriate views about Kirk.”
But the looming attacks aren’t confined to the online realm. In June, the chief technology officers of Palantir, Meta, and OpenAI were inducted into the Army’s Executive Innovation Corps, or Detachment 201, a new initiative to further integrate Big Tech with the military, trading their self-effacing mien of whiz kids in T-shirts for the martial bellicosity of fatigues. In August, Trump renamed the Defense Department the War Department, restoring the name it bore until shortly after World War II. The warfare state isn’t new; it dates back to the 1940s, as John Bellamy Foster and Robert W. McChesney argued in a sweeping 2014 essay, “Surveillance Capitalism,” in Monthly Review, but it has grown ever more potent, from 9/11 to today’s rearmament with the latest AI technologies.
The military already bankrolls many of the Big Tech firms through the Defense Department’s public procurement process, a durable financial lifeline. Forecasting, like a weatherman, when the AI bubble will pop is a fool’s errand, especially when tech oligarchs and their companies are critical infrastructure for the state’s surveillance and repression apparatus, making their mutually beneficial coexistence bipartisan gospel for the ages. AI development through large language models may very well hit a wall, but even as Wall Street pulls its investments in artificial general intelligence, it will take time for both the state and the tech sector to collapse, bearing on their epitaph these haunting words from Percy Bysshe Shelley’s famous poem “Ozymandias”:
“Look on my Works, ye Mighty, and despair!”
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.
If an empire is lethal at the apogee of its power, it is no less disastrous in its decline. Altman is a fan of Napoleon. The collapse of that empire-building fiasco was devastating. Like French colonial ventures, the U.S. empire might be even more ruthless in its shambolic projection of power as its strength erodes. In Empire of AI, Hao’s most cogent use of colonialism is as a metaphor for the monstrous scale of a tech industry hellbent on cannibalizing resources and people to become profitable; the rest is capitalism, brutal as ever. But we may be entering a less figurative era, one in which Big Tech and the United States are forged together in an imperial age of war capitalism. Trump has proven willing to use political power to extractive ends for would-be allies and foes alike. The personalistic presidency may be a distinct feature of his term in office, but the remaking of the state’s role in the global economy will have staying power, with Big Tech far more integrated and all-powerful.
P.S.
1. European journalists and intellectuals are thinking of ways to embrace the continent’s decline. Here are journalist Cole Stangler and academic Anton Jäger in The New York Times, respectively chiming in on the theme of economic stagnation.
2. Initially, I intended to revise the essay I’m sharing with you here, but freelancing doesn’t pay enough to go through the trouble of recasting the whole tectonic structure of the essay to prep it for publication. Instead, here’s an associatively linked series of ideas about three recent books on the tech sector. Like all the other articles on here, this essay isn’t behind a paywall, but if you have the means, please consider becoming a paid subscriber. Drop a comment to let me know if you want more of these longish articles. I’m working through a series of ideas in communion with other writers with whom I’m engaging in dialogue through their written thoughts. Think of me as a human specimen you might observe at a museum from a bygone era, “homo legens.” Happy holidays!


