The increasing reliance on artificial intelligence (AI) poses significant economic risks as businesses approach 2026. The term “slop,” defined by the US dictionary Merriam-Webster as “digital content of low quality that is produced, usually in quantity, by means of artificial intelligence,” was named the word of the year for 2025. This choice reflects a growing awareness of the downsides of AI, even as corporate leaders enthusiastically adopt these technologies to reduce payroll costs.
Ed Zitron, a prominent critic of AI’s trajectory, argues that the industry’s “unit economics” are fundamentally flawed: the cost of servicing customer requests does not align with the revenue those requests generate. His blunt assessment is that the situation is “dogshit.” Even though revenues are growing rapidly as more companies invest in AI, projected expenses continue to outpace gains. In 2025 alone, the industry is expected to attract $400 billion (£297 billion) in investment.
Another vocal sceptic, Cory Doctorow, emphasizes that many AI companies are not profitable and are sustained by massive external funding rather than sustainable business models, an arrangement that often encourages wasteful spending. The challenge for these firms is that each new generation of large language models (LLMs) tends to cost more to build than the last, owing to greater data requirements, energy consumption, and the need for highly skilled technicians.
The extensive datacentres required to develop and deploy these AI models demand enormous financial investment. Bloomberg’s analysis indicated that in 2025 there were around $178.5 billion in credit deals related to datacentre financing. Many new operators are emerging in what has been described as a “gold rush” atmosphere, collaborating with Wall Street firms to capitalize on the AI boom. Yet the reliance on expensive Nvidia chips raises concerns about the long-term viability of these investments, since the chips have a limited useful life that may run out before the loans financing them are repaid.
As the AI industry expands, it increasingly mirrors previous economic bubbles characterized by excessive speculation and financial engineering. The belief that generative AI will eventually yield enough returns to justify the immense investments relies on compelling narratives about transformative potential. Proponents claim that LLMs are not just tools for data analysis but are on the verge of achieving “superintelligence,” as suggested by Sam Altman, CEO of OpenAI, and are positioned to replace human interactions, according to Mark Zuckerberg.
However, the potential downsides of AI deployment have come under scrutiny, particularly in sectors where human oversight is crucial. Brian Merchant, author of “Blood in the Machine,” has documented numerous accounts from professionals, including writers and coders, whose work has been replaced by AI-generated output. Many of these individuals report concerns about its poor quality and the risks of handing critical tasks over to AI systems.
The ramifications of hastily replacing human workers with AI are becoming increasingly evident. In the UK, the high court recently warned about the use of AI in legal proceedings after lawyers were found to have cited entirely fictitious case law. In a separate incident in Heber City, Utah, police officers were compelled to manually verify the output of a transcription tool after it erroneously claimed that an officer had transformed into a frog during a bodycam recording.
Such instances underscore the challenges posed by the “slop layer” of AI-generated content now inundating the digital landscape, which makes it harder to discern factual information. Doctorow warns that AI does not represent the imminent arrival of superintelligence but rather a collection of tools that can enhance productivity when used judiciously by workers.
Recognizing AI’s capabilities and limitations is crucial, yet the current enthusiasm surrounding these technologies may not be sufficient to justify the inflated valuations and ongoing investments. If the industry reassesses its trajectory, the financial markets could face significant upheaval. The Bank for International Settlements (BIS) recently highlighted that the so-called “Magnificent Seven” tech stocks now make up approximately 35% of the S&P 500 index, a notable increase from 20% three years prior. A correction in share prices could reverberate beyond the tech sector, affecting retail investors across various markets, including Europe and Asia.
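To make the concentration risk concrete, a back-of-the-envelope sketch (illustrative only, not drawn from the BIS report; the drawdown scenarios are hypothetical) shows how a fall confined to those seven heavily weighted stocks would feed through to the wider index:

```python
# Illustrative sketch: how a drawdown confined to a handful of heavily
# weighted stocks feeds through to the overall index, holding all other
# constituents flat. The 35% weight is the BIS estimate cited above;
# the drawdown scenarios below are hypothetical.

def index_impact(weight: float, drawdown: float) -> float:
    """Overall index decline if stocks with the given weight fall by `drawdown`."""
    return weight * drawdown

MAG7_WEIGHT = 0.35  # approximate Magnificent Seven share of the S&P 500

for drawdown in (0.20, 0.35, 0.50):  # hypothetical corrections
    print(f"A {drawdown:.0%} fall in those seven stocks alone "
          f"drags the index down about {index_impact(MAG7_WEIGHT, drawdown):.1%}")
```

Even a correction limited to those seven names would, on this simple arithmetic, pull the whole index down by several percentage points, which is why the BIS flags the concentration as a systemic concern.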
In the UK, the Office for Budget Responsibility (OBR) has projected that a “global correction” scenario, with a 35% decline in stock prices, could lead to a 0.6% reduction in the country’s GDP and a deterioration of public finances by £16 billion. Although this would be less severe than the 2008 global financial crisis, the impact would still be felt in an economy striving for stability.
While some may find satisfaction in the idea of tech giants facing challenges, the interconnected nature of the economy means that the repercussions will affect a broad spectrum of stakeholders. As the industry continues to evolve, the balance between harnessing the benefits of AI and managing its risks will be critical to ensuring sustainable growth in the coming years.