Since the release of ChatGPT at the end of 2022, the field of AI has seen impressive developments almost every month, sparking widespread speculation about how it will change our lives. One of the central questions concerns its impact on the workplace. As fears surrounding this issue persist, I believe it's worth revisiting the topic from time to time. Although AI is developing at a dramatic pace, we may gradually gain a clearer understanding of these questions as empirical evidence accumulates and new theories emerge to answer them. In this article, I’ve tried to compile the most relevant theories—without claiming to be exhaustive—as the literature on this topic is expanding by the day. The question remains: can we already see the light at the end of the tunnel, or are we still heading into an unfamiliar world we know too little about?
Technology as a Tool Theory
The "technology as a tool" theory argues that AI is not a revolutionary new phenomenon but merely the latest in a long line of productivity-enhancing technologies, following in the footsteps of the steam engine, electricity, and the computer. Proponents of this view argue that while artificial intelligence will undoubtedly automate certain tasks and eliminate some jobs, it will also create new jobs and industries—meaning that, in the long run, employment will not decline but rather transform.
This theory is supported by many economists, who cite historical examples of how every major technological breakthrough initially displaced workers from the labor market, but over time, new types of occupations and production methods emerged to absorb the workforce. The printing press eliminated the work of monks who copied manuscripts, but it gave rise to the publishing industry. Computers replaced analog data storage devices but created new careers in computer science, data analysis, and software development.
However, in contrast to this optimistic view, the unique nature of artificial intelligence has led to numerous criticisms over time. AI is not only capable of replacing physical labor processes, but increasingly also cognitive, creative, and decision-making functions—capabilities that were once considered the exclusive domain of human workers. So while past technologies largely complemented human abilities, AI in many cases replaces them.
According to Erik Brynjolfsson, director of the Stanford Digital Economy Lab, the transformation driven by AI will be broader and faster than any previous technological revolution. Automation now affects not only factory workers but also lawyers, developers, marketers, and illustrators. Voice recognition, image generation, content production, and software development are becoming increasingly automatable, leading to declining demand in many creative professions.
The tool-like nature of technology comes to the fore when it is used not merely to cut costs but to truly expand human capabilities. Employers’ decisions play a crucial role in shaping how AI transforms the world of work: a company might use AI to support its existing workforce and explore new opportunities, or it might aim to maximize profits through layoffs and task automation. The latter approach, however, risks exacerbating social inequalities and tensions, especially if the benefits of AI are concentrated in the hands of a narrow elite.

Many argue that historical parallels can be misleading, as the impact of AI is not linear and does not necessarily follow the patterns of the past. According to Nobel Prize-winning economist Joseph Stiglitz, while new jobs will indeed emerge, their number is unlikely to fully compensate for those lost. Given the rapid pace of AI development, global competition, and the short-term profit motives of corporations, traditional market adjustments—such as reskilling or retraining workers—may not be able to keep up with the transformation in time.
The Great Displacement Theory
According to the "Great Displacement" theory, the emergence and spread of artificial intelligence is not merely a new technological wave but represents a fundamentally new development: it is capable of automating not only physical tasks but also cognitive work processes that were previously carried out exclusively by humans. The theory was formulated by Erik Brynjolfsson and Andrew McAfee in their book Race Against the Machine, in which they argue that technological progress is advancing more rapidly than individuals and institutions can adapt. As a result, the labor market is undergoing a comprehensive, gradual, yet lasting transformation in which a significant proportion of jobs will either disappear or be radically redefined.
Empirical data already supports this restructuring. According to Goldman Sachs economist Joseph Briggs, unemployment among tech workers aged 20 to 30 has increased by three percentage points since early 2025. This is particularly notable given that the tech sector had experienced steady employment growth over the past two decades. However, the rise of AI has slowed the hiring of new employees, especially in junior positions. This is because companies like Alphabet, Microsoft, and Salesforce already generate 30–50% of their code using artificial intelligence, meaning that much of the work previously done by entry-level developers can now be automated.
The situation is further complicated by the fact that AI can replace not just individual tasks but entire jobs. According to research from MIT, automation accounts for 50–70% of the income inequality that has emerged since the 1980s. Thus, artificial intelligence not only reduces the number of jobs but also deepens socio-economic inequality, as the benefits of productivity gains primarily go to capital owners and technology companies, while the middle class and lower-skilled workers are left behind.

Nobel Prize-winning economist Paul Krugman also paints a bleak picture of AI’s impact. In his view, many white-collar jobs are at risk, especially those that do not rely on creativity or original thinking but involve routine, repetitive cognitive tasks. Krugman argues that it is difficult for governments to respond effectively to the structural changes brought about by AI, as these changes are rapid and far-reaching, and most policy interventions may prove inadequate.
At the same time, critics argue that the "Great Displacement" narrative is overly deterministic. They point to the so-called "lump of labor" fallacy—the mistaken belief that the amount of work available in the economy is fixed. Historically, new technologies have often created entirely new industries and job opportunities. Artificial intelligence may likewise generate new types of demand and, in doing so, create new forms of employment—just as social media and e-commerce led to the emergence of completely new job categories.
In summary, the theory suggests that AI will not simply cause temporary disruption in the labor market but will lead to deep and lasting changes in the world of work. The process has already begun, starting with the most easily automatable roles, and is expected to result in broader economic and societal transformations over time. The central claim of the theory is that it is not just jobs that will vanish—the very meaning and function of work in the economy may be fundamentally redefined.
The Idea of Basic Income
The idea of universal basic income (UBI) has gained renewed momentum in recent years, mainly due to the increasing impact of AI and automation on the labor market. Among the most prominent advocates of the concept are tech billionaires such as Mark Zuckerberg, Elon Musk, and Sam Altman, who believe that technological unemployment caused by automation is inevitable but manageable—provided that society responds in time with a new kind of social contract. A central element of this contract could be basic income: a regular, unconditional cash payment to all citizens.
Supporters of the idea argue that UBI would not only provide economic stability but also uphold human dignity, as it would enable people to work by choice rather than necessity. Elon Musk has stated that “there will come a point where work is no longer necessary” because “AI will be able to do everything.” In this light, basic income is not merely a form of welfare but a kind of social dividend that allows everyone to share in the benefits of technological advancement.
When it comes to financing, research by Aran Nayebi suggests that if AI-driven productivity were to increase by five to six times compared to current levels, it could generate enough output to cover a UBI costing 11% of GDP. In addition, several economists believe that alternative sources—such as land value tax, consumption taxes, or a robot tax—could help fund the system. The idea of a robot tax, advocated by figures including Bill Gates, could create a direct link between automation and social redistribution.
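To get a feel for the orders of magnitude involved, here is a minimal back-of-the-envelope sketch of that financing logic in Python. Every concrete number in it is an illustrative assumption of mine rather than a parameter from Nayebi's research: the baseline GDP figure, the choice of a fivefold multiplier, and the decision to measure the 11% against today's GDP are all placeholders.

```python
# Back-of-the-envelope check of the UBI financing logic described above.
# All figures are illustrative assumptions, not parameters from Nayebi's model.

gdp_today = 27e12              # hypothetical baseline GDP, in dollars
productivity_multiplier = 5    # the "five to six times" scenario from the text
ubi_share_of_gdp = 0.11        # UBI cost pegged at 11% of baseline GDP

gdp_with_ai = gdp_today * productivity_multiplier
extra_output = gdp_with_ai - gdp_today
ubi_cost = ubi_share_of_gdp * gdp_today

print(f"UBI cost:     ${ubi_cost / 1e12:.1f} trillion per year")
print(f"Extra output: ${extra_output / 1e12:.1f} trillion per year")
print(f"UBI would absorb {ubi_cost / extra_output:.1%} of the productivity gain")
```

Under these assumptions the UBI bill amounts to only a few percent of the additional output, which is the intuition behind the "productivity dividend" framing; with a smaller multiplier or a more expensive scheme the picture naturally tightens.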
The historical roots of basic income stretch back to ancient Athens and Enlightenment thinkers such as Thomas Paine and Thomas Spence. In the 20th century, the idea of a “social dividend” gained new momentum through feminist and civil rights movements, and more recently, it has become relevant again due to the COVID-19 pandemic and the rise of AI. Over the past few decades, more than 160 UBI experiments have been conducted around the world, most of which have shown positive effects on mental health, education, housing stability, and entrepreneurship. The impact on willingness to work has been mixed, but there is no consistent evidence of a significant decline.
At the same time, the concept of basic income has faced strong criticism. Some argue that UBI is nothing more than a hidden corporate subsidy for tech companies, which would thus be spared from bearing the social costs of layoffs. Others fear that a UBI system would foster dependence on government aid or even serve as a tool for surveillance and data collection. Philip Alston, former UN Special Rapporteur on extreme poverty, has been especially outspoken, warning that digital welfare systems can easily be turned into instruments of corporate profit-making and state control.
Political resistance is also on the rise. In the United States, for instance, the Foundation for Government Accountability (FGA)—a lobbying group largely funded by billionaires—has launched a campaign to ban basic income experiments at the state level beginning in 2024. The FGA claims that UBI reduces people’s willingness to work and increases reliance on the government, though these claims are not supported by empirical evidence. On the contrary, an experiment conducted in Austin, Texas, found that employment among those receiving $1,000 per month did not significantly decline; in fact, many used the time and security gained to pursue education, start businesses, or take on caregiving responsibilities. Critics also point out that the FGA is backed by an opaque network of donors with questionable funding sources, and its arguments are dismissed in many circles.
Nevertheless, the camp of UBI supporters is growing, and the accumulating empirical evidence is providing increasingly strong arguments in favor of its implementation. If technological unemployment proves as unavoidable as its advocates expect, basic income is no longer just an idealistic vision—it may become a necessary response to the challenges of a fundamentally transforming economic and social system. According to UBI’s most prominent advocates, the real question is no longer whether to introduce basic income, but when and in what form it will happen.
The Apocalypse Vision
The "apocalypse vision" is one of the darkest and most alarming interpretations of the impact of AI and automation on the labor market. According to this view, AI-driven job losses could rapidly lead to a social crisis—especially since the most vulnerable segments of society will bear the brunt, while their political influence is too limited to demand an adequate response from decision-makers.
One of the most vocal alarmists is Dario Amodei, CEO of Anthropic, who believes that AI could eliminate up to half of all entry-level white-collar jobs within the next 1–5 years, causing unemployment rates to reach 10–20%. These roles—such as customer service, financial analysis, legal document review, and programming—are areas where LLM-based systems are already capable of human-like performance. This rapid and radical transformation could become a breeding ground for social tensions and unrest, especially if those affected are slow to adapt and feel excluded from economic value creation.
Historical precedents also support this theory. During the COVID-19 pandemic, automation disproportionately impacted low-skilled, vulnerable workers who lacked both the necessary digital skills and the means to represent their interests. The experience of the pandemic suggests that AI-driven labor transformation could distribute the burden in a similarly selective and unfair manner.
The situation is further exacerbated by the fact that AI and automation not only usher in a new technological era but also intensify existing social inequalities. According to research from MIT, automation has been the primary driver of income inequality in the United States since 1980. The wages of unskilled workers have stagnated or declined, while the wealth of highly educated and technologically capitalized groups has grown exponentially. AI, as the latest and most comprehensive tool of automation, is accelerating this trend to the point where society can no longer process the pace and scale of change.

Kentaro Toyama, a researcher at the University of Michigan, notes that throughout history, technological unemployment has often led to social unrest. During the Industrial Revolution, members of the Luddite movement attempted—unsuccessfully—to prevent the loss of their livelihoods by destroying machines. However, today’s situation may be different: if the white-collar intellectual class also faces unemployment, rebellion may no longer come from the periphery but from groups close to power. Thanks to their networks, rhetorical skills, and political awareness, intellectual workers may lead more effective protests this time. Their goal may not be to shut down technology, but to force deep economic and political structural changes.
Another theory suggests that while AI will initially complement human labor (augmentation), this will quickly transition into full automation, where AI no longer assists but fully replaces humans. Companies will increasingly prefer the efficiency of AI over the cost and unpredictability of human labor. As "agent-like" AI systems (agentic AI), capable of handling complex work processes, emerge across various industries, the scenario in which jobs vanish en masse—without adequate new opportunities emerging for displaced workers—is becoming increasingly realistic.
What makes this apocalyptic vision truly frightening is not just the loss of jobs, but the social, economic, and political upheaval that may follow. According to Amodei, if people lose their role in creating economic value, one of the foundations of democracy could collapse: the political weight of individuals, which is largely derived from their role in the labor market. If a significant portion of society can no longer contribute to economic value, power will increasingly concentrate in the hands of those who control the technology.
There are counterarguments claiming that this vision underestimates human adaptability and overestimates the capabilities of AI. Optimists point out that past technological revolutions also initially caused job losses, but later gave rise to new industries and opportunities. However, many caution that the speed, depth, and scope of the AI revolution are unprecedented, making it uncertain whether historical patterns will apply this time.
The vision of apocalypse, therefore, is not just a pessimistic scenario but also a complex warning: without deliberate social, political, and economic preparation, the AI transformation could trigger a crisis of such magnitude that current institutional systems are unprepared to handle it.
Confusion and Division Among Nobel Prize-Winning Economists
One of the most interesting aspects of the debate on the impact of AI on the labor market is the stark difference of opinion among Nobel Prize-winning economists. While they all recognize the significance of AI, their interpretations of what it means for the economy and society vary widely.
Paul Krugman, recipient of the 2008 Nobel Prize in Economics, is moderately pessimistic about the effects of AI. In his view, large language models—which he considers misleadingly labeled as artificial intelligence—are more akin to a “supercharged automatic text-completion tool.” At the same time, he notes that many well-paid white-collar jobs are fundamentally similar, making them particularly vulnerable to AI. Krugman argues that the impact of AI is “too widespread and too diffuse” to be effectively addressed using traditional economic policy tools. For this reason, he does not believe that governments will be able to mitigate the technology’s negative labor market effects effectively.
Joseph Stiglitz, who was awarded the Nobel Prize in 2001, takes a much more critical stance on the economic and social consequences of AI. He believes that, if left unregulated, AI will further concentrate wealth and weaken workers’ bargaining power. He argues that this phenomenon not only deepens economic inequality but also threatens the dignity of work. Stiglitz rejects the idea of a universal basic income (UBI) as a solution, instead advocating for state-guaranteed jobs. In his view, society’s primary responsibility is to provide meaningful, well-paid employment for everyone, as people’s identity and self-esteem are often tied to their work. However, he warns that AI could disrupt the functioning of the market economy, for instance by “appropriating” consumer surplus and converting it into profits for the wealthiest segments of society.
Daron Acemoglu, winner of the 2024 Nobel Prize, takes a more nuanced perspective. He is critical of the excessive optimism surrounding AI, particularly the narratives suggesting it will lead to explosive productivity growth and economic booms. According to his analysis, AI is expected to increase total factor productivity (TFP) by only 0.66% over ten years, which translates to an annual increase of just 0.06%—far below the most optimistic projections. Acemoglu argues that current AI developments focus too heavily on automation rather than on complementing human capabilities. He believes this approach is not only economically inefficient but also socially damaging, as it preserves or worsens existing inequalities while failing to generate enough new, high-quality job opportunities.
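For readers who want to check the arithmetic, here is how the cited cumulative ten-year figure converts into an annual rate. Only the 0.66% estimate comes from the analysis quoted above; the rest is plain compounding.

```python
# Annualizing the cited ten-year TFP gain of 0.66%.
cumulative_gain = 0.0066                             # +0.66% TFP over ten years
annual_rate = (1 + cumulative_gain) ** (1 / 10) - 1  # compound annual rate

print(f"Implied annual TFP growth: {annual_rate:.4%}")  # ~0.066% per year
# Simple division (0.66% / 10) gives essentially the same figure, which the
# text rounds to roughly 0.06% per year.
```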
The positions of these three economists clearly illustrate the divide within the academic community regarding AI’s future. Krugman approaches the issue with technological skepticism and economic resignation. Stiglitz calls for structural reforms in the name of social justice. Acemoglu offers a cautious, empirically grounded analysis that situates AI within the broader context of long-term economic development. What unites them is the belief that AI’s impact should neither be overestimated nor underestimated—and that the quality of society’s response will ultimately determine how AI shapes the future of work.
The Productivity Paradox
Despite impressive technological breakthroughs brought about by advances in artificial intelligence in recent years—and widespread talk of a productivity explosion—the reality is that productivity growth in advanced economies is stagnating or even declining. This is what we call the modern productivity paradox. The phenomenon is best captured by Robert Solow’s classic remark: “You can see the computer age everywhere but in the productivity statistics.” The problem is so puzzling that four main explanations for this contradiction have emerged in the literature.
1. False Hopes
According to this explanation, the real productivity impact of artificial intelligence falls far short of earlier expectations. This skeptical perspective is linked to the argument made by Robert J. Gordon, who believes that historical technologies—such as electricity and the internal combustion engine—led to far more significant productivity gains than AI. At the same time, history also shows that the effects of such general-purpose technologies (GPTs) tend to become apparent only after a long period, through complementary innovations and organizational transformations.
2. Measurement Problems (Mismeasurement)
According to this view, the productivity benefits of AI already exist, but traditional economic statistics fail to capture them accurately. Free digital services—such as search engines, social media platforms, and chat applications—offer significant value to users, but since they do not involve direct monetary transactions, their impact is not reflected in GDP. Hal Varian's example—the rapid growth of digital photography—clearly illustrates how statistics can obscure technological progress.
In 2000, people still predominantly used analog film cameras and took around 80 billion photos per year worldwide. These required film, development, and printing, so each image cost around 50 cents. Because these involved actual monetary transactions, they were included in GDP: analog photography was measurable as an economic activity, both as a product and as a service.
By 2015, however, thanks to the rise of digital technology, around 1.6 trillion (i.e., 1,600 billion) photos were taken annually—most of them with digital devices like smartphones. From that point on, photography no longer required film, development, or paid services: images were stored and shared digitally, free of charge. As a result, the price per image dropped to virtually zero, while the value for users increased dramatically, as it became possible to take many more photos using far fewer resources.
The issue is that because these new digital photos do not involve monetary transactions, they are not captured in GDP data. Statistically, it thus appears as if the photography industry has shrunk or even disappeared, when in fact people are taking far more photos and gaining far more value from the activity—just not in monetary terms.
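The arithmetic behind this contrast is simple enough to sketch. The figures below are the rounded ones quoted above (80 billion photos at roughly 50 cents each in 2000, 1.6 trillion essentially free photos in 2015), so treat the result as an illustration rather than a precise estimate.

```python
# Rough arithmetic behind the photography mismeasurement example.
photos_2000 = 80e9            # analog photos per year, circa 2000
cost_per_photo_2000 = 0.50    # film, development, and printing (dollars)
photos_2015 = 1.6e12          # mostly digital photos per year, circa 2015
cost_per_photo_2015 = 0.0     # effectively free at the margin

measured_2000 = photos_2000 * cost_per_photo_2000   # visible in GDP
measured_2015 = photos_2015 * cost_per_photo_2015   # invisible to GDP

print(f"2000: {photos_2000:,.0f} photos -> ${measured_2000 / 1e9:.0f}B measured")
print(f"2015: {photos_2015:,.0f} photos -> ${measured_2015 / 1e9:.0f}B measured")
print(f"Output grew {photos_2015 / photos_2000:.0f}x while measured value fell to zero")
```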
This example highlights how technological breakthroughs that reduce prices or create entirely free services often become “invisible” to traditional economic indicators. As a result, these statistics tend to underestimate the actual impact of technological progress on prosperity and productivity. At the same time, several studies have concluded that measurement errors alone are not sufficient to fully explain the productivity slowdown.
3. Unequal Distribution
The productivity gains from AI are largely concentrated in a small number of large companies—the so-called “superstar firms”—while the rest of the economy sees little benefit.
This can lead to industry concentration, reduced competition, and weaker incentives for innovation. According to research by the OECD and other organizations, productivity gaps are widening within industries between leading firms and the average company, while labor income growth is stagnating—especially for median workers. This inequality leads to economic and social tensions and obscures the macro-level benefits of AI.
4. Implementation Lags
The most likely explanation—also favored by Brynjolfsson, Rock, and Syverson—relates to the time required for the technology to mature. As a general-purpose technology, AI can only reach its full potential if companies reorganize their operations sufficiently and develop new processes, capabilities, and management structures. Past examples, such as the adoption of electricity or computers, show that such transformations can take decades. Until then, AI-related investments tend to appear as costs, while their benefits materialize only later—and may even initially have a negative effect on productivity indicators.
Therefore, the productivity paradox may not necessarily reflect AI’s ineffectiveness, but rather the time needed for structural adaptation, the limitations of statistical measurement, and the concentrated nature of the benefits. Nevertheless, understanding the paradox is essential for shaping both economic policy and corporate strategy.
Proposals for Avoiding Expected Unemployment
Although the situation remains highly uncertain, the most widespread and arguably most well-founded fear is that we may face a surge in unemployment—either temporary or even permanent. Classical economic theory identifies five compensatory mechanisms that, at least in principle, can offset job losses caused by technological advancement. These mechanisms are: the creation of new technological jobs (e.g., AI development and maintenance), increased investment driven by higher profitability, expanded demand resulting from lower prices, the emergence of entirely new industries, and wage growth fueled by productivity gains, which boosts consumption and thus employment. These theoretical mechanisms proved effective in the past, especially during the Industrial Revolution, but today’s technological developments challenge these models in several key ways.
Unlike earlier waves of mechanization, modern AI technologies are increasingly capable not only of replacing human labor but also of performing many of the new roles created by compensatory effects. For example, AI may not only replace customer service staff but also operate and maintain itself. This phenomenon weakens the classic compensatory mechanisms and suggests that technological unemployment may become more persistent.
Several approaches are emerging to address this issue. One option is the continuous retraining and education of workers, with a special focus on highly skilled, creative, and technology-intensive jobs. As the range of “simple” jobs continues to shrink due to the growing capabilities of AI, people will increasingly need to prepare for more “complex” roles that demand higher cognitive skills, specialized knowledge, and a readiness to learn. Governments and companies have a key role to play in this process: they must promote lifelong learning and support access to training, especially for those most affected by technological change.
Another potential response is political-level intervention to help manage social transformation. Governments have a responsibility to optimize the technological transition and ensure that its benefits are broadly shared. This includes the development of regulatory and economic policy tools that encourage ethical technological innovation while minimizing the risk of mass unemployment. One possible measure is to revise tax policy—for example, by introducing a “robot tax” on automated systems. Such a tax could partially offset lost tax revenues from displaced human workers and fund programs to support societal adaptation.
One of the most debated yet increasingly discussed proposals is the introduction of an unconditional basic income. I have already addressed this in more detail above.
Conclusions
Although the various theories differ significantly in terms of the extent and direction of the changes they predict, there is a growing consensus that AI will not simply eliminate jobs but will radically transform the structure and nature of work.
Experts agree that the speed and depth of this transformation are unprecedented, and this alone presents serious challenges to society’s ability to adapt. The spread of automation and AI is already having a tangible impact on workers across various sectors of the economy, especially those performing routine, automatable tasks. This becomes particularly problematic when political and institutional responses are delayed or poorly targeted.
Historical examples may serve as reference points, but they do not guarantee that the adaptation mechanisms that worked in the past—such as the spontaneous creation of new jobs or market self-regulation—will be sufficient during the current AI revolution. This is because AI is increasingly capable of replacing not only physical but also cognitive work processes, making it more difficult to reallocate the workforce in traditional ways.
In light of all this, political intervention becomes crucial. To mitigate negative effects, comprehensive, forward-looking regulation is essential—regulation that can not only influence the direction of AI development but also ensure that the benefits of technological progress are distributed more equitably across society. Education, retraining, employee support systems, and the strengthening of social safety nets are all areas where government involvement can play a decisive role in shaping the outcome of this transformation.
Ultimately, the relationship between AI and work will not be determined by the nature of the technology itself but by our collective choices. Whether this paradigm shift strengthens or undermines social cohesion, increases or reduces inequality, will depend on our ability to respond proactively and in solidarity to the challenges of a rapidly changing world.
References
- https://www.weforum.org/stories/2025/04/linkedin-strategic-upskilling-ai-workplace-changes/
- https://www.sciencedirect.com/science/article/pii/S0040162523004353
- https://www.technologyreview.com/2013/06/12/178008/how-technology-is-destroying-jobs/
- https://digitaleconomy.stanford.edu/publications/race-against-the-machine-2/
- https://www.cnbc.com/2025/08/05/ai-labor-market-young-tech-workers-goldman-economist.html
- https://www.businessinsider.com/paul-krugman-says-ai-will-mean-job-losses-no-solution-2023-8
- https://news.mit.edu/2022/automation-drives-income-inequality-1121
- https://www.weforum.org/stories/2024/08/why-ai-will-not-lead-to-a-world-without-work/
- https://nationalaffairs.com/publications/detail/the-case-for-ai-optimism
- https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic
- https://www.thepeoplespace.com/ideas/articles/will-ai-jobs-revolution-bring-about-human-revolt-too
- https://www.ian-leslie.com/p/5-reasons-there-wont-be-an-ai-jobs
- https://aipioneers.org/paul-krugman-asks-if-ai-is-communist/
- https://basicincome.org/news/2019/05/joseph-stiglitz-on-ubi-and-the-future-of-work/
- https://theaiinsider.tech/2024/05/29/two-views-experts-disagree-on-future-economic-impact-of-ai/
- https://economics.mit.edu/news/daron-acemoglu-what-do-we-know-about-economics-ai
- https://www.nber.org/system/files/working_papers/w24001/w24001.pdf
- https://www.bruegel.org/blog-post/ai-and-productivity-paradox
- https://sevenpillarsinstitute-org.sevenpillarsconsulting.com/technology-and-unemployment/
- https://cepr.org/voxeu/columns/fear-technology-driven-unemployment-and-its-empirical-base
- https://voxdev.org/topic/labour-markets/how-will-ai-impact-jobs-emerging-and-developing-economies
- https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/
- https://www.cedefop.europa.eu/en/projects/digitalisation-and-future-work
- https://www.politico.eu/article/geoffrey-hinton-ai-artificial-intelligence-win-nobel-prize-research/
- https://www.gzeromedia.com/gzero-ai/what-two-nobel-prizes-mean-for-ai
- https://impact.economist.com/new-globalisation/the-ai-glass-floor
- https://www.ibm.com/think/insights/ai-and-the-future-of-work
- https://www.numberanalytics.com/blog/technological-unemployment-guide
- https://www.developmentaid.org/news-stream/post/173022/technology-impact-on-employment
- https://en.unesco.org/inclusivepolicylab/analytics/ubi-stuck-policy-trap-heres-how-reframe-debate
- https://pollution.sustainability-directory.com/term/technological-unemployment-risks/
- https://blogs.cdc.gov/niosh-science-blog/2022/02/15/tjd-fow/
- https://www.cnbc.com/2017/12/27/what-billionaires-say-about-universal-basic-income-in-2017.html
- https://basicincometoday.com/what-ubi-critics-get-wrong-about-human-nature/
- https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1488457/full
- https://www.sciencedirect.com/science/article/pii/S2199853125001428
- https://pulse24.ai/news/2025/6/25/13/ais-impact-on-employment
- https://mpra.ub.uni-muenchen.de/122244/1/MPRA_paper_122244.pdf
- https://www.ft.com/video/6114d18f-2149-4c5e-9a7b-d7459b9ef610
- https://jacobin.com/2025/07/artificial-intelligence-worker-displacement-jobs
- https://paulkrugman.substack.com/p/what-deindustrialization-can-teach
- https://www.nber.org/system/files/working_papers/w28453/w28453.pdf
- https://www.sciencedirect.com/science/article/pii/S2773032824000154
- https://paulkrugman.substack.com/p/how-should-we-think-about-the-economics
- https://www.bbva.com/en/innovation/scientists-say-next-ai-stage-economists-disagree-heres/
- https://www.linkedin.com/pulse/labeling-yourself-ai-optimist-pessimist-means-youve-already-evan
- https://www.tse-fr.eu/ai-techno-pessimism-or-techno-optimism
- https://www.reddit.com/r/ask/comments/1bido33/are_you_an_ai_optimist_or_an_ai_pessimist/
- https://hbr.org/2024/05/ai-is-making-economists-rethink-the-story-of-automation
- https://infinitive.com/the-future-of-ai-optimism-pessimism-and-what-lies-ahead/
- https://www.econstor.eu/bitstream/10419/300127/1/1875956077.pdf
- https://www.nytimes.com/2025/05/30/technology/ai-jobs-college-graduates.html
- https://cordis.europa.eu/article/id/430224-how-automation-affects-work-economies-and-society
- https://www.bostonreview.net/forum/ais-future-doesnt-have-to-be-dystopian/
- https://www.city-journal.org/article/zohran-mamdani-artificial-intelligence-jobs
- https://www.sciencedirect.com/science/article/abs/pii/S0016328716302063
- https://www.sciencedirect.com/science/article/pii/S0304393219301965
- https://www.reddit.com/r/singularity/comments/1eec7ct/if_ai_takes_all_of_our_jobswhos_going_to_buy/
- https://compass.onlinelibrary.wiley.com/doi/10.1111/soc4.12962
- https://www.nber.org/system/files/chapters/c14007/c14007.pdf
- https://www.theatlantic.com/sponsored/google/the-jobs-equation-erik-brynjolfsson-qa/3872/
- https://books.google.com/books/about/Race_Against_the_Machine.html?id=IhArMwEACAAJ
- https://dezernatzukunft.org/en/the-productivity-paradox-a-survey-2/
- https://www.technologyreview.com/2018/06/18/104277/the-productivity-paradox/
- https://www.getabstract.com/de/zusammenfassung/race-against-the-machine/18042
- https://ide.mit.edu/insights/analysis-the-productivity-paradox-digital-abundance-and-scarce-genius/
- https://www.reddit.com/r/scifi/comments/1lwn1fd/the_ai_apocalypse_i_fear_most_isnt_skynet_its_a/
- https://www.weforum.org/stories/2018/05/preventing-an-ai-apocalypse/
- https://link.springer.com/article/10.1007/s11625-020-00848-0
- https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence
- https://www.sciencedirect.com/science/article/pii/S0305750X24000573
- https://www.goodreads.com/en/book/show/7488625-the-lights-in-the-tunnel
- https://www.sciencedirect.com/science/article/abs/pii/S0001879120300646
- https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1136&context=ecis2022_rp
- https://www.foresightfordevelopment.org/sobipro/download-file/46-767/54
- https://books.google.com.mt/books?id=13UqxRhU0U8C&printsec=copyright