Worker Power in the Age of AI Monopolies: Why We Need Structural Solutions Now
Elizabeth Wilkins,
President/CEO, The Roosevelt Institute
Ten years ago, the Roosevelt Institute published “Technology and the Future of Work: The State of the Debate.”
We got some important things right—particularly our insight that technology was “underlying and enabling a vast reorganization of both corporations and the overall economy” and our focus on how technological change was reducing “both the political and workplace power of American workers.” But we also got crucial things wrong. We anticipated gradual change over decades, not the AI revolution marked by market concentration that arrived in just a few years. We worried about platform workers but didn’t foresee how a handful of companies would control the foundational infrastructure of AI itself. Most importantly, while we correctly identified the need to put workers at the center of the story, we failed to honestly and pragmatically assess the challenges worker-centered solutions would face against monopoly-scale technological disruption.
We have come a long way in making sure workers are a part of the conversation about how AI is deployed in their workplaces. But while we talk, tech monopolies are consolidating the power to make development and deployment decisions unilaterally. If we are to ensure democratic development and deployment of new AI technology in a way that leads to shared economic prosperity, our solutions have to be bolder, and we need them now.
The Scale of AI Concentration
The barriers to entry for frontier AI development have become prohibitively high for all but the most well-funded organizations, creating an unprecedented concentration of technological, economic, and political power. It is no surprise that the biggest AI titans of the 2020s are the same big tech companies we allowed to concentrate over the 2010s (with the notable exception of OpenAI, which has had the aid of Microsoft). This is in large part because anyone who hasn’t been a winner in the data surveillance economy up to now has a hard time generating the data feedback loops necessary to compete on AI quality. AI systems also exhibit powerful data network effects, creating a self-reinforcing cycle where companies with the most users and the most access to those users’ data develop the best AI systems, which attract even more users. Google, for example, was touting more than a year ago that its AI Overviews feature was seeing 1 billion uses per month.
In addition to network effects, initial and ongoing costs have been high enough to be all but prohibitive for many would-be new entrants. Take training costs: while the original Transformer model cost only $930 to train in 2017, some estimates suggest that future models may cost over $1 billion to train by 2027. While these costs may be coming down, the first-mover advantage has already been locked in. Or take the talent competition: Meta is reportedly offering “$100 million signing bonuses” and total compensation packages reaching “$300 million over four years” to recruit top OpenAI researchers, with OpenAI itself paying top researchers over $10 million annually. Only companies with massive cash flows can afford to compete. And now data center infrastructure costs are skyrocketing.
We have a narrow window before the incumbents’ AI gains become permanently entrenched—once AI systems are deployed and business models are established, they become harder to change. At the same time, our best tools move slowly. Traditional regulation takes time, and building union density takes decades. Tech monopolies are still open to structural intervention, but this window is closing fast.
First Side of the Coin: Corporate Regulation
As SEIU President April Verrett says in her contribution to this collection, “every day, millions of American workers play by rules they never wrote.” We need corporate regulatory solutions that inject worker voice into the managerial decisions of AI companies. For example, we could mandate worker representation on the boards of companies above certain AI compute thresholds or data-processing scales. Unlike external regulators, board members can influence AI deployment decisions as they happen. Tech workers understand industry-specific AI risks that generalist regulators may miss, and board representation creates internal advocacy for workers affected by AI decisions. Such regulation has been proposed: Senator Elizabeth Warren’s Accountable Capitalism Act would require any company with more than $1 billion in annual revenue to obtain a federal charter, which would obligate “company directors to consider the interests of all corporate stakeholders – including employees, customers, shareholders, and the communities in which the company operates” and “ensure that no fewer than 40% of its directors are selected by the corporation’s employees.” For AI companies specifically, this would mean tech giants like Google, Meta, and OpenAI would need federal charters requiring them not only to put workers on the board but also to consider the worker, community, and societal impacts of AI development—not just shareholder profits.
In addition to putting workers in decision-making seats, we should consider other reforms that would mitigate some of the concerning incentives built into tech giants’ revenue models and increase broad, democratic accountability for those companies. For example, we could use ex ante antimonopoly tools like structural separation—which limits companies from competing in adjacent markets in order to prevent self-preferencing—to choke off avenues for further power grabs. Or we could use the tax code, imposing measures like a digital ad tax to disincentivize making the continuous collection and use of data the core of their revenue models. Interventions like these, while not directly related to worker power, help curb the consolidation of Big Tech’s power so that the rest of us have more of a say in how AI is developed and where and how it’s deployed.
Second Side of the Coin: Worker Power Through Sectoral Bargaining
We need to be honest about the enormity of the task of building worker power capable of countering the employer power arrayed behind AI deployment. Industry-wide bargaining is a key mechanism to match the scale and speed of AI transformation. AI impacts entire industries, not just individual workplaces, so sectoral bargaining prevents race-to-the-bottom dynamics where individual employers are undercut by competitors ignoring worker protections. It also aggregates worker knowledge across companies to understand industry-wide AI risks and can coordinate with regulatory agencies on industry-specific AI safety standards. For example: in health care, standards for AI diagnostic tools and patient data use; in transportation, autonomous vehicle deployment timelines and safety standards; in finance, AI lending algorithms and job displacement schedules.
If the moral imperative for workers to have a seat at the table isn’t enough justification, then let’s look at the business case. A recent MIT study found that 95% of business AI deployment pilots are failing, and a McKinsey study found that 80% of companies attempting to deploy AI are seeing no bottom-line impact. S&P Global found that 42% of companies that started AI pilots have abandoned them, up from 17% a year earlier. So far, AI is not producing the productivity returns that have been promised. If we want this new technological frontier to generate economic returns at all, we have to have workers at the table to help figure out how to make it so.
Tech workers are uniquely positioned to wield countervailing power given their placement at the very companies doing the development and their technical knowledge and expertise. They understand how AI systems actually work and can identify risks that external regulators might miss. It is therefore urgent to focus time and resources on organizing tech workers in this cause.
The Choice Ahead
The kinds of reforms we need are enormous, and the current political will is disturbingly low. But change happens because we have solutions ready when political windows open. The AI anxiety of millions of workers is building as AI disruption accelerates, and we will need to be ready with solutions that fit the scale of the problem.
Worker power in the AI age requires structural intervention at the scale of the transformation itself. We need both sides of the coin: corporate regulation that puts workers and the public in the driver’s seat, and sectoral bargaining—especially in the tech sector—that gives workers industry-wide influence over how AI is deployed. This isn’t about slowing innovation. It’s about ensuring innovation serves workers and communities, not just shareholders and executives.
A corporate-captured future is not inevitable. Our choices—in business, in policy, and in organizing—will shape whether AI becomes a tool for shared prosperity or further concentration of power. We’ve built countervailing power against concentrated industries before. The question is whether we’ll act with the urgency this moment demands, or whether we’ll let another technological revolution pass us by. The window for structural change is open, but it won’t stay that way forever.
About the Author
Elizabeth Wilkins, CEO and President of the Roosevelt Institute, formerly served as chief of staff to the chair and as director of the Office of Policy Planning at the Federal Trade Commission (FTC).

Before joining the FTC, she was senior advisor to the White House chief of staff. Elizabeth has also worked in several senior leadership roles at the Office of the Attorney General for the District of Columbia, including senior counsel for policy and chief of staff. She previously served as a law clerk to Associate Justice Elena Kagan of the US Supreme Court and to then-Chief Judge Merrick Garland of the US Court of Appeals for the DC Circuit. Before law school, Elizabeth was a policy advisor in the White House Domestic Policy Council. She began her career as a political organizer for the Service Employees International Union Local 32BJ in New York City. Elizabeth holds a bachelor’s degree from Yale University and a law degree from Yale Law School.
About This Series
This post is part of a series called “Back to the ‘Future of Work’: Revisiting the Past and Shaping the Future,” curated by the Aspen Institute’s Future of Work Initiative. For this series, we gather insights from labor, business, academia, philanthropy, and think tanks to take stock of the past decade and attempt to divine what the next one has in store. As the future is yet unwritten, let’s figure out what it takes to build a better future of work.
About the Future of Work Initiative
The Aspen Institute’s Future of Work Initiative, part of the Economic Opportunities Program, empowers and equips leaders to innovate workplace structures, policies, and practices that renew rather than erode America’s social contract.
About the Economic Opportunities Program
The Aspen Institute Economic Opportunities Program advances strategies, policies, and ideas to help low- and moderate-income people thrive in a changing economy.