
Three Pathways to Distributed Power in the AI Economy

Matt Prewitt

January 20, 2025

On January 15, 2025, at Stiftung Mercator in Berlin, RadicalxChange Foundation, along with partners Global Solutions Initiative and Sciences Po Technology and Global Affairs Innovation Hub, co-hosted a side event to the Paris AI Action Summit. We focused on the future of collective bargaining in the context of the AI revolution. The discussions advanced our thinking in several important ways. Here are some quick initial reflections.

History suggests that, following significant technological breakthroughs, individuals and communities often endure temporary but harmful losses of economic bargaining power. (For example, real living standards declined in industrializing countries between the mid-18th and the early-to-mid 19th centuries, in part because individuals’ contributions to vital productive processes became more interchangeable, leaving those individuals with little bargaining power.) On a longer arc of history, new technology’s benefits usually accrue to whole societies, but such short-term social disruptions partly offset those benefits and frequently destabilize societies. It is therefore important to strategize toward achieving social equilibrium quickly, robustly, and without undermining the processes of technological development.

Power rebalancing after technological breakthroughs occurs through at least three pathways: technological, political, and social. Technological rebalancing occurs when the dissemination or cheapening of the relevant technology undermines the advantage of the technology’s owners (as in the personal computer and software revolutions). Political rebalancing occurs when direct state interventions check the rights of businesses to exploit the new technology (as in the 18th century, when speech controls and intellectual property statutes limited the power of printing press owners). Social rebalancing occurs when social or labor organizations form a collective counterpower, achieving an economic foothold vis-à-vis the technology’s owners (as in the later stages of the industrial revolution). These pathways are not mutually exclusive; each has distinct benefits and drawbacks, and each is more or less suitable in different societal and technological situations.

What might these modes of rebalancing look like in the nascent AI revolution? Which are likeliest to mitigate losses of bargaining power and/or uphold the integrity of individuals and communities? We will first define, then critique and evaluate three pathways.

Three Pathways to Mitigating AI Power Concentration

  1. Technology dissemination: Open source. Possibly, open source AI models will develop in such a way that they remain competitive with the top proprietary models. If so, the power that accrues to the top models’ owners may not be extreme or unprecedented.
  2. State power: Public AI and regulation. Possibly, the state or states with the best AI technology will both (a) remain democratic and (b) retain meaningful and public-interested power over the top models. If so, they may be able to redistribute profits and ownership enough to offset disruption, and regulate harmful misuses.
  3. Collective bargaining: Collective information protection and production. Both more computing power and more high-quality data have the potential to unlock new frontier AI capabilities. Computing power is a fairly direct function of money, so this factor is likely to tend toward a concentration of AI’s power. However, new data cannot always be straightforwardly purchased. If important unique datasets are (a) produced, and (b) not easily or practically available to the top AI model owners, the owners of these datasets have a unique chance of producing AI technology on non-frontier or open source models whose performance can compete, at least in certain domains, with the leading proprietary models. Thus, maximizing both “open” model provision and “closed” data collection might provide the best formula for giving leverage to organizers of AI counterpower. In this picture, trusted managers of significant, unique, collectively-produced datasets could act as tomorrow’s version of labor unions. (A minimal sketch of this mechanism follows the list.)
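
To make the mechanism in pathway 3 concrete, here is a minimal sketch of how a data intermediary might adapt an open-weights model to a unique, collectively held dataset using parameter-efficient fine-tuning. It is an illustration under assumptions, not a recipe discussed at the event: the base model name, the file path, and the hyperparameters are placeholders.

```python
# Illustrative sketch: a data intermediary fine-tunes an open-weights model on a
# unique, collectively governed dataset. Model name, data path, and hyperparameters
# are assumptions chosen only for the example.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Llama-3.2-1B"   # any open-weights base model would do
DATA_PATH = "members_corpus.jsonl"        # unique data pooled by the intermediary

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA keeps adaptation cheap: only small adapter matrices are trained, so the
# collective does not need frontier-scale compute.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

dataset = load_dataset("json", data_files=DATA_PATH, split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain_adapter", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("domain_adapter")   # the adapter stays under the intermediary's control
```

The point of the sketch is economic rather than technical: the scarce input is the members’ data, and whoever is trusted to pool it decides which models it improves.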

Evaluating These Three Pathways

  1. The technology dissemination approach. An important worry with this approach, amply documented elsewhere, is AI safety. But what about the power concentration logic – can “open” AI forestall power concentration? It is far from certain, but quite possible, that the capacities of frontier and open models will not remain as similar as they are now; that is, the frontier models may pull away, especially as the availability of new general training data diminishes and the techniques for improving models through computing power continue to advance.
  2. The state power approach. States that do not control the leading models are in a similar position to any other powerful agent in society. They have an incentive to reduce the extent to which the leading models achieve strong economic or intelligence dominance over them and their societies. If accountable to their citizens, they will try to secure broad-based benefits to their societies from AI, but their leverage to act as a counterpower is unclear. On the other hand, states that do control leading models are, for that very reason, likely to have difficulty remaining truly accountable to citizens. If privately controlled AI achieves extreme capability, it will likely accrue enough capital to thoroughly control politics, undermining democratic accountability. If state-controlled AI achieves such capabilities, the state will simply become an unaccountable power. Regulations directly limiting AI or distributing its power in the public interest – a modern version of those that limited the power of the printing presses through speech restrictions and IP – are unlikely to enable these nations to act as a counterpower, because of the international nature of the technology. However, intelligent application of state power and regulation may significantly strengthen the other two approaches: maximizing the competitiveness of open source models, and removing obstacles to the collective creation of unique, deep datasets that can anchor countervailing economic power.
  3. The collective bargaining approach. Specialized data, controlled by trusted intermediaries, is unlikely to be a critical ingredient in creating premier general AI models. However, it may allow its beneficiaries (from small collectives to sovereign states) to build non-frontier models that are maximally competitive with frontier models in the domains pertinent to the data. The deeper and more unique their data, the likelier this is to be true.

Synthesizing These Three Pathways Into a Productive Agenda

None of these approaches suffices by itself to insure against extreme power concentration. In concert, they reinforce each other, pointing toward a promising strategy:

  1. Seek to minimize the distance between open models and leading proprietary ones (within the bounds of safety).
  2. Do not overreach with direct regulation of non-frontier AI. Instead, develop a regulatory program that makes it simple and minimally legally risky to set up “trusted data intermediaries” (on the scale of communities, industries, and countries), which can act as collective bargaining agents in the marketplace and which are obligated to advocate actively for their constituents’ complex interests in a fiduciary-style manner.
  3. Encourage the production of unique datasets, managed by trusted intermediaries, which enable non-frontier AI to compete with or outperform frontier models in particular domains or industries. To play a meaningful role in the marketplace, trusted data intermediaries must do much more than simply steward the local data that already exists. They must become a socio-political vector that unlocks the collection of exponentially more data than exists today. (A toy sketch of such an intermediary follows this list.)
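
Because “trusted data intermediary” carries much of the weight in this agenda, a toy sketch may help fix ideas. The code below is purely illustrative: the class names, consent purposes, and approval rule are assumptions, not a concrete design proposal. It shows only the bare minimum such an intermediary does: pool data under member consent terms and decide on licensing requests collectively.

```python
# Hypothetical sketch of a trusted data intermediary: a fiduciary body that pools
# members' data and bargains collectively over access to it. All names and rules
# here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Contribution:
    member_id: str
    record: dict                 # the member's data (e.g., domain-specific records)
    allowed_purposes: set        # purposes this member has consented to


@dataclass
class LicenseRequest:
    licensee: str
    purpose: str
    compensation: float          # offered payment to the collective


class DataIntermediary:
    """Pools member data and decides on licensing requests on members' behalf."""

    def __init__(self, approval_threshold: float = 0.6):
        self.contributions: list[Contribution] = []
        self.approval_threshold = approval_threshold  # share of members who must approve

    def contribute(self, contribution: Contribution) -> None:
        self.contributions.append(contribution)

    def eligible_records(self, purpose: str) -> list[dict]:
        # Only data whose contributors consented to this purpose is ever licensed.
        return [c.record for c in self.contributions if purpose in c.allowed_purposes]

    def decide(self, request: LicenseRequest, votes_in_favor: int) -> bool:
        members = {c.member_id for c in self.contributions}
        if not members:
            return False
        # Fiduciary-style checks: consented data exists and members collectively approve.
        has_consented_data = bool(self.eligible_records(request.purpose))
        approved = votes_in_favor / len(members) >= self.approval_threshold
        return has_consented_data and approved


# Example: a small health-data collective weighing an offer from a model developer.
pool = DataIntermediary()
pool.contribute(Contribution("alice", {"resting_hr": 62}, {"diagnostics-research"}))
pool.contribute(Contribution("bob", {"resting_hr": 71}, {"diagnostics-research", "ads"}))
offer = LicenseRequest(licensee="ModelCo", purpose="diagnostics-research", compensation=10_000.0)
print(pool.decide(offer, votes_in_favor=2))  # True: consent exists and both members approve
```

A real intermediary would of course need governance, auditing, and legal standing far beyond this, but the core obligations (consent, collective approval, bargaining leverage) are visible even in a toy.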

Dearth of Precedents?

The dearth of major, successful precedents for trusted data intermediaries is a common theme in this policy area. However, the reasons for this scarcity are quite clear.

Two thought experiments.

Appendix

Here are four premises that may help clarify my reasoning.

  1. It is not “hype” to predict that AI will get much, much better.
  2. AI will not replace people, but it could make their economic contributions more interchangeable and therefore reduce their bargaining power. An important counterweight to this reduction of bargaining power may be organizing toward the production and trusted management of high-value datasets.
  3. It is often claimed that data should be open because it is a non-rivalrous good, but this requires qualification. Let us distinguish between “data” and “information” and place them at two ends of a spectrum.
  4. It is sometimes supposed that most of the important data, which will form the basis of the AI revolution for the foreseeable future, has already been collected and ingested by models. This is true in certain domains, such as the open internet and public domain books. But it is not true in general. In the future, important changes will occur in data collection and access which will make all the world’s current data look like only an extremely sparse sample. For example, consider the data that new neural implant technology could collect, or the minute-by-minute blood chemistry data that future medical devices might collect, or 24/7 time-synced, high-resolution video capturing people’s movements and facial expressions at a population level. Consider also the possibilities of analyzing now-private data, such as existing health records, or population-level phone and text message content.