Regulation of AI Could Potentially Exacerbate Monopolies

Aside from becoming the standout year for artificial intelligence, 2023 also witnessed the splintering of the AI community into different factions: supporters of acceleration, pessimistic prognosticators, and those advocating for control.

At the end of the year, it appeared that the supporters of acceleration had emerged victorious. Power had consolidated in the hands of a few of the largest Big Tech companies investing in the most promising startups; innovative AI products were being rapidly developed; and the pessimistic prognosticators were retreating in the face of AI risks. The supporters of control were aggressively pursuing the supporters of acceleration, unveiling bold proposals for regulation and pushing bills through to become law, particularly in light of the elections and the anticipated flood of AI-fueled misinformation.

Ironically, the supporters of control may unintentionally be contributing to the accelerationists’ advantage: New regulations could inadvertently enhance the market power of the accelerationists.

How is it possible that regulators tasked with safeguarding the public interest might take actions that worsen the situation? Do we require new regulations to rein in an even more dominant industry? Are there innovative alternatives for protecting the public interest?

First, contemplate the reasons why the AI industry is already predisposed toward consolidation.


Above all, AI development is predominantly led by industry, rather than government. Despite AI being prioritized as a national concern, the leading AI-producing nation, the United States, relies on businesses for its supremacy. The private sector’s share in the largest AI models increased from 11 percent in 2010 to 96 percent in 2021.

This focus on industry is not limited to the U.S. In recent negotiations over AI regulations in the European Union, Germany, France, and Italy opposed restrictions that might disadvantage their emerging private sector champions. Even in China, the major AI players are private companies, albeit under close state monitoring. These conditions are structural: Businesses already possess the essential resources for developing AI, such as talent, data, computational power, and capital.

While large AI development teams exist within the biggest companies, a small number of smaller enterprises are the most dynamic innovators in creating foundational models—yet they rely on a few large companies for other crucial resources. They require extensive datasets, computational resources, cloud computing access, and substantial capital.

Therefore, it’s unsurprising that companies with these resources—Nvidia, Salesforce, Amazon, Google, and Microsoft—are the primary investors in leading AI startups. Last year, investments totaling more than $18 billion by Microsoft, Google, and Amazon accounted for two-thirds of all global venture investments in generative AI ventures, with just three innovative companies—OpenAI, Anthropic, and Inflection—benefiting from their support.

These investments circulate back to the larger companies in various ways: AI developers turn to Nvidia for graphics processing units or to the cloud providers, such as Amazon and Microsoft, to operate the models. Additionally, Google and Microsoft are actively integrating the AI models into their core products to defend their most crucial assets.

In a pre-existing climate of concentration within the industry, regulations risk further consolidating power in the hands of a few. AI regulation is emerging as a fragmented landscape. China was an early adopter, but last year saw regulators on both sides of the Atlantic take decisive steps to establish guidelines. An October 2023 White House executive order on AI safety will be executed by several government agencies this year, while the EU’s AI regulation, announced at the end of 2023, will be voted on in early 2024.

Many experts anticipate a “Brussels effect,” with EU laws influencing regulations and industry standards globally, while variations are expected due to the political implications of artificial intelligence. The African Union may implement its own AI policy this year, while the United Kingdom, India, and Japan are anticipated to adopt a more hands-off approach.

Now, let’s consider the potential impacts of various aspects of AI regulations.

Firstly, we have the issue of inconsistent rules. In the United States, with no national legislation from Congress and a standalone White House executive order, states are formulating their own AI regulations. For instance, a California bill would mandate transparency requirements for AI models using a certain threshold of computing power. Other states are regulating AI-aided manipulated content—South Carolina is considering legislation to ban deepfakes of candidates within 90 days of an election, and Washington, Minnesota, and Michigan are pursuing similar election-related AI bills.

Such divergences between states put smaller companies at a disadvantage, as they lack the resources and legal support to comply with multiple laws. And these challenges are amplified when considering the global patchwork of regulations.

Then there are the red-teaming requirements. The executive order and EU regulations stipulate that generative AI models above a certain risk threshold should publish results from structured testing—involving simulated “red team” attacks to identify security vulnerabilities. This approach differs significantly from the less expensive manner in which many tech startups have tested product safety, wherein early versions are released, users detect and report bugs, and updates are issued. This proactive approach is not only costly but also requires diverse forms of expertise—legal, technical, and geopolitical. Additionally, a startup is unlikely to be able to vouch for externally sourced AI models, favoring the large players even more.

Moreover, the executive order and EU regulations also call for the “watermarking” of all AI-generated content, embedding information to identify content as AI-generated. While this seems reasonable, watermarking is not foolproof and can be bypassed by black hat hackers, posing technical and legal challenges for smaller companies relying on external content. According to an analysis from the Center for Data Innovation, small and medium enterprises could face compliance costs as high as 400,000 euros (about $435,000) by using some of the higher-risk AI models proposed by the European Union. Regardless of the debatable numbers, the main concern remains that AI regulators impose costs that disproportionately burden smaller firms, potentially hindering their entry into the market.

Without advocating for further regulation, such as of the antitrust nature, what can be done? Or should we simply allow natural market forces to play out?


One solution is to promote new sources of competition. In a rapidly evolving field like artificial intelligence, we can expect innovative newcomers. The internal discord within the industry, as seen in the leadership upheaval at OpenAI, the company behind ChatGPT, suggests that competitors will likely emerge to address perceived gaps.

The availability of open-source AI models can provide these entrants with an opportunity to compete. The U.S. Federal Trade Commission even acknowledges this possibility. Open-source models also create prospects for international competitors, each leveraging distinct features and creative ideas. However, open-source models are not a cure-all, as many that were initially open have transitioned to closed over time. For instance, LlaMA 1, Meta’s open-source model, was transparent about its dataset, whereas the release of LlaMA 2 did not disclose its training data.

Meanwhile, despite foundational model-building being one of the most prized components of AI today, there is already fierce competition in this area: Google’s Gemini, launched in late 2023, is challenging OpenAI’s generative AI suite, as are open-source models.

Even the concentrated infrastructure layer could witness competition. Microsoft, Google, and Alibaba are challenging Amazon’s dominance in cloud services, and a significant chipmaker, AMD, is competing against Nvidia’s near-monopoly on chips, alongside competition from Amazon and Chinese chipmakers. Opportunities for differentiation could shift to applications and services that use upstream models and infrastructure, tailoring AI to the needs of end users and making the technology more competitive.

The computational requirements of AI applications could decline for various reasons. Chips might become more efficient and many applications could utilize smaller, specialized models through knowledge distillation, reducing the need for “large” language models trained on massive datasets under the control of a few companies.

If these developments materialize, such market forces could take time. Policymakers could consider other interim measures through proactive negotiations.

Leaders of the major AI companies are actively involved in advocating for regulation to participate in the rule-making process. This gives policymakers leverage to negotiate alternative arrangements with the major players. For example:

  • Implementing industrial innovation models from history: Policymakers could draw inspiration from the 1956 U.S. federal consent decree involving AT&T and the Bell System. AT&T, a national telecommunications monopoly and leader in technological innovation, was required to license all its patents royalty-free to others under the consent decree, while retaining the monopoly.
  • Adopting existing public investment frameworks: The public sector could invest in collaborative AI development with the major companies, applying the Bayh-Dole Act, which permits businesses to retain ownership of inventions and grants the government a license to use the intellectual property for public purposes.
  • Utilizing DPI’s public utility principles: Policymakers could draw from the model of digital public infrastructure (DPI), envisioning “public rails” on which digital applications can be built by anyone. AI models could be requisitioned by governments as public rails.
  • Applying progressive taxation principles: One approach to subsidize regulatory burdens on smaller companies is to levy a tax on AI-related revenues, with the tax rate increasing in relation to company size.

AI industry consolidation carries implications beyond the typical concerns of potential abuse of market power by dominant firms. Access to user data is already a problem for Big Tech and with AI, new concerns emerge. Fewer firms mean a narrower focus on applications and reinforcement of biases in datasets and algorithms. Overreliance on a select few companies also increases the risk of rapid spread of systemic failures, posing global crises through financial networks reliant on a few key AI platforms, or cyberattacks targeting commonly used AI platforms, which can impact multiple organizations or entire sectors simultaneously.

2024 will be the year when we will witness more regulations governing AI. Let’s ensure that these regulations don’t elevate only a few, while neglecting a much-needed AI faction—the new entrants.

Leave a Reply

Your email address will not be published. Required fields are marked *