The rise of artificial intelligence (AI) has the potential to completely upend antitrust law as we know it, according to Professor Daniel Crane. 

In a recent article in the New York University Law Review, Crane notes that AI is at the core of a number of emerging technologies with the potential to reinvent the entire economy—and that idea carries profound implications for the practice of law. 

Crane—the Richard W. Pogue Professor of Law—recently answered five questions about the issue:

1. Do you see the impact of AI on antitrust being greater than on other areas of the law?

Probably not. 

The cumulative effect of these emerging technologies is going to fundamentally reshape the way the economy works. It’s relevant to everything that touches our economy, from the way we write contracts, to the way securities are offered, to the way insurance policies are written, to the way labor markets work. 

In his book The Coming Wave, Mustafa Suleyman describes how AI has synergies with things like synthetic biology, robotics, quantum computing, nanotechnology, energy expansion, and so forth. 

I wanted to take what is being said by these technologists who are looking 10 or 20 years out and ask what it means for antitrust law. 

Of course, the predictions could be wrong. But if these trends do continue, they have dramatic implications. It’s not limited to antitrust. It really applies to all human existence, honestly.

2. Your article details the four pillars of antitrust law that you expect to buckle if trends continue on their present course. Of the four, which one would be affected first?

I think we’re seeing one already: this idea that one function of competitive markets is to obtain information about people and what people want. Friedrich Hayek famously made the point that no central planner ever has enough information to really understand human desires, and I think he was clearly right. 

Yet the technology we already have has gone remarkably far in getting into our brains, almost reprogramming them. Amazon has already patented a business method for prospective delivery of things you don’t know you want yet. We’re on that track pretty fast. 

From scans of your eyeballs and your fingertip movements to analysis of facial patterns in a movie theater, AI-enabled systems are able to figure out how people are reacting to products and services. Businesses are already widely using this to know us as consumers and even to program us as consumers. 

So the traditional consumer discovery function that we attribute to markets is becoming obsolete.

3. You see new, comprehensive regulation as a likely response to these issues. What might that look like?

AI will certainly be regulated. 

I’m focusing on what kind of regulation could replace the functioning of competitive markets. Because if the predictions are true, competitive markets will go the way of the dodo bird. We’ll have a few large companies controlling even bigger swaths of the economy than they do today. 

So what could regulation look like? 

Some technologists are saying that instead of trying to program the outputs of an AI-driven system, we might instead program the inputs. An AI system has to have marching orders, called its objective function. The problem with regulating AI systems is that what happens between the objective function (the command) and the outputs is a big black box. 

So one regulatory response would be to address what a business is allowed to tell its AI to do. For example, instead of “Maximize the profits of the firm,” the AI might be required to allow innovation without dramatic price increases.
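The distinction between regulating outputs and regulating the objective function can be made concrete with a toy sketch. Everything below is hypothetical and purely illustrative (the demand curve, the price cap, and the numbers are invented assumptions, not anything from the article): an unconstrained system picks whatever price maximizes profit, while a regulated objective rules certain prices out before optimization even begins.

```python
# Toy illustration (hypothetical numbers): regulating the objective
# function (the input) instead of the system's outputs.

def profit(price, cost=4.0, base_demand=100.0, elasticity=5.0):
    """Profit under a simple linear demand curve: sales fall as price rises."""
    units = max(base_demand - elasticity * price, 0.0)
    return (price - cost) * units

def best_price(objective, candidates):
    """The 'AI' here is just a search for the price maximizing its objective."""
    return max(candidates, key=objective)

def regulated_objective(price, price_cap=8.0, penalty=1e6):
    """Same profit goal, but the objective itself forbids prices above a cap."""
    if price > price_cap:
        return -penalty  # ruled out before optimization, not audited after
    return profit(price)

candidates = [p / 2 for p in range(8, 41)]  # candidate prices 4.00 .. 20.00

p_unconstrained = best_price(profit, candidates)           # -> 12.0
p_regulated = best_price(regulated_objective, candidates)  # -> 8.0
```

The point of the sketch is that the regulator never inspects the black box between command and output; it only changes the command, which is the approach the answer above describes.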

4. You speculate that the coming wave might actually overwhelm antitrust law altogether. How likely do you think that is to happen?

In the short run? Not very likely. 

If we’re looking 20 or 30 years out, I think it’s increasingly going to overwhelm antitrust law. If the current path of these technologies continues, it’s going to be very difficult for antitrust law to continue doing the things that it tries to do. 

Think about all the cases against big tech right now. They’re largely based on internal documents and emails. Once you turn over more business discretion to an AI, you’re not going to have that anymore. You’re not going to have evidence of a predatory intent or a predatory practice. Firms will get bigger and bigger because the best AI will have a slim advantage, which will multiply itself across all the business outputs of the firm and lead to dominance.

I think there’s very little that regulators can do to stop that from happening, at least under current antitrust law. There’s almost an inevitability to the continuing roll-up of the economy into a few big firms. The paper is ultimately a call to be prepared for this.

5. How should we prepare for these changes?

A lot has to do with making sure that we have expertise and understanding of these emerging technologies in the right places in government. The wave is going to wash over us whether we want it to or not. 

The question from a citizen’s perspective is, how do you get ready to participate in a democracy where this consolidation of power is likely to continue? How do we advocate for regulatory solutions that give agency to you and me as citizens, as voters, as participants in markets and in workforces? 

Regulation of objective functions—the inputs of AI systems—seems to me like a promising place to focus some energy, because it says we’re not giving up a democratic voice in how the economy operates. The economy may fundamentally change, but what shouldn’t ever change is that the economy works for people and that citizens have a voice in how their economy works.