When AI can independently book your travel, analyze your emails and make pricing decisions, compliance concerns multiply. Morgan Lewis attorneys Joshua Goodman, Minna Naranjo and Phillip Wiese explore how agentic AI’s unprecedented autonomy challenges existing privacy and antitrust frameworks.
Many in the tech industry view AI agents as the next frontier in the development and commercialization of AI technology. AI agents, or “agentic AI,” can be thought of as AI-powered software applications that can take in information on their own, without the same level of human instruction typically relied upon by current generative AI tools, and then use that information together with other tools to accomplish a goal.
This nascent technology is likely to raise many important questions for the sector, including novel legal questions involving antitrust and data privacy law that expand ongoing AI debates and concerns.
In fact, agentic AI is so new that currently there is no consensus in the technology industry on the precise definition of an AI agent, but the basic concept is that an AI agent is a software application that can take in information and then act on it by using a wider variety of tools to achieve a goal with a much higher degree of autonomy than current technologies. Think of a software application that could independently use other software tools like a web browser, spreadsheets or a word processor to accomplish goals defined for it by a user providing instructions in natural language — in much the same way you might instruct another person to help with a task.
In contrast, the generative AI tools that many people have become familiar with over the past couple of years typically rely on detailed prompts — specific and well-defined instructions — to guide the generation of new text, images or other media. AI agents are expected to operate in a somewhat similar way but with much more autonomy, greater capabilities and more broadly defined goals.
To illustrate, an AI agent might be given a task like scheduling a trip for you to visit another city, including by connecting autonomously with various travel websites and booking appropriate hotels and transportation on its own. Another example might be asking an AI agent to analyze a set of spreadsheets for certain data, process the data as needed, and then enter the results of the analysis into an online form on a particular website. Yet another example might be an agent that reads through your emails at regular intervals to find a certain type of correspondence and then takes appropriate actions based on it, such as placing a particular order or arranging for a product return.
While these technologies are not yet in widespread use, and their capabilities and resulting impacts remain largely unknown and untested, several companies expect to roll out AI agents over the coming year.
Antitrust concerns
One use for AI agents could be to assist with product pricing and negotiation tasks. If AI agents are assigned to accomplish goals in these domains, agentic AI applications could potentially raise antitrust concerns that expand upon the issues that have commonly arisen so far in connection with algorithmic pricing.
To date, algorithmic pricing antitrust cases have alleged that competing companies use shared algorithms to collect sensitive nonpublic data and generate pricing recommendations that effectively fix prices.
For example, in United States v. RealPage, the DOJ and various state attorneys general alleged that RealPage, an algorithmic pricing tool, allowed competing landlords to share nonpublic information about apartments, including rent pricing, which was then used to generate pricing recommendations that were noncompetitive and harmful to renters. Other states and the District of Columbia have filed independent litigation regarding RealPage. Similar allegations have been made by private plaintiffs in the multifamily and apartment rental property industry and hotel industry. At present, no court has found there to be antitrust liability based on these types of allegations.
While all of this litigation remains ongoing, two federal district courts have dismissed complaints alleging algorithmic antitrust violations in the hotel industry. Among other issues, those courts cited the plaintiffs’ failure to allege any actual agreement to exchange confidential pricing information, to adhere to recommended prices or even to pool nonpublic, competitively sensitive information from different competitors via the relevant software during its generation of specific price recommendations.
Similarly, in DC’s RealPage case, the court dismissed a defendant from the case based on a showing that the defendant’s use of RealPage’s software did not involve any exchange of proprietary data. The DOJ and Federal Trade Commission (FTC) — which have filed statements of interest in several of the ongoing algorithmic pricing antitrust cases — have so far taken the position that the use of algorithms for pricing decisions can lead to antitrust violations even without explicit agreements to fix prices, and even where the software-generated pricing recommendations are nonbinding and deviated from in practice. It remains to be seen whether courts will endorse that view.
The legal landscape remains fluid as courts continue to navigate these issues, and we expect that similar and new theories of anticompetitive harm may arise in connection with AI agents. For instance, in a 2017 article, Ariel Ezrachi and Maurice E. Stucke distinguished “hub-and-spoke” algorithmic collusion concerns — where a single algorithm acts as the hub of a hub-and-spoke pricing conspiracy — from more complex concerns arising from the conduct undertaken by AI agents. The existing algorithmic pricing antitrust cases, as alleged, basically fall into the “hub-and-spoke” category.
AI agents used in pricing, on the other hand, may raise concerns about autonomous tacit collusion, which to date has not been a major issue with existing generative AI tools given their limitations. Specifically, AI agents acting independently of each other and of humans may be alleged to be capable of engaging in consciously parallel pricing behavior in a way that is more stable, disciplined and effective than human pricing actors. Consciously parallel, unilateral pricing behavior is typically lawful under US antitrust law. Accordingly, even if such outcomes are found to result empirically from the use of AI agents — a big if — antitrust liability under existing law is doubtful.
It is also conceivable that agentic AI with pricing goals could resort to autonomously creating anticompetitive agreements with each other or with human counterparties, absent any express instruction from a human to do so. While it remains unclear if this possibility is realistic or practical, AI agents would seem to heighten the risk of this possibility compared with existing software because of the higher degree of autonomy and wider range of tools they may engage.
This possibility also raises complex and novel antitrust liability questions that will need to be addressed if this type of conduct is found to occur. For instance, could there be liability under antitrust law if an AI agent entered into a collusive agreement, and to whom would that liability apply? Are there practical scenarios in which an AI agent could even disregard express instructions not to reach anticompetitive agreements, similar to how human actors sometimes disregard instructions to act lawfully and, if so, how would that impact the liability question? And if two AI agents were to reach an anticompetitive agreement, what evidence is likely to exist that it occurred? Identifying evidence to support antitrust claims involving AI agents, and testing the premises underlying such theories, may introduce a level of complexity far beyond the current algorithmic pricing cases.
Data privacy and cybersecurity issues
The use of agentic AI also raises a number of privacy and cybersecurity considerations. As companies roll out their AI agents, consumers may be wary about the collection and use of their personal data, including address or credit card information. Data security and privacy will be important issues for companies to proactively address in order to maintain trust and loyalty with their consumers.
Using an AI agent may mean collecting large amounts of data to complete a task. For example, scheduling a trip to another city would require information about one’s travel schedule, travel preferences, credit card and identifying information to book hotels and transportation, and potentially other identifying information to complete the process. Similarly, if an AI agent analyzes emails to automate certain actions, the AI agent may obtain personal information contained within consumers’ email traffic.
As companies collect large amounts of data, it makes them a more attractive target for bad actors and more likely to face cybersecurity attacks. When a company falls victim to a cybersecurity attack, it may face myriad state and federal reporting obligations depending on the data that was lost in the incident, the company’s relationship to the data (i.e., was it a data owner/controller or a service provider) and the residency of the consumer whose data was implicated in the incident. Following a cybersecurity attack, companies may also face civil liability or lawsuits from impacted consumers.
In addition to cybersecurity concerns, agentic AI will raise privacy considerations. Under numerous state privacy laws, including the California Consumer Privacy Act, companies may need to identify to consumers what personal information they collect, for what purpose that personal information is collected and with whom that personal information is shared. Companies will have to know and disclose what consumer information is provided to others.
In the travel example above, the company may need to disclose which travel websites its AI agent will use to book the travel and allow the consumer an opportunity to opt out of sharing personal information. State privacy laws may also require companies to notify consumers about automated decision-making depending on how it is used, which could include using agentic AI, and companies may also need to provide consumers the opportunity to opt out. Companies may also have an obligation to notify downstream vendors if a consumer decides to opt out of automated decision-making or the sharing of their personal information.
Compliance considerations
While any company considering implementing agentic AI will need tailored legal guidance for its own particular circumstances, we see a few high-level compliance considerations from a US federal antitrust law perspective that companies may want to consider.
First, using an AI agent to monitor or enforce an express agreement between horizontal competitors to fix prices or output, rig bids or allocate markets will be treated as per se unlawful. Beyond avoiding the use of AI to facilitate a traditional, human-devised anticompetitive agreement, it may also be prudent to ensure that the prompts or configurations for AI agents involved in pricing tasks include appropriate limitations to prohibit the AI agent from seeking to enter into an express anticompetitive agreement on its own. Such limitations may be part of a broader approach to promote safe and ethical AI activity and could also potentially reflect limitations on other actions that raise antitrust risks short of entering into an express agreement, such as sharing certain information.
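One way such configuration-level limitations might be expressed is a pre-execution screen that blocks per se risky action categories before the agent carries them out. The sketch below is purely illustrative; the action names, the `AgentAction` type and the `screen_action` helper are hypothetical, not part of any real agent framework, and a real deployment would pair blocking with logging and human escalation.

```python
from dataclasses import dataclass

# Hypothetical categories of conduct the agent must never initiate,
# tracking traditionally per se unlawful conduct (price fixing,
# bid rigging, market allocation) and risky information sharing.
PROHIBITED_ACTIONS = {
    "propose_price_agreement",   # express agreement with a competitor
    "share_nonpublic_pricing",   # exchanging competitively sensitive data
    "allocate_markets",
    "rig_bid",
}

@dataclass
class AgentAction:
    """A single step the agent proposes before executing it."""
    kind: str
    detail: str

def screen_action(action: AgentAction) -> bool:
    """Return True if the action may proceed; False if it is blocked.

    Blocked actions should be logged and escalated to human review,
    not silently dropped, so oversight personnel can investigate.
    """
    return action.kind not in PROHIBITED_ACTIONS
```

For example, `screen_action(AgentAction("share_nonpublic_pricing", "send rent roll to competitor agent"))` would return `False`, while an action consulting only public listing data would pass the screen.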
Second, the antitrust “rule of reason” generally applies to exchanges of information among competitors that are not predicated on agreements to fix prices or other traditionally per se unlawful categories of activity. Because the rule of reason considers procompetitive benefits, make sure that the applicable procompetitive benefits of using agentic AI are well-documented. Where appropriate, this might include having the agent document the information and steps taken in connection with its actions.
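Such documentation could be as simple as a structured audit log the agent appends to at each step, recording the data sources consulted and the rationale for each recommendation. This is a minimal sketch under assumed requirements; the `log_agent_step` function and its field names are hypothetical.

```python
import datetime as dt
import json

def log_agent_step(log: list, step: str, inputs: dict, rationale: str) -> None:
    """Append a timestamped record of what the agent did and why.

    Capturing the sources consulted and the procompetitive rationale
    for each recommendation supports later rule-of-reason review.
    """
    log.append({
        "timestamp": dt.datetime.now(dt.timezone.utc).isoformat(),
        "step": step,
        "inputs": inputs,
        "rationale": rationale,
    })

audit_log = []
log_agent_step(
    audit_log,
    step="generate_price_recommendation",
    inputs={"sources": ["public_listing_data"], "unit": "2BR"},
    rationale="Priced against publicly advertised comparable units",
)
print(json.dumps(audit_log, indent=2))
```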
Third, it is also advisable to carefully train business personnel to use and monitor the ongoing deployment of AI agents and to retain and exercise appropriate human oversight of certain key tasks.
With respect to privacy and data security, consumer disclosure and consent are key. Companies developing or using AI agents should identify to consumers what data is collected, for what purposes and who else will receive it so that consumers can make informed decisions about using such agents. Consumers have come to expect that a company’s privacy policy will spell out this information.
Additionally, companies should collect the minimum amount of data necessary to achieve the goals of their AI agent and document how certain information will be used and when it will be deleted. Strong data maintenance and retention policies will help minimize adverse consequences if data is lost in a cyberattack. Companies should routinely undergo risk analyses of their data and their systems holding personal consumer information to minimize the risk of a cyberattack in the first instance.
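In practice, documenting what each data element is for and when it will be deleted can be reduced to a simple retention schedule enforced in code. The sketch below assumes a hypothetical `PersonalDataRecord` structure; real retention systems must also handle legal holds and deletion requests.

```python
import datetime as dt
from dataclasses import dataclass

@dataclass
class PersonalDataRecord:
    """One collected data element, its purpose and its deletion date."""
    field_name: str        # e.g., "credit_card_number"
    purpose: str           # why it was collected
    delete_after: dt.date  # scheduled deletion date

def purge_expired(records: list, today: dt.date) -> list:
    """Return only records still within their retention window."""
    return [r for r in records if r.delete_after > today]

records = [
    PersonalDataRecord("credit_card_number", "book hotel", dt.date(2025, 1, 15)),
    PersonalDataRecord("travel_preferences", "plan itinerary", dt.date(2026, 1, 15)),
]
kept = purge_expired(records, dt.date(2025, 6, 1))
```

Here the expired credit card number is dropped while the still-needed travel preferences are retained, mirroring the minimization principle described above.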