Sen. Cruz Seeks Answers on FTC’s Lawless Attempt at Regulating AI
In letter drafted with ChatGPT’s help, Sen. Cruz warns agency it has no authority to regulate constitutionally protected speech
WASHINGTON, D.C. – U.S. Senate Commerce Committee Ranking Member Ted Cruz (R-Texas) sent a letter to Federal Trade Commission (FTC) Chairwoman Lina Khan seeking answers regarding her plans to regulate artificial intelligence for “disinformation” and “bias.” The letter explains that the FTC’s review of the data used to train large language AI models, which comes without any explicit statutory authorization from Congress, may also infringe on constitutionally protected speech. The letter comes ahead of today’s Commerce subcommittee hearing, “The Need for Transparency in Artificial Intelligence.” As Ranking Member Cruz’s letter explains, the AI model ChatGPT assisted in the drafting of his oversight inquiry.
Sen. Cruz wrote:
I am writing to request information about the stance of the Federal Trade Commission (“FTC” or “Commission”) on the regulation of artificial intelligence (“AI”). Your public comments, as well as comments made to this Committee by senior FTC staff, suggest the FTC intends to play a role in aggressively policing AI despite receiving no explicit statutory authorization to do so from Congress. As further evidence of the FTC’s intent, on July 13, 2023, a leaked Civil Investigative Demand (“CID”) sent by the FTC to OpenAI—the California-based company best known for its development of ChatGPT—shows the FTC is pursuing AI regulation under legal theories that exceed the agency’s statutory authority and would entail regulation of constitutionally protected speech.
Like many computer applications, AI is a productivity tool that is useless without human guidance. In fact, ChatGPT assisted in drafting this letter. But AI computer code, apart from its use by a consumer, has no inherent ability to violate the Civil Rights Act or Section 5 of the FTC Act as your May 3rd op-ed in the New York Times, titled “We Must Regulate A.I. Here’s How,” implies. You wrote that “A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination—unfairly locking out people from jobs, housing, or key services.”
For the FTC to undertake new regulation or an investigation, more than fearmongering and fanciful speculation are required by law. The FTC Act requires that the Commission have a “reason to believe” that a party possesses evidence of an unfair or deceptive act or practice in order to issue a CID. Your op-ed argues for going after “not just the fly-by-night scammers” but also “the upstream firms that are enabling them” by producing problematic AI “tools.” This approach is a stark departure from past FTC practice, as the Commission has traditionally focused on the harm caused by a product’s use—not its design—in its enforcement actions. Furthermore, such regulation would represent an astonishing expansion of power over otherwise-benign products. It would be akin to the FTC regulating a cell phone’s design in order to enforce the do-not-call registry.
Your comments were reinforced by FTC staff during a subsequent briefing to the Committee about AI on June 2, 2023. During the briefing, FTC staff made clear that the agency is looking for ways to determine whether data sets used to train AI models are biased, discriminatory, or contain “misinformation,” suggesting the FTC was considering an expansive regulatory approach to AI to crack down on non-commercial speech. Your staff’s response to concerns that the FTC would, in assessing bias or misinformation, be operating outside its statutory authority and acting as “speech police” for broad swaths of data was vague and unsatisfactory.
While the FTC undoubtedly has the statutory authority to initiate enforcement actions against companies engaged in “unfair or deceptive acts or practices,” the FTC may not launch a preemptive regulatory approach against the code underlying AI systems in order to prevent “bias” or preclude the use of undefined “discriminatory” datasets. Such an extralegal approach would inevitably involve the policing of constitutionally protected speech, including internet and user-derived data used to train AI models. This is well beyond the FTC’s statutory mandate. The FTC has no authority or business attempting to regulate constitutionally protected speech.
Given this context, the CID that the FTC sent to OpenAI is particularly troubling, as is the fact that the CID was leaked. As Sam Altman, CEO of OpenAI, noted, such a leak “does not help build trust” between the company and government regulators. Moreover, the questions and document requests within the CID suggest that the FTC is now implementing many of the alarming legal theories that senior agency leaders told Committee staff they were contemplating. The CID seeks information on the training data for OpenAI’s Large Language Model, such as the content categories and languages incorporated. The CID also asks about instances where ChatGPT has led to the “safety challenges” identified in OpenAI’s GPT-4 System Card, which include “harms of representation” and “disinformation.” To the extent it is even constitutional for Congress to prohibit such speech-based harms, Congress has not done so here, nor has it authorized the FTC to pursue these issues. Finally, the CID directs OpenAI to snitch on users of ChatGPT who engineered prompts to circumvent ChatGPT filters and rules, a new form of surveillance with the disturbing potential to chill free speech.
To better understand the FTC’s views on its regulatory and enforcement authority with respect to AI, Sen. Cruz is seeking answers to a number of questions regarding the agency’s plans, including details regarding the leak of a Civil Investigative Demand issued to OpenAI.
The full text of the letter is available HERE.