American regulators now appear to be clamping down on generative AI in earnest. The Washington Post has learned that the Federal Trade Commission (FTC) has launched an investigation into OpenAI, the creator of ChatGPT and DALL-E. Officials have requested documents showing how the company tackles risks stemming from its large language models. The FTC is concerned the company may be violating consumer protection laws through “unfair or deceptive” practices that could hurt the public’s privacy, security or reputation.
The Commission is particularly interested in information linked to a bug that leaked ChatGPT users’ sensitive data, including payment details and chat histories. While OpenAI said the number of affected users was very small, the FTC is worried the incident stems from poor security practices. The agency also wants details of any complaints alleging the AI made false or malicious statements about individuals, as well as information showing how well users understand the accuracy of the products they’re using.
We’ve asked OpenAI for comment. The FTC declined to comment, as it typically doesn’t remark on open investigations, but it has previously warned that generative AI could run afoul of the law by doing more harm than good to consumers. The technology could be used to perpetrate scams, run misleading marketing campaigns or enable discriminatory advertising, for instance. If the agency finds a company in violation, it can impose fines or issue consent decrees that force certain practices.
AI-specific laws and rules aren’t expected in the near future. Even so, the government has stepped up pressure on the tech industry. OpenAI chief Sam Altman testified before the Senate in May, defending his company by outlining its privacy and safety measures while touting AI’s claimed benefits. He said protections were in place, but that OpenAI would be “increasingly cautious” and would continue to upgrade its safeguards.
It’s not clear if the FTC will pursue other generative AI developers, such as Google and Anthropic. The OpenAI investigation shows how the Commission might approach other cases, though, and signals that the regulator is serious about scrutinizing AI developers.