The United States’ competition watchdog, the Federal Trade Commission (FTC), has opened an investigation into ChatGPT creator OpenAI amid suspicions the startup broke the law by scraping public data and publishing false and defamatory information.
In a 20-page letter, the FTC asked OpenAI to provide detailed information about its technology and privacy protections, including any efforts to prevent a repeat of incidents in which its groundbreaking chatbot published false and disparaging information about members of the public.
OpenAI chief executive Sam Altman said the leak of the regulator’s probe was “disappointing” and would not help build trust.
Altman said on Twitter, referring to GPT-4, the more advanced model that succeeded the one originally powering ChatGPT:
We built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. We protect user privacy and design our systems to learn about the world, not private individuals. We’re transparent about the limitations of our technology, especially when we fall short.
Existential risks
The FTC investigation marks potentially the strongest regulatory move yet to rein in the nascent field of artificial intelligence, whose rapid advancement has generated excitement as well as warnings about existential risks to humanity.
On Wednesday, FTC chair Lina Khan told a congressional committee the agency was concerned about “fraud and deception” associated with ChatGPT’s output. “We’ve heard about libel, defamatory statements, flatly untrue things that are emerging,” Khan said, without disclosing the FTC probe.
OpenAI’s ChatGPT set the tech world alight upon its launch in November with its uncanny ability to mimic human speech. But the chatbot has also generated controversy due to its tendency to produce inaccurate and offensive content.
Global regulators have been scrambling to develop rules to govern AI as tech firms rapidly roll out more advanced versions of the technology.
The US Senate is considering two separate bipartisan AI bills aimed at ensuring US competitiveness and improving transparency around the technology’s use by the government.
The European Union in April reached a preliminary deal on legislation to govern the use of AI that would categorize technologies as posing an unacceptable risk, high risk, or limited risk.