As senators raise concerns about the risks of ChatGPT, Sam Altman says his worst fear is his industry causing ‘significant harm to the world.’
Sam Altman, the CEO and a cofounder of ChatGPT creator OpenAI, testified for the first time at a Senate Judiciary Committee hearing Tuesday about the promise, and the perils, of the artificial intelligence his company has created.
The surging popularity of ChatGPT, a chatbot that answers questions in convincing prose, has heightened public anxiety over the ethics of A.I. Concerns senators raised in the hearing included ChatGPT's ability to covertly influence individuals, its potential to gather data for misuse, and the inherent unreliability of its systems. ChatGPT reached an estimated 100 million users within two months of its consumer launch.
In testimony to the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law, Altman himself acknowledged the need for regulation of generative artificial intelligence, shifting much of the hearing's focus from whether to build a regulatory structure to how.
After noting that OpenAI’s latest release, called GPT-4, is more likely to “respond helpfully and truthfully and refuse harmful requests than any other widely deployed model,” Altman said: “However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
While Altman repeatedly advocated for government regulation, including the creation of an agency responsible for licensing and setting safety standards for large A.I. models, other witnesses demanded it and went further, calling for outside audits of A.I. systems by independent experts. Gary Marcus, a serial A.I. entrepreneur, author, and professor of psychology and neural science at New York University, argued that regulating emerging A.I. systems is urgent.
“We have built machines that are like bulls in a china shop: powerful, reckless, and difficult to control,” Marcus said, noting that it’s easy to agree on what’s desirable in A.I. systems. “We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and above all else to be safe. But current systems are not in line with these values. Current systems are not transparent. They do not adequately protect our privacy, and they continue to perpetuate bias, and even their makers don’t entirely understand how they work.”
Altman, the former president of startup incubator Y Combinator, cofounded OpenAI in San Francisco in 2015. The company has created other A.I. systems, including Dall-E, and since 2019 has received what has grown to more than $10 billion in investment from Microsoft. Today, Microsoft integrates OpenAI's products into its own, including the search engine Bing, which has helped ignite a hugely competitive wave of A.I. investment and development in Silicon Valley. (While on this goodwill tour of sorts in Washington, Altman isn't shying away from controversial ideas back in Silicon Valley: he is also raising $100 million for Worldcoin, an iris-scan-based global cryptocurrency company.)
Senator Richard Blumenthal, the Connecticut Democrat who chairs the subcommittee, said that companies creating generative artificial intelligence should be required to test their own systems and disclose known risks, as well as allow independent researchers access to them. He proposed something of a "nutrition label," or scorecard, for products that would note the limitations of their use.
Senator Dick Durbin, a Democrat from Illinois, said he doubted the speed with which Congress would or could act. “The magnitude of the challenge you’re giving us is substantial,” he said. “I’m not sure that we respond quickly and with enough expertise to deal with it.”
Marcus proposed the creation of both a U.S. agency to regulate companies domestically and a neutral international organization, akin to CERN, the European Organization for Nuclear Research, but focused on A.I. safety rather than high-energy physics.
Altman agreed, supporting regulation more emphatically than Meta founder and CEO Mark Zuckerberg did in his 2018 and 2020 testimony on privacy and electoral misinformation.
“My worst fear is that we … the industry cause significant harm to the world,” Altman said. He went on: “If this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.”