Governance

‘If AI goes wrong, it can go quite wrong’: Here’s ChatGPT CEO’s full testimony in US Congress

Sam Altman, CEO of OpenAI, the company behind ChatGPT, testified at an oversight hearing on artificial intelligence

By Nandita Banerji
Published: Wednesday 17 May 2023
Samuel Altman testifies before US lawmakers. Screengrab: @SenBlumenthal / Twitter

Samuel Altman, the chief executive of OpenAI, the company behind the artificial intelligence chatbot ChatGPT, testified before the United States Congress on the imminent challenges and the future of AI technology. The oversight hearing was the first in a series of hearings intended to write the rules of AI.

Here are a few key takeaways from Altman’s testimony:

AI-generated opening remarks

US senator Richard Blumenthal played an audio clip with “introductory remarks” from a computer-generated voice that sounded just like him. He revealed that the audio was made by a voice cloning software, with words written by ChatGPT. 


Learning from social media’s mistakes

Congress missed the opportunity to regulate social media at its inception, the senator said, which led to problems like misinformation and data privacy violations on social media platforms.

An atom bomb or printing press

Senator Josh Hawley asked if ChatGPT was similar to the printing press or the atom bomb. Altman said it can be a printing-press moment, but acknowledged the threat posed by AI. “If this technology goes wrong, it can go quite wrong,” Altman said.

Threat to jobs

Altman accepted that GPT-4 would entirely automate some jobs away, but said it might also create new ones that he believes will be much better.

Regulatory intervention

The CEO supported regulatory intervention in AI that would maximise the benefits of the transformative technology while minimising its harms.


Here is the full transcript of Altman's testimony before Congress.

Thank you. Thank you, chairman Blumenthal, ranking member Hawley, members of the judiciary committee. Thank you for the opportunity to speak to you today about this. It’s really an honour to be here.

My name is Sam Altman. I’m the chief executive officer of OpenAI. OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives. But also that it creates serious risks we have to work together to manage. 

We’re here because people love this technology, we think it can be a printing press moment. We have to work together to make it so. OpenAI is an unusual company. And we set it up that way because AI is an unusual technology. 

We are governed by a non-profit and our activities are driven by our mission and our charter, which commit us to working to ensure the broad distribution of the benefits of AI and to maximising the safety of AI systems.

We are working to build tools that one day can help us make new discoveries and address some of humanity’s biggest challenges like climate change and curing cancer. 

Our current systems aren’t yet capable of doing these things, but it has been immensely gratifying to watch many people around the world get so much value from what these systems can already do today.

We love seeing people use our tools to create, to learn, to be more productive. We're very optimistic that there are going to be fantastic jobs in the future and that current jobs can get much better.

We also love seeing what developers are doing to improve lives. For example, Be My Eyes used our new multimodal technology in GPT-4 to help visually impaired individuals navigate their environment.


We believe the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work. And we make significant efforts to ensure that safety is built into our systems at all levels.

Before releasing any new system, OpenAI conducts extensive testing, engages external experts, improves the model's behaviour and implements robust safety and monitoring systems.

Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming and dangerous capability testing. We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully and refuse harmful requests than any other model of similar capability. 

However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities.

There are several other areas I mentioned in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures and examining opportunities for global coordination.


And as you mentioned, I think it’s important that companies have their own responsibility here, no matter what Congress does. 

This is a remarkable time to be working on artificial intelligence. But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too. But we believe we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides.

It is essential that powerful AI is developed with democratic values in mind and this means that US leadership is critical.

I believe that we will be able to mitigate the risks in front of us and capitalise on this technology's potential to grow the US economy, and the world's. I look forward to working with you to meet this moment, and I look forward to answering your questions. Thank you.
