OpenAI's leadership said on Monday that humanity cannot afford a merely reactive approach to the possible existential risks posed by superintelligence built on artificial intelligence (AI) tools.
In a blog post, OpenAI CEO Sam Altman, Co-founder and Chairman Greg Brockman and Co-founder and Chief Scientist Ilya Sutskever argued that the world needs an oversight body for AI much like the International Atomic Energy Agency (IAEA), which monitors nuclear energy and works to prevent its use for military purposes.
“Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the three co-founders wrote in the post.
They highlighted that ‘superintelligence’ would carry far greater upsides and downsides than other technologies humanity has had to contend with in the past.
“We can have a dramatically more prosperous future, but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive,” the OpenAI leadership said.
As a starting point for overseeing AI, the OpenAI leadership suggested coordination among the leading development efforts across organisations to maintain safety and help these systems integrate smoothly with society. They also suggested that governments could set up projects that current AI efforts become part of, or that the major players could collectively agree to limit the growth of frontier AI capability to a certain rate per year.
They added that the onus was also on individual companies to act with an extremely high standard of responsibility. In a recent interview with BW Businessworld, VMware CEO Raghu Raghuram emphasised that companies at the forefront of AI technology needed to act responsibly.
“The companies that are building these new AI models and developing the technology, they understand the power of these things better than most people like you and me – who are on the outside. Hence, the onus is on them to both educate and put the safeguards as needed. That is where the higher sense of responsibility needs to come from,” Raghuram said.
According to a Goldman Sachs report, generative AI can drive a 7 per cent (or almost USD 7 trillion) increase in global GDP and lift productivity growth by 1.5 percentage points over a 10-year period.
Given the potential risks of AI technology, there have also been multiple calls around the world for a pause on its development. In March, billionaire entrepreneur Elon Musk and a group of AI experts and industry executives called for a six-month pause on developing AI systems more powerful than OpenAI’s GPT-4, in an open letter that cited potential risks to society.
But given the pace of development and the sheer business potential involved, such a pause is implausible unless there is a global consensus, and that remains a distant possibility.
OpenAI said that it would be both risky and difficult to stop the creation of superintelligence altogether. “Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” the company’s leadership wrote.
While it holds that the limits placed on AI systems should be decided democratically, the ChatGPT maker conceded that it does not yet know how to design such a mechanism, though it plans to experiment with its development.
“Given the risks and difficulties, it’s worth considering why we are building this technology at all,” the OpenAI leadership said, concluding that they had to get it right.