OpenAI CEO Sam Altman is leaving the internal committee OpenAI created in May to oversee “critical” security decisions related to the company’s projects and operations.
In a blog post today, OpenAI announced that the committee, the Safety and Security Committee, will become an “independent” board oversight group chaired by Carnegie Mellon professor Zico Kolter and including Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony EVP Nicole Seligman. All are current members of OpenAI’s board of directors.
OpenAI noted in its post that the committee conducted a safety review of o1, OpenAI’s latest AI model, albeit while Altman was still chair. The group will continue to receive “regular reports” from OpenAI’s safety and security teams, the company said, and will retain the power to delay releases until safety concerns are addressed.
“As part of its work, the Safety and Security Committee… will continue to receive regular reports on technical evaluations of current and future models, as well as reports on ongoing post-launch monitoring,” OpenAI wrote in the post. “We are evolving our model release processes and practices to establish an integrated safety framework with clearly defined success criteria for model releases.”
Altman’s departure from the Safety and Security Committee comes after five U.S. senators raised questions about OpenAI’s policies in a letter to Altman this summer. Nearly half of OpenAI’s staff who once focused on the long-term risks of AI have left, and former OpenAI researchers have accused Altman of opposing “real” AI regulation in favor of policies that advance OpenAI’s corporate goals.
In that vein, OpenAI has dramatically increased its spending on federal lobbying, budgeting $800,000 for the first six months of 2024, up from $260,000 for all of last year. Altman also joined the Department of Homeland Security’s Artificial Intelligence Safety and Security Board earlier this spring, which provides recommendations for the development and deployment of AI across America’s critical infrastructure.
Even with Altman removed, there is little indication that the Safety and Security Committee will make any tough decisions that seriously affect OpenAI’s commercial roadmap. Tellingly, OpenAI said in May that it would seek to address “valid criticisms” of its work through the committee (“valid criticisms” being, of course, a matter of judgment).
In an opinion piece for The Economist in May, former OpenAI board members Helen Toner and Tasha McCauley said they don’t believe OpenAI as it exists today can be trusted to hold itself accountable. “Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” they wrote.
And OpenAI’s profit incentives are growing.
The company is rumored to be in the midst of a funding round that would raise more than $6.5 billion and value OpenAI at more than $150 billion. To close the deal, OpenAI will likely abandon its hybrid nonprofit corporate structure, which sought to limit investor returns in part to ensure OpenAI remained aligned with its founding mission: developing artificial general intelligence that “benefits all of humanity.”