It did not even last a year. Last July, OpenAI announced the creation of a team focused on the risks involved in the development of AGI, or Artificial General Intelligence: a type of AI with the ability to understand, learn and apply knowledge in a way comparable to human intelligence. Led by OpenAI co-founder Ilya Sutskever and by researcher Jan Leike, the team has now been dissolved by OpenAI amid complaints that it was not given enough resources. In recent days, both Sutskever and Leike have announced their departure from Sam Altman's company.
This department was tasked with creating safety measures for advanced general intelligence systems that 'could lead to humanity's loss of power or even human extinction', according to a post on the OpenAI blog last July. This is what the company calls superalignment, and it aims to make an AI behave in a manner aligned with human values, safely and reliably, even in unforeseen and complex situations. 'Currently, we do not have a solution to steer or control a potentially superintelligent AI and prevent it from going rogue,' the company said at the time.
'OpenAI takes on an enormous responsibility on behalf of all humanity. But in recent years, safety culture and processes have taken a backseat to shiny products. We should have started taking the implications of AGI incredibly seriously a long time ago,' Leike said in a series of posts on X when announcing his departure from the company this Friday.
Both Leike's and Sutskever's departures came in the days after OpenAI presented its new AI model, GPT-4o. According to CNBC, some members of the superalignment team are being transferred to other departments within the company. Both that outlet and Wired report complaints from several OpenAI employees that the company did not allocate enough computing resources to the superalignment team.
'I have been at odds with OpenAI leadership over the company's core priorities for quite some time, until we finally reached a breaking point,' Leike wrote. 'Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.'
Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.
— Jan Leike (@janleike) May 17, 2024
Regarding Leike's departure, Altman said on X that 'I am very grateful for @janleike's contributions to alignment research and OpenAI's safety culture, and I am very sad to see him go. He is right, we have much more to do; we are committed to doing it. I'll have a longer post in the next few days.'
I'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days.
🧡 https://t.co/t2yexKtQEk
—Sam Altman (@sama) May 17, 2024
In its announcement about the formation of the safety team last July, OpenAI said it was dedicating 20% of its available computing power to long-term safety measures and hoped to solve the problem within four years.
Sutskever has not given any explanation for his departure from the company, although he stated on X that he was 'confident that OpenAI will build (AGI) that is safe and beneficial' under Altman's leadership.
Sutskever was one of the four OpenAI board members who took part in the attempt to oust Altman from the company last fall, an effort that ended with Altman's reinstatement as CEO a few days after his dismissal.
These have not been the only departures from the now-dissolved superalignment team in recent months. As The Information reported last month, Leopold Aschenbrenner and Pavel Izmailov, two researchers on the team, were fired for leaking company secrets. And according to Ars Technica, Cullen O'Keefe left his position as research lead on policy frontiers in April, and Daniel Kokotajlo, an OpenAI researcher who has co-authored several papers on the dangers of more capable AI models, 'left OpenAI due to loss of confidence that it would behave responsibly in the age of AGI.'
OpenAI has noted that the superalignment team's work will be 'more deeply integrated' into its other research teams, a change that is already underway. Research into the risks associated with more powerful models will now be led by John Schulman, who co-leads the team responsible for fine-tuning AI models after training.