OpenAI Executive Jan Leike Resigns, Calls for Stronger AGI Safety Measures


Jan Leike, OpenAI’s head of alignment and leader of its ‘Superalignment’ team, has left the company over concerns that its priorities favor product development over AI safety.

Leike publicly announced his resignation on May 17 in a series of posts on X, the social media platform formerly known as Twitter. He said OpenAI’s leadership had chosen the wrong core priorities and should put greater emphasis on safety and preparedness as AGI development moves forward.

Jan Leike’s Safety Concerns and Internal Disagreements

Leike, who had been with OpenAI for about three years, wrote that the company’s safety culture and processes had taken a backseat to the development of “shiny products”. He also expressed concern about resource allocation, saying his team had struggled to obtain the computing power it needed to carry out important safety research.

“Building machines that are smarter than humans is an inherently risky endeavor,” Leike wrote, underscoring the responsibility OpenAI shoulders on behalf of humanity.

His resignation came at nearly the same time as Ilya Sutskever’s departure. Sutskever, co-leader of the ‘Superalignment’ team and OpenAI’s chief scientist, had announced his resignation a few days earlier. His departure was notable because he co-founded OpenAI and contributed to many of its research projects, including the development of ChatGPT.

Dissolution of the Superalignment Team

Following the resignations, OpenAI has decided to disband the ‘Superalignment’ team and fold its functions into other research efforts within the company. Bloomberg reported that the decision stems from internal restructuring underway since the governance crisis of November 2023, when CEO Sam Altman was temporarily removed and President Greg Brockman lost his board chairmanship.

The ‘Superalignment’ team was created to address the existential risks posed by advanced AI systems and was responsible for developing methods to control and steer superintelligent AI. Its work was considered critical to preparing for the next generations of AI models.

Although the team has been dissolved, OpenAI has pledged that research on long-term AI risks will continue under the direction of John Schulman, who also leads a team working on fine-tuning AI models after training.

OpenAI’s Current Trajectory and Prospects

The resignations of Leike and Sutskever, together with the disbanding of the ‘Superalignment’ team, have brought intense scrutiny to AI safety and governance at OpenAI. They cap a long period of internal tension and disagreement, especially after Sam Altman’s dismissal and subsequent reinstatement.

The departures and restructuring have fueled concerns that OpenAI may be deprioritizing safety as it continues to develop and release advanced AI models. OpenAI recently introduced a new “multimodal” model, GPT-4o, which can interact with people in a more natural, almost human-like way. While the launch showcases OpenAI’s technical capabilities, it also raises ethical questions around privacy, emotional manipulation, and cybersecurity risks.

Despite the upheaval, OpenAI maintains that its central goal remains building AGI safely and for the benefit of humanity. In a post on X, CEO Sam Altman acknowledged Leike’s contributions and reiterated the company’s commitment to AI safety.

“I’m very grateful to @janleike for his great contributions to OpenAI’s alignment research and safety culture, and I am really sad that he is leaving. He’s right we have a lot more work to do; we are determined to do it. I will post my longer version in the next couple of days,” Altman wrote.
