The OpenAI team responsible for protecting humanity is no more

In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to direct and control future AI systems that could be so powerful that they could lead to the extinction of humanity. Less than a year later, that team was dead.

OpenAI told Bloomberg that the company was “integrating the group more deeply into its research efforts to help the company achieve its safety goals.” But a series of posts from Jan Leike, one of the team’s leaders who recently resigned, revealed internal tensions between the safety team and the larger company.

In posts on X on Friday, Leike said the Superalignment team had been fighting for the resources it needed to complete its research. “Building machines smarter than humans is an inherently dangerous endeavor,” Leike wrote. “OpenAI takes on an enormous responsibility on behalf of all humanity. But in recent years, safety culture and processes have taken a back seat to shiny products.” OpenAI did not immediately respond to a request for comment from Engadget.

Leike’s departure earlier this week came hours after Ilya Sutskever, OpenAI’s chief scientist, announced he was leaving the company. Sutskever was not only one of the leads of the Superalignment team, he also helped co-found the company. Sutskever’s departure came six months after he was involved in the decision to fire CEO Sam Altman over concerns that Altman had not been “consistently candid” with the board. Altman’s brief ouster sparked an internal revolt, with nearly 800 employees signing a letter threatening to resign if Altman was not reinstated. Five days later, Altman was back as CEO of OpenAI, after Sutskever signed a letter saying he regretted his actions.

When it announced the Superalignment team, OpenAI said it would devote 20% of its computing power over the next four years to the problem of controlling powerful future AI systems. “[Getting] this right is essential to achieve our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team had been “struggling with compute and it was becoming increasingly difficult” to carry out crucial AI safety research. “Over the past few months, my team has been sailing against the wind,” he wrote, adding that he had reached “a breaking point” with OpenAI leadership over disagreements about the company’s core priorities.

Departures from the Superalignment team have mounted over the past few months. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.

OpenAI told Bloomberg that its future safety efforts will be led by John Schulman, another co-founder, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4, one of OpenAI’s flagship large language models, will replace Sutskever as chief scientist.

Superalignment wasn’t the only OpenAI team focused on AI safety. In October, the company started a new “preparedness” team intended to stem potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.

Updated, May 17, 2024, 3:28 p.m. ET: In response to a request for comment on Leike’s claims, an OpenAI public relations representative directed Engadget to Sam Altman’s post on X, in which he said he would have something to say in the next few days.
