Former OpenAI executives call for more regulation, highlight toxic leadership


Former OpenAI board members are calling for greater government regulation of the company amid criticism of CEO Sam Altman's leadership.

Helen Toner and Tasha McCauley — two of the former board members who voted to oust Altman in November — say their decision to remove him and "save" OpenAI's governance structure was spurred by "long-standing patterns of behavior exhibited by Mr. Altman," which "undermined the board's oversight of key decisions and internal safety protocols."

Writing in an op-ed published by The Economist on May 26, Toner and McCauley argue that Altman's behavior, combined with a reliance on self-governance, is a recipe for AGI disaster.


While both say they joined the company with "cautious optimism" about OpenAI's future, buoyed by the seemingly altruistic motivations of what was at the time an exclusively nonprofit company, they have since called into question the actions of Altman and the company. "Several senior executives had privately raised serious concerns with the board," they wrote, "saying that they believed Mr. Altman was cultivating a 'toxic culture of lying' and engaging in 'behavior [that] can be qualified as psychological violence.'"

"Developments since his return to the company — including his reinstatement to the board and the departure of top safety-focused talent — bode ill for OpenAI's self-governance experiment," they continue. "Even with the best of intentions, without external oversight, this type of self-regulation will eventually become unworkable, especially under the pressure of immense profit incentives. Governments must play an active role."

Looking back, Toner and McCauley write: "If any company could have governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI."



The former board members oppose the current push for self-reporting and relatively minimal external regulation of AI companies as federal laws stagnate. Abroad, AI task forces are already finding fault with relying on tech giants to lead safety efforts. Last week, the EU issued a warning to Microsoft — carrying the threat of billions in fines — after it failed to disclose potential risks from its Copilot and AI-powered image generator. A recent report from the UK's AI Safety Institute found that the safeguards of several of the largest public large language models (LLMs) were easily jailbroken by malicious prompts.

In recent weeks, OpenAI has been at the center of discussions over AI regulation following a series of high-profile resignations by senior employees who cited diverging views on its future. After co-founder Ilya Sutskever, who headed its superalignment team, and the team's co-leader Jan Leike left the company, OpenAI disbanded the internal safety team.

Leike said he was concerned about OpenAI's future because "safety culture and processes have taken a backseat to shiny products."


Altman was also criticized for a then-revealed company off-boarding policy that required departing employees to sign NDAs preventing them from saying anything negative about OpenAI, or risk losing any equity they held in the company.

Shortly after, Altman and president and co-founder Greg Brockman responded to the controversy, writing on X: "The future is going to be harder than the past. We need to keep elevating our safety work to match the stakes of each new model... We also continue to collaborate with governments and many stakeholders on safety. There is no proven playbook for how to navigate the path to AGI."

In the eyes of many former OpenAI employees, the historically “light touch” philosophy of Internet regulation will not be enough.

Topics: OpenAI, Artificial Intelligence