
Emerging Technology
Workers Say OpenAI Rushed Tests And Failed To Make AI Safe
By Amrit Mehra

Updated on Mon, Jul 15, 2024
These included Jan Leike and Ilya Sutskever, co-leads of OpenAI’s Superalignment initiative, an effort aimed at creating scientific and technical breakthroughs to steer and control AI systems much smarter than humans.
However, less than a year into the program, both co-leads resigned from the company on the same day, citing concerns over OpenAI’s lack of focus on safety for its products.
As per Leike, OpenAI had prioritized “shiny products” over “safety culture and processes.”
While Leike soon joined OpenAI rival Anthropic to continue working on AI safety, Sutskever remained more reserved about his next move, saying updates would come “in due time.”
Just over a month after resigning from the company, Sutskever delivered that update, announcing, “I am starting a new company.”
Sutskever, joined by former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy, announced the creation of Safe Superintelligence Inc. (or SSI for short). The company came with “one goal and one product - a safe superintelligence.”
Essentially, the resignation of the two executives raised questions about OpenAI’s AI safety efforts, which the company addressed by setting up a safety committee led by board members and CEO Sam Altman.
However, two months later, OpenAI’s safety stance is once again being questioned, this time by employees still within the organization.
So, what did the AI giant’s workers reveal? Let’s explore!
What Did OpenAI Employees Say?
- According to a report by The Washington Post, a number of OpenAI employees are questioning the company’s safety practices.
- Despite OpenAI promising the US government it would ensure robust safety testing of new versions of its products, employees say they were rushed and pressured to meet a May launch date set by top executives.
- Three employees, who spoke on the condition of anonymity, said the company wanted to breeze past its new testing protocol designed to prevent its products from causing harm.
- These safeguards include preventing the AI from teaching users how to build bioweapons or enabling the creation of new cyberattack techniques.
- The company even began celebrating its new GPT-4o (“omni”) model, which is intended to power its flagship ChatGPT chatbot.
- The celebrations included a party at one of its San Francisco offices, about which one employee said, “They planned the launch after-party prior to knowing if it was safe to launch ... We basically failed at the process.”
- The safety tests were “squeezed” into a single week, which was said to be sufficient time, even if pressed.
- As per the report, one of the employees said, “We are rethinking our whole way of doing it ... This [was] just not the best way to do it.”
- While employees expected the model to pass the tests, many were unhappy with how the company treated its new safety testing protocol.
- This led many of them to sign an open letter demanding the right to warn regulators and the public about safety risks without being bound by confidentiality agreements.
- At the same time, OpenAI spokesperson Lindsey Held said the company “didn’t cut corners on our safety process, though we recognize the launch was stressful for our teams.” Instead, it “conducted extensive internal and external” tests.

What Did AI Experts Say?
- According to Andrew Strait, associate director at the Ada Lovelace Institute in London and former ethics and policy researcher at Google DeepMind, allowing companies to set their own standards can be risky.
- Strait said, “We have no meaningful assurances that internal policies are being faithfully followed or supported by credible methods.”
- On the other hand, the US government believes Congress needs to create new laws to govern AI and ensure its safety.
- Robyn Patterson, a spokesperson for the White House, said, “President Biden has been clear with tech companies about the importance of ensuring that their products are safe, secure, and trustworthy before releasing them to the public.”
- Patterson added, “Leading companies have made voluntary commitments related to independent safety testing and public transparency, which he expects they will meet.”
In 2023, a wide range of artificial intelligence (AI) industry leaders pledged to the US government that they would improve their safeguards on increasingly powerful generative artificial intelligence (GenAI) models.
The pledge was signed by AI companies such as Anthropic, Google DeepMind, Meta, NVIDIA, Palantir, OpenAI and more.
Recently, we reported that OpenAI’s internal messaging system was breached by a hacker in early 2023, who managed to get away with details about the designs of its AI technologies.
At the same time, the company announced a multi-year content deal and strategic partnership with TIME, which will bring the publication’s extensive archives from the last 101 years to OpenAI’s products, as well as CriticGPT, a new generative AI model powered by GPT-4 that is designed to find errors in GPT-4’s own outputs.
Do you think OpenAI should enforce stricter and more robust security and safety policies considering the popularity of the company’s products?
Let us know in the comments below!
First published on Mon, Jul 15, 2024