TechDogs-"OpenAI Internal Systems Faced A Hack In 2023, Sparks Fears"


OpenAI Internal Systems Faced A Hack In 2023, Sparking Fears

By Amrit Mehra

TD NewsDesk

Updated on Fri, Jul 5, 2024

In the world of artificial intelligence (AI) and generative artificial intelligence (GenAI), we recently reported on OpenAI’s former chief scientist, Ilya Sutskever, launching a new AI company, Safe Superintelligence Inc.

The new company came with one goal and one product – a safe superintelligence – allowing Sutskever to build on the importance he placed on AI safety during his time at OpenAI.

Ilya Sutskever’s resignation came alongside that of Jan Leike, his co-lead of OpenAI’s Superalignment team.

Their resignations raised concerns about the company’s efforts to maintain safe and secure practices, leading OpenAI to set up a safety committee headed by CEO Sam Altman.

While that episode may have been addressed, OpenAI now finds itself in the midst of another incident that once again raises questions about its safety and security practices, with a sprinkling of fear about harm from foreign adversaries.

So, what security incident was revealed? Let’s explore!
 

What Was Reported About OpenAI’s Hack?

 
  • According to a report by the New York Times, in early 2023, OpenAI’s internal messaging system was breached by a hacker, who managed to get away with details about the designs of its AI technologies.

  • As per two people familiar with the incident who chose to remain anonymous, the details were picked up from an online forum used by employees to discuss the company’s latest technologies.

  • However, the hacker did not get into the systems where OpenAI builds and stores its artificial intelligence.

  • Furthermore, in April 2023, OpenAI revealed the breach to employees during an all-hands meeting and informed members of its board of directors.

  • The incident wasn’t made public as no data or information about customers or partners was stolen.

  • Additionally, the company did not consider the incident a threat to national security, as it believed the hacker was working alone without any known ties to a foreign government, so it did not inform the FBI or other law enforcement agencies.

  • OpenAI did not provide a comment on the matter.

  • However, the incident sparked fears among some OpenAI employees that foreign bad actors could steal its AI technology, which could eventually threaten US national security.

  • This included concerns raised by then-OpenAI employee Leopold Aschenbrenner.

 

What Did Leopold Aschenbrenner Say?

 
  • Leopold Aschenbrenner, a former OpenAI technical program manager focused on ensuring future AI products weren’t harmful, sent a memo to the company’s board of directors after the breach, saying the company wasn’t doing enough to prevent foreign adversaries from stealing confidential data.

  • In a recent podcast, Aschenbrenner mentioned that OpenAI had recently let him go, citing claims that he had leaked other information outside the company, a dismissal he alleges was politically motivated.

  • He also alleged that OpenAI’s security wasn’t strong enough to withstand infiltration attempts by foreign bad actors.

  • However, Liz Bourgeois, a spokesperson for OpenAI, said, “We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation.”

  • Speaking about other allegations made by Aschenbrenner, Bourgeois said, “While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”

  • Earlier, Matt Knight, OpenAI’s head of security, said, “We started investing in security years before ChatGPT. We’re on a journey not only to understand the risks and stay ahead of them, but also to deepen our resilience.”


TechDogs-"An Image Of OpenAI's Logo On A Colorful Background As Used By The Company"
While OpenAI believes the hack was conducted by an individual with no ties to any foreign government, it wouldn’t necessarily be unreasonable to fear that bad actors with ties to countries like China could attempt something similar.

Yet, under federal and California law, OpenAI can’t prohibit individuals from working at the company based on their nationality, and such a bar could, in itself, hinder the progress of the technology.

OpenAI’s Matt Knight also believes such restrictions would be counterproductive, saying, “We need the best and brightest minds working on this technology ... It comes with some risks, and we need to figure those out.”

At the same time, OpenAI, Google, Anthropic and others are fitting their offerings with guardrails before release in a bid to curb misinformation and other harms.

On the other hand, companies such as Meta are open-sourcing their designs and sharing them with the world, a move that enables engineers and researchers everywhere to discover and fix problems.

Do you think OpenAI should take a similar approach and share its AI technology for the world to explore and use, or should it maintain secrecy, considering its leading position in the market?

Let us know in the comments below!

First published on Fri, Jul 5, 2024
