TechDogs-"Microsoft's AI Research Team Accidently Exposes Terabytes Of Sensitive Data!"

Emerging Technology

Microsoft's AI Research Team Accidentally Exposes Terabytes Of Sensitive Data!

By TD NewsDesk

Updated on Wed, Sep 20, 2023

Hey there, tech adventurers!

Microsoft's AI researchers recently made a little blunder, accidentally spilling terabytes of their digital secrets.

Want to know more? Let's dive into the data drama!

Researchers working on artificial intelligence at Microsoft published open-source training data on GitHub, a cloud-based software development service, and in doing so exposed tens of terabytes of sensitive data, including secret keys and passwords.

Wiz, a cloud security company, said that during its investigation into the accidental exposure of cloud-hosted data, it came across a GitHub repository belonging to Microsoft's AI research group.

The repository offered open-source code and AI models for image recognition, directing readers to an Azure Storage URL to download the models. However, Wiz discovered that this URL was configured to grant permissions on the entire storage account, accidentally exposing additional sensitive business information.

The exposed material amounted to 38 terabytes, including backup files from two Microsoft employees' computers, hundreds of Microsoft employees' passwords, secret keys and over 30,000 internal Microsoft Teams messages.

Wiz reports that the URL, which has been publicly accessible since 2020, was incorrectly set to grant "full control" rather than "read-only" permissions, making the exposed data vulnerable to deletion, replacement and injection of malicious content.

Wiz points out that the storage account itself was not compromised. Instead, Microsoft's AI researchers made the URL vulnerable by embedding an overly permissive shared access signature (SAS) token in it. SAS tokens are a mechanism for generating shareable links that grant access to data stored in an Azure Storage account.
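To illustrate the difference, here is a minimal sketch, assuming Python and the azure-storage-blob SDK, of how a narrowly scoped SAS token (read-only, limited to one container, with a short expiry) can be generated. The account, container and file names below are placeholders for illustration, not details from the incident.

from datetime import datetime, timedelta, timezone

# pip install azure-storage-blob
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Placeholder values, not details from the incident.
ACCOUNT_NAME = "exampleaistorage"
CONTAINER_NAME = "published-models"
ACCOUNT_KEY = "<storage-account-key>"

# A narrowly scoped token: read/list only, limited to one container,
# valid for seven days. The leaked token, by contrast, reportedly
# granted "full control" over the entire storage account.
sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER_NAME,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

# The shareable download link is simply the blob URL plus the token.
download_url = (
    f"https://{ACCOUNT_NAME}.blob.core.windows.net/"
    f"{CONTAINER_NAME}/model.ckpt?{sas_token}"
)
print(download_url)

Anyone holding such a link gets exactly the permissions encoded in the token, which is why scoping and expiry matter so much when a link is published in a public repository.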
 
“AI unlocks huge potential for tech companies,” Wiz co-founder and CTO Ami Luttwak told TechCrunch. “However, as data scientists and engineers race to bring new AI solutions to production, the massive amounts of data they handle require additional security checks and safeguards. With many development teams needing to manipulate massive amounts of data, share it with their peers or collaborate on public open-source projects, cases like Microsoft’s are increasingly hard to monitor and avoid.”

What has happened since the Microsoft leak was discovered?

Wiz said it shared its findings with Microsoft on June 22, and Microsoft revoked the SAS token two days later. On August 16, Microsoft announced that it had concluded its investigation into the potential organizational impact.

The Microsoft Security Response Center said in a blog post shared with TechCrunch before publication that "no customer data was exposed, and no other internal services were put at risk because of this issue."

Microsoft stated that, as a direct result of Wiz's findings, it had expanded GitHub's secret scanning service, which monitors all public open-source code changes for the plaintext exposure of credentials and other secrets, to also cover any SAS token that may have overly permissive expiry dates or privileges.
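Conceptually, secret scanning of this kind comes down to pattern-matching code and commits for credential-shaped strings. The sketch below is not GitHub's actual implementation, only a hypothetical Python illustration of how SAS-style query strings (which carry sv=, sp=, se= and sig= parameters) could be detected and flagged when their permissions or expiry look too broad; the heuristics and thresholds are assumptions.

import re
import sys
from datetime import datetime, timezone

# SAS tokens appear as URL query strings carrying, among others, an "sv"
# (signed version), "sp" (permissions), "se" (expiry) and "sig" (signature)
# parameter.
SAS_PATTERN = re.compile(r'''\?(?=[^\s"']*sv=)(?=[^\s"']*sig=)[^\s"']+''')

def flag_risky_sas(text):
    """Yield (token, reasons) for SAS-like strings that look too permissive."""
    for match in SAS_PATTERN.finditer(text):
        token = match.group(0)
        reasons = []
        perms = re.search(r"sp=([a-z]+)", token)
        if perms and set("wdc") & set(perms.group(1)):  # write/delete/create rights
            reasons.append("broad permissions: sp=" + perms.group(1))
        expiry = re.search(r"se=(\d{4}-\d{2}-\d{2})", token)
        if expiry:
            exp = datetime.fromisoformat(expiry.group(1)).replace(tzinfo=timezone.utc)
            if (exp - datetime.now(timezone.utc)).days > 365:  # arbitrary cutoff
                reasons.append("long-lived token: se=" + expiry.group(1))
        yield token, reasons

if __name__ == "__main__":
    # Usage: python scan_sas.py <file> [<file> ...]
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="ignore") as handle:
            for token, reasons in flag_risky_sas(handle.read()):
                print(path, reasons or ["SAS token found"])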

With terabytes of secrets out in the wild, will Microsoft be able to hit Ctrl+Z on this data debacle? Should AI businesses be required to follow stricter security standards for open-source projects?

Let us know your thoughts in the comments section below!

First published on Wed, Sep 20, 2023

Enjoyed what you read? Great news – there’s a lot more to explore!

Dive into our content repository of the latest tech news, a diverse range of articles spanning introductory guides, product reviews, trends and more, along with engaging interviews, up-to-date AI blogs and hilarious tech memes!

Also explore our collection of branded insights via informative white papers, enlightening case studies, in-depth reports, educational videos and exciting events and webinars from leading global brands.

Head to the TechDogs homepage to Know Your World of technology today!

Disclaimer - Reference to any specific product, software or entity does not constitute an endorsement or recommendation by TechDogs nor should any data or content published be relied upon. The views expressed by TechDogs’ members and guests are their own and their appearance on our site does not imply an endorsement of them or any entity they represent. Views and opinions expressed by TechDogs’ Authors are those of the Authors and do not necessarily reflect the view of TechDogs or any of its officials. All information / content found on TechDogs’ site may not necessarily be reviewed by individuals with the expertise to validate its completeness, accuracy and reliability.

Tags:

Artificial Intelligence (AI), Microsoft, AI Researchers, Data Exposure, Sensitive Information, Tech Mishap, Cybersecurity, Digital Blunder, Data Breach
