
Artificial Intelligence

OpenAI To Secure AI Models With ID Verification As Bad Actors Leverage LLMs

By TD NewsDesk

Updated on Mon, Apr 14, 2025

It’s been almost two and a half years since OpenAI’s generative artificial intelligence (GenAI) chatbot ChatGPT debuted.

Since then, businesses have enthusiastically embraced artificial intelligence (AI), doubling, tripling, or even quadrupling down on its use across their organizations and reworking almost every task, process, and workflow.

Despite its widespread adoption, one central question remains: How safe is AI?

Since the technology went viral, experts have been pushing for stronger guardrails to ensure AI doesn't cause undue harm to people, a concern that has largely taken a back seat for AI companies.

While OpenAI has been rather sluggish when it comes to enforcing safety measures, it’s finally taking some concrete steps.

Introducing API Organization Verification.

According to a support page published by OpenAI, the move comes in response to some users intentionally violating the company's usage policies. As the page puts it, "At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely."

The Verified Organization status comes with a verification process to “mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”

As per the company, organizations can complete the verification process in a few minutes at no additional cost. However, access to OpenAI's newest advanced models and additional capabilities will require verified status.

To become verified, an organization must share a valid government-issued ID from a supported country (currently over 200 countries), and said ID must not have been used recently to verify another organization—each ID can verify only one organization every 90 days.

At the same time, the AI leader clarified that not all organizations are currently eligible for verification; those that aren't can complete the process once it becomes available to them. Alternatively, OpenAI mentioned, "Models requiring verification today might become available to all customers in the future, even without verification."
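For developers, the practical consequence is that API calls to a gated model may be rejected until their organization is verified. The snippet below is a minimal sketch using OpenAI's Python SDK (v1.x); the model name is a placeholder, and the assumption that unverified access surfaces as a 403 permission error is ours rather than OpenAI's documented behavior.

```python
from openai import OpenAI, PermissionDeniedError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    # "some-gated-model" is a placeholder for any model that requires
    # Verified Organization status; it is not a real model name.
    response = client.chat.completions.create(
        model="some-gated-model",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except PermissionDeniedError as err:
    # Assumption: an unverified organization calling a gated model receives
    # a 403 permission error. Verification itself is completed in the
    # OpenAI platform dashboard, not through the API.
    print(f"Access denied; the organization may need verification: {err}")
```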

TechDogs-"An Image Of OpenAI's Logo On A Smartphone On A Laptop Keyboard"
While it is unclear exactly what actions inspired this move, experts believe it might be linked to OpenAI's accusations that DeepSeek's developers used OpenAI's API and data to train their models.

While infringing OpenAI’s (or any developers’) intellectual property (IP) is certainly a serious problem, the AI world is dealing with another persistent problem—AI-powered and AI-empowered cybercriminals that are wreaking havoc.

From a jungle of threats comes AkiraBot, a Python-based framework that targets the contact forms and chat widgets of small to medium-sized business websites.

According to a report by SentinelOne’s SentinelLABS, this bot uses OpenAI’s ChatGPT to generate unique messages that help it bypass spam filters. Each message is tailored to the purpose of the targeted website.

The goal? To sell dubious Search Engine Optimization (SEO) services, primarily under the brand names "Akira" and "ServiceWrap." The former inspired the bot's name, though it is important to note that AkiraBot is not related to the ransomware group Akira.

Older archives show the bot referred to as "ShopBot," suggesting it originally targeted websites built on Shopify. As it evolved, however, the bot began targeting websites built with GoDaddy, Wix, Squarespace, and others.

AkiraBot includes modifications that allow it to mimic real user behavior, enabling it to evade CAPTCHAs such as hCAPTCHA and reCAPTCHA. To sidestep network detections and IP-based restrictions, and to diversify the sources its traffic originates from, the bot uses SmartProxy.

So far, the bot has targeted over 420,000 unique domains and has successfully spammed at least 80,000 websites since September 2024, while failing on roughly 11,000 attempts.

Once OpenAI was notified of the activity by SentinelLABS, it disabled the API key associated with AkiraBot.

TechDogs-"An Image Depicting An Example Of AkiraBot's Message"
Meanwhile, an older problem persists: AI just can't seem to get rid of hallucinations, and they're only getting worse.

The latest bout comes courtesy of AI coding assistants: security and academic researchers found that code-generation models routinely invent package names, with commercial models hallucinating packages in roughly 5.2% of cases and open models in as many as 21.7%.

Ordinarily, running such code would simply produce an error, as the invented package doesn't exist. However, this is where bad actors have found a new avenue for mischief: they create malicious packages under the hallucinated names and upload them to official repositories.

Then, as AI code generators keep hallucinating the same non-existent packages, developers who install them end up deploying the malicious code.

This type of occurrence has a name, "slopsquatting," recently coined by Seth Larson, security developer-in-residence at the Python Software Foundation (PSF). It builds on the word "typosquatting," which refers to bad actors duping people by registering typos or near-variants of common names.

“We're in the very early days looking at this problem from an ecosystem level,” said Larson. “It's difficult, and likely impossible, to quantify how many attempted installs are happening because of LLM hallucinations without more transparency from LLM providers. Users of LLM generated code, packages, and information should be double-checking LLM outputs against reality before putting any of that information into operation, otherwise there can be real-world consequences.”

However, Larson notes that there can be various reasons a developer might try to install a package that doesn't exist, including mistyped package names, internal packages that also exist on public indexes, differences between package and module names, and more.
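As a worked example of that double-checking, the sketch below looks each suggested package name up on PyPI's public JSON API (https://pypi.org/pypi/<name>/json) before anything is installed. It is only a first-pass check: a slopsquatted package exists on the index by design, so a name that does exist but is very new, has few releases, or is unfamiliar still deserves manual review.

```python
import json
import sys
import urllib.error
import urllib.request


def pypi_metadata(name: str):
    """Return PyPI's JSON metadata for a package, or None if it isn't listed."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None  # a 404 means no package by this name exists on PyPI


if __name__ == "__main__":
    # Usage: python check_packages.py requests reqeusts some-suggested-pkg
    for name in sys.argv[1:]:
        meta = pypi_metadata(name)
        if meta is None:
            print(f"{name}: not on PyPI (possibly a hallucinated name)")
        else:
            releases = meta.get("releases", {})
            print(f"{name}: exists on PyPI, {len(releases)} release(s), "
                  f"latest version {meta['info']['version']} - review before installing")
```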

Do you think the unprecedented growth and unchecked nature of AI could end up causing more problems than solutions?

Let us know in the comments below!

First published on Mon, Apr 14, 2025
