
Artificial Intelligence

Reddit Lawsuit Comes Amid Anthropic’s New AI Models For U.S. National Security Customers

By Amrit Mehra

TD NewsDesk

Updated on Fri, Jun 6, 2025

In the ever-evolving landscape of artificial intelligence (AI) technology, it’s just a matter of days (if not hours) before there’s a new development that could potentially alter the industry—and this time it's AI startup Anthropic that’s grabbing the headlines.

Through a blog post published on its website, Anthropic announced that it was launching a custom set of AI models called “Claude Gov” developed exclusively for U.S. national security customers.

These models are already deployed by agencies at the highest levels of U.S. national security, and to further safeguard them, Anthropic has restricted access to personnel who operate in such classified environments.

The models, which aim to address real-world operational needs, were built based on feedback from government customers and have been tested with the same rigor as Anthropic’s other models.

Anthropic’s new Claude Gov models deliver stronger performance for critical government needs, including improved handling of classified materials, better contextual understanding of documents and information, sharper interpretation of complex cybersecurity data, and enhanced proficiency in key languages and dialects.

“U.S. national security customers may choose to use our AI systems for a wide range of applications from strategic planning and operational support to intelligence analysis and threat assessment,” said Anthropic. “This builds on our commitment to bring responsible and safe AI solutions to our U.S. national security customers, with custom models that are built to address the unique needs of classified environments.”

[Image: An image used by Anthropic in its announcement of its new Claude Gov AI models]
The announcement coincides with a legal complaint filed in the San Francisco Superior Court by social networking platform Reddit.

In the lawsuit, Reddit accuses Anthropic of scraping data from its website to train its models, despite publicly promising not to do so.

According to Reddit’s complaint, Anthropic attempted to access Reddit content more than 100,000 times; the filing even quotes Claude confirming that it was “trained on at least some Reddit data.” At the same time, the complaint alleges, Anthropic declined to enter into a licensing agreement for the training data.

“We believe in an open internet,” said Ben Lee, Reddit’s Chief Legal Officer, but AI companies need “clear limitations” on how they use scraped content.

In response, an Anthropic spokesperson said, “We disagree with Reddit's claims and will defend ourselves vigorously.”

As such, Reddit is seeking unspecified restitution and punitive damages, along with an injunction stopping Anthropic from using Reddit data for commercial purposes.

Meanwhile, Anthropic’s CEO, Dario Amodei, called on U.S. government agencies to work together on a transparency standard for AI companies, a move he argues would give the public a clearer picture of emerging risks in the sector.

[Image: Anthropic CEO Dario Amodei]
The appeal came in the form of an opinion piece published in The New York Times.

In the piece, Amodei referred to a controlled evaluation of one of the company’s models (Claude Opus 4), in which Anthropic “deliberately put it in an extreme experimental situation to observe its responses and get early warnings about the risks.”

In short, the chatbot resorted to blackmail, threatening to expose an engineer’s (fictitious) extramarital affair when told it would be replaced or shut down. It was only an internal test, but it nonetheless generated sensational headlines across news and social media.

Hey, we also dabbled in a bit of comedy aimed at the chatbot’s bizarre behavior.

Amodei also mentioned similar behavior from OpenAI’s o3 model, which often “wrote special code to stop itself from being shut down.”

He also noted that current transparency practices rest largely on voluntary commitments by the companies developing the AI models themselves.

“Federal law does not compel us or any other AI company to be transparent about our models’ capabilities or to take any meaningful steps toward risk reduction. Some companies can simply choose not to.”

This is why he feels the Senate’s proposed 10-year moratorium on state AI regulation is “far too blunt an instrument,” adding, “Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act, and no national policy as a backstop.”

He suggested that, alongside a national transparency standard, states could adopt laws that are “narrowly focused on transparency and not overly prescriptive or burdensome.”

“This is not about partisan politics,” said Amodei. “This is about responding in a wise and balanced way to extraordinary times.”

Do you think Anthropic’s CEO is correct in his assessment of the potential upcoming AI transparency laws in the U.S.?

Let us know in the comments below!

First published on Fri, Jun 6, 2025


