TechDogs-"SoundCloud & Amazon’s AI Training Push Raise Questions Amid Copyright Office Director’s Dismissal"

Artificial Intelligence

SoundCloud & Amazon’s AI Training Push Raise Questions Amid Copyright Office Director’s Dismissal

By TechDogs Bureau

TD NewsDesk

Updated on Mon, May 12, 2025

We’ve all had that moment when technology surprises us in ways we didn’t expect—whether it’s your smartphone finishing your text for you, or a virtual AI assistant making a decision easier.

So, what happens when technology goes beyond convenience and enters the realm of ethics, responsibility, and governance?

This week, AI took center stage in ways that made us rethink how it’s evolving. From tech giants amending policies to robots that can “feel.” Yes, the AI plot thickens!

If this week had a theme, it would be “AI meets accountability,” as the latest business moves and government scrutiny have become more intense.

Let’s break down what happened!  


SoundCloud Responds To AI Training Backlash


SoundCloud, one of the leading music streaming services, clarified that it has never used artists’ content to train its AI models, nor does it allow third parties to scrape or train on content uploaded to its platform.

The controversy began when SoundCloud’s Terms of Service were quietly updated in February 2024 with language that appeared to permit uploaded content to be used for AI training, triggering concerns from musicians, AI ethicists, and other creators. The change was flagged by technology ethicist Ed Newton-Rex, who said he hoped “they address this asap.”

TechDogs-"A Screenshot Of A Tweet By Ed Newton-Rex"
According to Marni Greenberg, SVP at SoundCloud, the update was meant to cover internal uses of AI, such as recommendation engines, fraud detection, and content organization, and the company rejected claims that uploads were being used to train generative AI models.

SoundCloud says it is committed to introducing clear opt-out mechanisms if it ever plans to train generative AI on creator uploads. For now, its main safeguard is a “no AI” tag on the site, intended to prevent unauthorized model training by external parties.
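
To picture how such a tag could work, here’s a minimal, hypothetical sketch in Python: a pipeline step that skips any track whose creator has set a “no AI” flag before building a training corpus. The Track type, field names, and logic are illustrative assumptions on our part, not SoundCloud’s actual metadata or API.

# Hypothetical sketch: honoring a per-track "no AI" opt-out flag before
# building a training corpus. Field names are invented for illustration
# and do not reflect SoundCloud's actual metadata schema.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    title: str
    no_ai: bool  # assumed per-track opt-out flag set by the creator

def collect_training_candidates(tracks: list[Track]) -> list[Track]:
    """Return only tracks whose creators have NOT opted out of AI training."""
    return [t for t in tracks if not t.no_ai]

tracks = [
    Track("t1", "Demo Song", no_ai=True),   # creator opted out: excluded
    Track("t2", "Open Jam", no_ai=False),   # no opt-out set: eligible
]

print([t.title for t in collect_training_candidates(tracks)])  # ['Open Jam']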

Yet, creators remain skeptical, with some highlighting the lack of communication about the updated terms, despite the platform’s promise to notify users of significant changes.

Still, SoundCloud emphasized its stance: AI should be developed responsibly, guided by “consent, attribution, and fair compensation.” Meanwhile, some technology companies have moved well past that debate and are preparing to deploy robots built on some genuinely novel AI training.


Amazon’s Vulcan AI Robot Brings A Sense Of Touch To Warehousing


Amazon recently revealed Vulcan, a next-gen warehouse robot with touch-sensitive capabilities, at its Delivering the Future event in Germany.

TechDogs-"Amazon’s Vulcan Robot Using Vision and Tactile Sensors to Pick Items from Warehouse Shelves"
Unlike earlier machines that relied only on computer vision, Vulcan uses force sensors, tactile AI, and stereo vision to navigate cluttered spaces and handle objects gently.

Its arms are equipped with cameras, pressure sensors, and suction cups, letting Vulcan carefully pick, stow, and retrieve items while adapting in real time, much as a human would. It can handle 75% of Amazon’s warehouse inventory and detects resistance to avoid damaging items or disturbing nearby products.

Amazon describes Vulcan as a “physical AI” breakthrough: its models are trained on physical data that incorporates touch and force feedback, helping the robot learn from real-world object interactions and adjust its behavior accordingly.
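
To get a feel for the control idea, here’s a minimal Python sketch of force-feedback grasping: close the gripper in small steps, stop once sensed resistance suggests a secure grip, and abort if force exceeds a safety ceiling. This is in spirit only; every threshold, sensor model, and function name below is a hypothetical stand-in, not Amazon’s actual Vulcan implementation.

# Hypothetical sketch of force-feedback grasping, for illustration only.
# Thresholds and the toy sensor model are invented; they do not reflect
# Amazon's actual Vulcan control stack.

MAX_SAFE_FORCE_N = 15.0   # assumed ceiling before we risk damaging the item
CONTACT_FORCE_N = 2.0     # assumed reading that indicates a secure grip
GRIP_STEP_MM = 0.5        # assumed gripper travel per control step

def grasp_with_force_feedback(read_force_n, close_gripper_mm, max_steps=40):
    """Close the gripper gradually, stopping when measured resistance
    indicates a secure grip, or aborting if force exceeds the safe limit."""
    for _ in range(max_steps):
        force = read_force_n()          # force/tactile sensor reading (newtons)
        if force > MAX_SAFE_FORCE_N:
            return "abort"              # too much resistance: re-plan the grasp
        if force >= CONTACT_FORCE_N:
            return "grasped"            # enough contact force: object secured
        close_gripper_mm(GRIP_STEP_MM)  # no firm contact yet: keep closing gently
    return "missed"                     # never made contact within the step budget

# Toy simulation: resistance ramps up once the gripper has traveled 5 mm.
travel = {"mm": 0.0}
def fake_sensor():
    return max(0.0, (travel["mm"] - 5.0) * 2.0)
def fake_gripper(step_mm):
    travel["mm"] += step_mm

print(grasp_with_force_feedback(fake_sensor, fake_gripper))  # -> grasped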

Rather than replacing jobs, Amazon says Vulcan is meant to support staff by reducing repetitive tasks. New roles, such as robotics monitors and reliability maintenance engineers, are also being created, with training programs in place to upskill employees.

Amazon’s CEO, Andy Jassy, praised Vulcan as a landmark in combining “sight and touch” in AI-powered robotics, marking a shift toward smarter, human-like machines in logistics.

While Vulcan shows a positive side of training AI on larger, purpose-built datasets, another development has raised questions about the fair use of AI training data.


U.S. Copyright Office Director Fired Over AI Training Report


In a move being called unprecedented and politically charged, Shira Perlmutter, Director of the U.S. Copyright Office, was fired days after her department released a report questioning “fair use” in AI training.

The report, part three of an AI copyright study, challenged the assumption that using copyrighted material to train generative models can always fall under fair use. It noted that while AI training for research may be permissible, mass commercial use—especially when accessing data illegally—crosses a legal line.

The report said, “But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”

Critics say Perlmutter’s firing came after she refused to greenlight Elon Musk’s efforts to mine copyrighted works. Musk had previously called for governments to “delete all IP law,” potentially enabling AI models to train on any data they could scrape.

Librarian of Congress Carla Hayden, who originally appointed Perlmutter, was also dismissed the same week. Lawmakers, including Rep. Joe Morelle, condemned the decision, calling it a “brazen power grab” and a blow to copyright protection in the AI era.

The Copyright Office recommended expanding licensing frameworks so copyright holders can benefit when their work is used for AI training, rather than sidelining creators to the advantage of AI companies.


Is The AI Training Dilemma Getting Heated?


From SoundCloud promising transparency (after not being too transparent) and Amazon’s Vulcan learning to “feel” its way around goods with physical data, to the Copyright Office’s leadership being upended over AI regulation, it’s clear we’re in an AI race where innovation often outpaces policy.

The question is no longer “Can we do it?” but “Should we?”—and who gets to decide whose data falls under fair use?

As AI tools that empower creators and workers get smarter, establishing clear fair-use boundaries to protect copyrighted work becomes critical.

So, will these moves help rein in rampant AI training, or do they signal a shift that will mostly benefit Big Tech?

Let us know what you think in the comments below!

First published on Mon, May 12, 2025

