Two U.S. jury verdicts handed down on March 24 and March 25 have put Meta and Google at the center of a bigger legal battle over Section 230, the decades-old U.S. law that has long helped shield tech platforms from liability over user-generated content. What makes these rulings significant is that both cases targeted platform design and safety decisions, not just the content posted by users.
TL;DR
- A Los Angeles jury ordered Meta and Google to pay a combined $6 million after finding Instagram and YouTube liable for harming a young woman’s mental health.
- A New Mexico jury ordered Meta to pay $375 million after finding it misled users about child safety and enabled exploitation on its platforms.
- Both cases sidestep Section 230 by focusing on platform design and company conduct, rather than third-party content.
- The verdicts could influence thousands of similar lawsuits already pending against major tech platforms.
Why These Verdicts Matter
The California verdict is especially notable because it is one of the first jury decisions to hold Meta and Google liable in a youth social media addiction case.
A Los Angeles jury found both companies negligent, said their conduct was a substantial factor in harming the plaintiff, and concluded they failed to adequately warn users about the dangers of Instagram and YouTube. The damages totaled $6 million, split 70% to Meta and 30% to Google.
A day earlier, jurors in New Mexico found that Meta violated the state's Unfair Practices Act, agreeing with the state's lawyers that the company made false or misleading statements and engaged in unconscionable practices involving children.
The jury awarded $375 million in penalties for thousands of violations, though a judge will still decide in May whether Meta’s platforms created a public nuisance and whether broader remedies should follow.
How Plaintiffs Worked Around Section 230
The biggest takeaway is legal, not just financial. Section 230 has traditionally protected platforms from many lawsuits tied to user-posted material, but these cases took a different route by arguing that the harm came from design choices, recommendation systems, safety decisions, and product functionality.
Courts are increasingly distinguishing between claims about platform conduct and claims that would impose liability for third-party speech.
That distinction could matter far beyond Meta and Google. More than 2,400 related cases are already centralized in California courts, and legal experts believe future appeals could produce the first major appellate ruling on whether Section 230 shields platforms from design-based claims. The implications may also spill into cases involving other youth-focused platforms.
What Meta Said, and What Comes Next
Meta has already signaled it will fight back. The company said it respectfully disagrees with the verdicts and will appeal, while maintaining that it remains committed to building safe environments for young people. Google has also said it plans to appeal the Los Angeles verdict.
For Big Tech, the larger risk is that these rulings may become a roadmap for plaintiffs trying to narrow one of the internet industry’s strongest legal shields.
Even if appeals take time, the message from both juries is already clear: courts may be more willing to examine how platforms are built, how they keep children safe, and whether their design choices can trigger liability in ways Section 230 may not fully block.