TL;DR
- Meta fined $375 million for violating consumer protection laws
- Jury found 75,000 violations tied to deceptive practices
- Case stems from undercover investigation exposing child exploitation risks
- Meta plans to appeal and denies wrongdoing
- Second trial phase could impose stricter platform safeguards
Meta Found Liable For Misleading Users On Platform Safety
A New Mexico jury concluded that Meta knowingly engaged in deceptive practices by presenting Facebook, Instagram, and WhatsApp as safe for children while failing to curb harmful content.
The verdict followed a six-week trial and less than a day of jury deliberation. Jurors identified 75,000 violations under the state’s consumer protection law, assigning $5,000 per violation to reach a total penalty of $375 million.
State Attorney General Raúl Torrez described the ruling as a historic win for families affected by online harm, stating it sends a clear message that no company is beyond legal accountability.
Meta pushed back on the outcome, stating, “We respectfully disagree with the verdict and will appeal,” while adding that it continues efforts to improve safety and address harmful content challenges.
Undercover Investigation Exposed Child Safety Failures
The case stemmed from a 2023 undercover operation where investigators created accounts posing as users under 14. These accounts were quickly exposed to explicit material and contacted by adults seeking similar content, leading to multiple arrests.
Prosecutors argued this demonstrated systemic failures in Meta’s safeguards. The state also claimed that internal company documents acknowledged risks related to sexual exploitation and mental health, yet the company failed to intervene in any meaningful way.
Officials further alleged Meta failed to implement basic safety measures such as effective age verification, while continuing to assure users that its platforms were safe for children.
Platform Design And Internal Warnings Came Under Scrutiny
The lawsuit also focused on product design choices such as infinite scroll and auto-play features, which were said to encourage prolonged engagement and addictive behavior among younger users.
Testimonies from former employees suggested internal warnings about these risks were raised but not prioritized. Evidence presented during the trial indicated that engagement-driven algorithms could inadvertently connect predators with minors.
The jury ultimately ruled that Meta’s actions were not only deceptive but also unconscionable, meaning the company knowingly took advantage of users who lacked awareness of these risks.
More Legal Challenges Ahead As Second Trial Phase Nears
The financial penalty may be only the beginning of Meta’s legal troubles in the state. A second phase of the trial, scheduled for May, will examine whether the company created a broader public nuisance affecting residents’ health and safety.
The state plans to seek court-mandated changes, including stronger age verification systems and more aggressive removal of harmful actors. Additional financial penalties may also follow.
Despite the ruling, Meta’s shares rose slightly in after-hours trading, signaling continued investor confidence. However, the verdict marks the first jury decision of its kind against the company over youth-related harm, potentially setting a precedent for similar cases across the country.