
LMArena Secures $100M In Seed Funding To Bring Scientific Rigor To AI Reliability

SAN FRANCISCO, May 21, 2025 /PRNewswire/ -- LMArena, the open community platform for evaluating the best AI models, has secured $100 million in seed funding led by a16z and UC Investments (University of California), with participation from Lightspeed, Laude Ventures, Felicis, Kleiner Perkins and The House Fund. The funding coincides with next week's relaunch of LMArena: a faster, sharper, fully rebuilt platform designed to make AI evaluation more rigorous, transparent, and human-centered.
In a space moving at breakneck speed, LMArena is building something foundational: a neutral, reproducible, community-driven layer of infrastructure that allows researchers, developers, and users to understand how models actually perform in the real world. More than 400 model evaluations have already been conducted on the platform, with over 3 million votes cast, helping shape both proprietary and open-source models across the industry, including those from Google, OpenAI, Meta, and xAI.
"In a world racing to build ever-bigger models, the hard question is no longer what can AI do. Rather, it's how well can it do it for specific use cases, and for whom," said Anastasios N. Angelopoulos, co-founder and CEO at LMArena. "We're building the infrastructure to answer these critical questions."
The relaunched LMArena, debuting next week, reflects months of community feedback and includes a rebuilt UI, mobile-first design, lower latency, and new features such as saved chat history and endless chat. The legacy site will remain live for a while, but all future innovation is happening on lmarena.ai.
"AI evaluation has often lagged behind model development," said Ion Stoica, co-founder at LMArena and UC Berkeley professor. "LMArena closes that gap by putting rigorous, community-driven science at the center. It's refreshing to be part of a team that leads with long-term integrity in a space moving this fast."
Backers say what makes LMArena different is not just the product, but the principles behind it. Evaluation is open, the leaderboard mechanics are published, and all models are tested with diverse, real-world prompts. This approach makes it possible to explore in-depth how AI performs across a range of use cases.
"Our mission has always been to make AI evaluation open, scientific, and grounded in how people actually use these models. As we expand into new modalities and deepen our evaluation tools, we're building infrastructure that doesn't just evaluate AI, it helps shape it" said Wei-Lin Chiang, co-founder and CTO of LMArena. "We're here to ensure AI is reliably measured through real-world use."
LMArena is already working with model providers to help them uncover performance trends, gather human preference data, and test updates in real-world conditions. The company's long-term business model centers on trust: developing advanced analytics and enterprise services while keeping core participation free and open to all.
"We invested in LMArena because the future of AI depends on reliability," said Anjney Midha, General Partner at a16z. "And reliability requires transparent, scientific, community-led evaluation. LMArena is building that backbone." Jagdeep Singh Bachher, chief investment officer at UC Investments, added, "We're excited to see open AI research translated into real-world impact through platforms like LMArena. Supporting innovation from university labs such as those at UC Berkeley is essential for building technologies that responsibly serve the public and advance the field."
Next week's relaunch is a significant step forward, but it's far from the finish line. The team is actively shipping new features, refining the platform, and working closely with the community to shape what comes next.
About LMArena:
LMArena is an open platform where everyone has access to leading AI models and can contribute to their progress through real-world voting and feedback. Built with scientific rigor and transparency at its core, LMArena enables developers, researchers, and users to compare model outputs, uncover performance differences, and advance the reliability of AI systems. With a commitment to open access, reproducible methods, and diverse human judgment, LMArena is shaping the infrastructure layer AI needs to earn long-term trust. Learn more at lmarena.ai.
Press Contact:
Cherry Park
cherry@lmarena.ai
View original content:https://www.prnewswire.com/news-releases/lmarena-secures-100m-in-seed-funding-to-bring-scientific-rigor-to-ai-reliability-302462025.html
SOURCE LMArena
Frequently Asked Questions
What is LMArena?
LMArena is an open platform designed for evaluating AI models. It allows researchers, developers, and users to understand how models perform in real-world scenarios through community-driven feedback and rigorous testing.
How does LMArena evaluate AI models?
LMArena uses real-world prompts and community voting to evaluate AI models. The platform publishes its leaderboard mechanics and ensures models are tested with diverse prompts to explore performance across various use cases.
What is the goal of LMArena?
LMArena's mission is to make AI evaluation open, scientific, and grounded in how people actually use these models, ensuring AI is reliably measured through real-world use.
First published on Thu, May 22, 2025