
Multiverse Computing And Cerebrium Bring Compressed AI To The Cloud, Creating A Blueprint For Economically Sustainable AI At Scale

GlobeNewswire

New partnership pairs Multiverse’s quantum-inspired AI compression technology with Cerebrium’s serverless scaling infrastructure to deliver 12x faster inference and 90% smaller AI models, reducing infrastructure overhead

SAN SEBASTIÁN, Spain, Dec. 02, 2025 (GLOBE NEWSWIRE) -- As large language models continue to get larger and the scarcity of compute resources drives up development costs, AI is becoming increasingly cost-prohibitive for many enterprises. Today, Multiverse Computing, the leader in quantum-inspired AI model compression, and Cerebrium, an elastic, serverless AI infrastructure platform, announced a strategic partnership designed to alleviate this burden and create a new foundation for economically sustainable AI. Together, the technologies form a unified system that optimizes GPU utilization, minimizes latency, and lowers cost per inference across the full AI deployment lifecycle, from prototyping to production.

“Compute costs remain one of the biggest barriers to AI progress, setting artificial limits on its societal impact,” said Enrique Lizaso, cofounder and CEO of Multiverse Computing. “We’re fusing efficiency and scale so innovation can finally take off without running into a cost wall. Together, our partnership proves that performance and affordability can coexist, unlocking faster, more accessible, and more sustainable AI for everyone.”

At the heart of the partnership is a seamless pipeline between Multiverse’s CompactifAI model compression engine and Cerebrium’s dynamic container scaling, which can expand to thousands of GPUs near-instantly. The joint solution enables enterprises to deploy high-performance models that run up to 12x faster, consume up to 80% fewer compute resources, and scale globally in seconds, without sacrificing accuracy or availability.

“Organizations around the world want to take advantage of AI, but very few can actually afford to do it at scale,” said Michael Louis, founder and CEO of Cerebrium. “With Multiverse’s compression engine shrinking the computational footprint and our infrastructure expanding elastically to meet demand, we’re chipping away at the last technical and economic barriers between innovation and real-world deployment.”

The partnership also reflects a broader shift in how the AI industry defines progress, moving from sheer scale and parameter counts to a new standard that values efficiency, speed, and economic viability as a measure of performance.

Customers can now leverage Cerebrium’s elastic orchestration engine and CompactifAI models through private deployments. To learn more, reach out to Multiverse Computing at sales@multiversecomputing.com.

About Multiverse Computing

Multiverse Computing is the leader in quantum-inspired AI model compression. The company’s deep expertise in quantum software and AI led to the development of CompactifAI, a revolutionary AI model compression engine. CompactifAI compresses LLMs by up to 95% with only 2-3% precision loss. CompactifAI models reduce computing requirements and unleash new use cases for AI across industries.

Multiverse Computing is headquartered in Donostia, Spain, with offices across Europe, the U.S., and Canada. With over 160 patents and 100 customers globally, including Iberdrola, Bosch, and the Bank of Canada, Multiverse Computing has raised c.$250M to date from investors including Bullhound Capital, HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Toshiba, and Santander Climate VC. For more information, visit multiversecomputing.com.

About Cerebrium

Cerebrium is a serverless infrastructure platform that makes it easy for engineering teams to build and scale AI applications. The platform delivers low startup times, multi-region deployments for low latency and data residency, support for over a dozen GPU types, and can scale to thousands of containers in seconds. Backed by Gradient (Google’s AI fund) and Y Combinator, Cerebrium powers real-time AI workloads for leading teams such as Tavus, Deepgram, and Vapi.

Media Contact
LaunchSquad for Multiverse Computing
multiverse@launchsquad.com

Frequently Asked Questions

What is the core of the partnership between Multiverse Computing and Cerebrium?

The partnership combines Multiverse's quantum-inspired AI model compression technology (CompactifAI) with Cerebrium's serverless scaling infrastructure to address the high costs and resource demands of large language models, making AI more economically sustainable.

What are the key benefits of this joint AI solution?

Customers can achieve up to 12x faster AI inference, 90% smaller AI models, 80% fewer compute resources, and global scalability in seconds, all while maintaining accuracy and availability and significantly reducing infrastructure overhead.

How does the combined technology work?

Multiverse's CompactifAI engine compresses AI models, shrinking their computational footprint. Cerebrium's dynamic container scaling then provides elastic, serverless infrastructure to efficiently deploy and scale these optimized models, minimizing latency and cost per inference.

First published on Mon, Dec 8, 2025


