
Cerebras Launches World's Fastest Deepseek R1 Distill Llama 70B Inference

By Business Wire

New offering is 57x faster than GPUs

SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras Systems, the pioneer in accelerating generative AI, today announced record-breaking performance for DeepSeek-R1-Distill-Llama-70B inference, achieving more than 1,500 tokens per second – 57 times faster than GPU-based solutions. This unprecedented speed enables instant reasoning capabilities for one of the industry's most sophisticated open-weight models, running entirely on U.S.-based AI infrastructure with zero data retention.

"DeepSeek R1 represents a new frontier in AI reasoning capabilities, and today we're making it accessible at the industry’s fastest speeds," said Hagay Lupesko, SVP of AI Cloud, Cerebras. "By achieving more than 1,500 tokens per second on our Cerebras Inference platform, we're transforming minutes-long reasoning processes into near-instantaneous responses, fundamentally changing how developers and enterprises can leverage advanced AI models."

Powered by the Cerebras Wafer Scale Engine, the platform demonstrates dramatic real-world performance improvements. A standard coding prompt that takes 22 seconds on competitive platforms completes in just 1.5 seconds on Cerebras – a 15x improvement in time to result. This breakthrough enables practical deployment of sophisticated reasoning models that traditionally require extensive computation time.
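
The speedup figures above follow directly from the numbers quoted in this release. The short Python sketch below is purely illustrative arithmetic on those figures, not a benchmark script.

```python
# Illustrative arithmetic on the figures quoted in this release (not a benchmark).
gpu_seconds = 22.0        # stated time-to-result on competing platforms
cerebras_seconds = 1.5    # stated time-to-result on Cerebras Inference
speedup = gpu_seconds / cerebras_seconds
print(f"Time-to-result speedup: {speedup:.1f}x")  # ~14.7x, rounded to 15x in the release

cerebras_tokens_per_second = 1500
stated_speedup_vs_gpu = 57
implied_gpu_tps = cerebras_tokens_per_second / stated_speedup_vs_gpu
print(f"Implied GPU baseline: ~{implied_gpu_tps:.0f} tokens/s")  # roughly 26 tokens/s
```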

DeepSeek-R1-Distill-Llama-70B combines the advanced reasoning capabilities of DeepSeek's 671B parameter Mixture of Experts (MoE) model with Meta's widely supported Llama architecture. Despite its efficient 70B parameter size, the model demonstrates superior performance on complex mathematics and coding tasks compared to larger models.

"Security and privacy are paramount for enterprise AI deployment," continued Lupesko. "By processing all inference requests in U.S.-based data centers with zero data retention, we're ensuring that organizations can leverage cutting-edge AI capabilities while maintaining strict data governance standards. Data stays in the U.S. 100% of the time and belongs solely to the customer."

Availability

The DeepSeek-R1-Distill-Llama-70B model is available immediately through Cerebras Inference, with API access available to select customers through a developer preview program. For more information about accessing instant reasoning capabilities for your applications, visit www.cerebras.ai/contact-us.
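
For teams admitted to the developer preview, requests are likely to resemble a standard chat-completions call. The sketch below assumes an OpenAI-compatible endpoint; the base URL, API-key environment variable, and model identifier are assumptions for illustration and should be replaced with the values in Cerebras' developer documentation.

```python
# A minimal sketch of querying the model through an OpenAI-compatible client.
# NOTE: the base URL, environment variable, and model id below are assumptions
# for illustration; use the values from Cerebras' developer preview docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",       # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],      # assumed credential variable
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",       # assumed model identifier
    messages=[
        {"role": "user", "content": "Write a function that merges two sorted lists."}
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)
```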

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s can be quickly and easily clustered to build the largest AI supercomputers in the world, and they make placing models on those supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premises. For further information, visit cerebras.ai or follow us on LinkedIn or X.


Contacts

Media Contact: PR@zmcommunications.com

First published on Fri, Jan 31, 2025


