
IT Infrastructure

Tachyum Demonstrates Full BF16 AI Support In GCC And PyTorch

By Business Wire

LAS VEGAS--(BUSINESS WIRE)--#Linux--Tachyum® today announced that it has successfully integrated the BF16 data type into its Prodigy® compiler and software distribution, which is now available to early adopters and customers as a pre-installed image as part of beta testing.

BF16, or bfloat16, is a 16-bit floating point data type derived from the IEEE 754 32-bit single-precision type (FP32): it keeps the sign bit and the full 8-bit exponent but truncates the mantissa from 23 bits to 7. It is used to accelerate machine learning by halving storage and bandwidth requirements relative to FP32 and increasing the calculation speed of ML algorithms, while preserving FP32's dynamic range. Tachyum now fully supports BF16 in GCC 13.2 (GNU Compiler Collection), in the Eigen HPC/linear algebra library optimized for the Prodigy Universal Processor, and in the PyTorch AI framework.
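As a sketch of the idea behind the format (illustrative only, not Tachyum's implementation), BF16 can be modeled in plain Python: take a float32's raw bits, keep the sign and 8-bit exponent, and round the mantissa down from 23 bits to 7 by dropping the low 16 bits.

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Convert a Python float to a bfloat16 bit pattern (round-to-nearest-even)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))  # raw float32 bits
    # Add half of the dropped range, plus the LSB of the kept part for ties-to-even.
    # (Edge cases such as NaN payloads are ignored in this sketch.)
    rounded = bits + 0x7FFF + ((bits >> 16) & 1)
    return (rounded >> 16) & 0xFFFF  # sign + 8-bit exponent + top 7 mantissa bits

def bf16_bits_to_f32(b: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 by appending 16 zero bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

print(hex(f32_to_bf16_bits(1.0)))                      # 0x3f80
print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159265)))  # 3.140625
```

Because the exponent field is unchanged, conversion to and from FP32 is just a 16-bit shift, which is why hardware and compilers can support BF16 so cheaply.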

Tachyum’s Prodigy was designed from the ground up to handle matrix and vector processing rather than adding it as an afterthought. Among Prodigy’s vector and matrix features are support for a range of data types (FP64, FP32, TF32, BF16, Int8, FP8, FP4 and TAI); two 1024-bit vector units per core; AI sparsity and super-sparsity support; and no penalty for misaligned vector loads or stores when crossing cache lines. This built-in support delivers high performance for AI training and inference workloads while reducing memory utilization.

“We continue to strengthen our software distribution package to ensure the greatest breadth of application, framework and library support for Prodigy in advance of its release,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “The use of BF16 improves both hardware efficiency and performance. Our support of the format is consistent with our goal of having Prodigy provide the performance required by hyperscale, high-performance computing and AI workloads without modifications, and affirms our commitment to transforming data centers around the world.”

As a Universal Processor offering industry-leading performance for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains (such as AI/ML, HPC, and cloud) with a single homogeneous architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and up to 6x for AI applications.

A video demonstration of image classification with a ResNet model, using the native PyTorch implementation on Tachyum Linux running on a Prodigy emulation system, is available for viewing. The demonstrated ResNet model was quantized to the BF16 data type to take advantage of Prodigy's BF16 vector instructions, particularly in activation, loss and reduction functions. The next video will demonstrate the completion of FP8 testing.
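The demo model itself is not public. As an illustrative sketch of the general technique (assuming a stock PyTorch install rather than Tachyum's port, and a toy linear layer standing in for the ResNet), a module can be cast so that its weights and activations use `torch.bfloat16`:

```python
import torch

# Illustrative only: a small linear layer stands in for the ResNet from the demo.
layer = torch.nn.Linear(16, 4).to(torch.bfloat16)   # cast weights to bfloat16
x = torch.randn(2, 16, dtype=torch.bfloat16)        # bfloat16 activations
with torch.no_grad():
    out = layer(x)
print(out.dtype)  # torch.bfloat16
```

On hardware with native BF16 vector instructions, such a cast lets the framework dispatch matrix multiplies directly in the narrower format instead of computing in FP32.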


About Tachyum

Tachyum is transforming the economics of AI, HPC, public and private cloud workloads with Prodigy, the world’s first Universal Processor. Prodigy unifies the functionality of a CPU, a GPU, and a TPU in a single processor to deliver industry-leading performance, cost and power efficiency for both specialty and general-purpose computing. As global data center emissions continue to contribute to a changing climate, with projections of their consuming 10 percent of the world’s electricity by 2030, the ultra-low power Prodigy is positioned to help balance the world’s appetite for computing at a lower environmental cost. Tachyum recently received a major purchase order from a US company to build a large-scale system that can deliver more than 50 exaflops performance, which will exponentially exceed the computational capabilities of the fastest inference or generative AI supercomputers available anywhere in the world today. When complete in 2025, the Prodigy-powered system will deliver a 25x multiplier vs. the world’s fastest conventional supercomputer – built just this year – and will achieve AI capabilities 25,000x larger than models for ChatGPT4. Tachyum has offices in the United States and Slovakia. For more information, visit


Mark Smith
JPR Communications

First published on Tue, May 28, 2024


Disclaimer - Reference to any specific product, software or entity does not constitute an endorsement or recommendation by TechDogs nor should any data or content published be relied upon. The views expressed by TechDogs’ members and guests are their own and their appearance on our site does not imply an endorsement of them or any entity they represent. Views and opinions expressed by TechDogs’ Authors are those of the Authors and do not necessarily reflect the view of TechDogs or any of its officials. All information / content found on TechDogs’ site may not necessarily be reviewed by individuals with the expertise to validate its completeness, accuracy and reliability.


Tags: Computer Products, Tachyum, AI, HPC, Software Distribution, Data Center
