

VeriSilicon's Scalable High-Performance GPGPU-AI Computing IPs Empower Automotive and Edge Server AI Solutions

By Business Wire


Provides AI acceleration with high computing density, multi-chip scaling, and 3D-stacked memory integration

SHANGHAI--(BUSINESS WIRE)--#AI--VeriSilicon (688521.SH) today announced the latest advancements in its high-performance and scalable GPGPU-AI computing IPs, which are now empowering next-generation automotive electronics and edge server applications. Combining programmable parallel computing with a dedicated Artificial Intelligence (AI) accelerator, these IPs offer exceptional computing density for demanding AI workloads such as Large Language Model (LLM) inference, multimodal perception, and real-time decision-making in thermally and power-constrained environments.

VeriSilicon’s GPGPU-AI computing IPs are based on a high-performance General Purpose Graphics Processing Unit (GPGPU) architecture with an integrated dedicated AI accelerator, delivering outstanding computing capabilities for AI applications. The programmable AI accelerator and sparsity-aware computing engine accelerate transformer-based and matrix-intensive models through advanced scheduling techniques. These IPs also support a broad range of data formats for mixed-precision computing, including INT4/8, FP4/8, BF16, FP16/32/64, and TF32, and are designed with high-bandwidth interfaces for 3D-stacked memory, LPDDR5X, and HBM, as well as PCIe Gen5/Gen6 and CXL. They are also capable of multi-chip and multi-card scale-out expansion, offering system-level scalability for large-scale AI application deployments.
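The low-bit integer formats in the list above trade numerical precision for computing density. As a rough illustration of what INT8 support involves, here is a minimal pure-Python sketch of symmetric per-tensor INT8 quantization; the function names are illustrative and not part of VeriSilicon's toolchain, which performs the equivalent mapping in hardware at matrix granularity:

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid a zero scale
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the INT8 representation."""
    return [q * scale for q in quantized]

# Each weight is stored in one byte plus one shared scale factor.
weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

Storing each weight in a single byte (plus a shared scale) is what makes low-bit formats raise effective computing density and reduce memory bandwidth per operation.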

VeriSilicon’s GPGPU-AI computing IPs provide native support for popular AI frameworks for both training and inference, such as PyTorch, TensorFlow, ONNX, and TVM. These IPs also support a General Purpose Computing Language (GPCL) that is compatible with mainstream GPGPU programming languages and widely used compilers. These capabilities are well aligned with the computing and scalability requirements of today’s leading LLMs, including models such as DeepSeek.

“The demand for AI computing on edge servers, both for inference and incremental training, is growing exponentially. This surge requires not only high efficiency but also strong programmability. VeriSilicon’s GPGPU-AI computing processors are architected to tightly integrate GPGPU computing with the AI accelerator at a fine-grained level. The advantages of this architecture have already been validated in multiple high-performance AI computing systems,” said Weijin Dai, Chief Strategy Officer, Executive Vice President, and General Manager of the IP Division at VeriSilicon. “The recent breakthroughs from DeepSeek further amplify the need for maximized AI computing efficiency to address increasingly demanding workloads. Our latest GPGPU-AI computing IPs have been enhanced to efficiently support Mixture-of-Experts (MoE) models and to optimize inter-core communication. Through close collaboration with multiple leading AI computing customers, we have extended our architecture to fully leverage the abundant bandwidth offered by 3D-stacked memory technologies. VeriSilicon continues to work hand in hand with ecosystem partners to drive real-world mass adoption of these advanced capabilities.”

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP. For more information, please visit: www.verisilicon.com


Contacts

Media Contact: press@verisilicon.com

Frequently Asked Questions

What are VeriSilicon's GPGPU-AI computing IPs used for?

They are used for demanding AI workloads such as Large Language Model (LLM) inference, multimodal perception, and real-time decision-making in thermally and power-constrained environments.

What AI frameworks do these IPs support?

They support popular AI frameworks for both training and inference, such as PyTorch, TensorFlow, ONNX, and TVM.

What memory interfaces are supported?

They are designed with high-bandwidth interfaces for 3D-stacked memory, LPDDR5X, and HBM, as well as PCIe Gen5/Gen6 and CXL.

First published on Mon, Jun 9, 2025

