
Rafay Launches Serverless Inference Offering To Accelerate Enterprise AI Adoption And Boost Revenues For GPU Cloud Providers
By Business Wire
New offering empowers NVIDIA Cloud Partners and GPU Cloud Providers to rapidly launch high-margin AI services on Rafay-powered infrastructure—accelerating time-to-market and maximizing ROI
SUNNYVALE, Calif.--(BUSINESS WIRE)--Today, Rafay Systems, a leader in cloud-native and AI infrastructure orchestration & management, announced general availability of the company’s Serverless Inference offering, a token-metered API for running open-source and privately trained or tuned LLMs. Many NVIDIA Cloud Partners (NCPs) and GPU Clouds are already leveraging the Rafay Platform to deliver a multi-tenant, Platform-as-a-Service experience to their customers, complete with self-service consumption of compute and AI applications. These NCPs and GPU Clouds can now deliver Serverless Inference as a turnkey service at no additional cost, enabling their customers to build and scale AI applications fast, without having to deal with the cost and complexity of building automation, governance, and controls for GPU-based infrastructure.
The global AI inference market is expected to reach $106 billion in 2025 and grow to $254 billion by 2030. Rafay’s Serverless Inference empowers GPU Cloud Providers (GPU Clouds) and NCPs to tap into the booming GenAI market by removing key adoption barriers: it automates the provisioning and segmentation of complex infrastructure, gives developers self-service access, lets providers rapidly launch new GenAI models as a service, generates billing data for on-demand usage, and more.
“Having spent the last year experimenting with GenAI, many enterprises are now focused on building agentic AI applications that augment and enhance their business offerings. The ability to rapidly consume GenAI models through inference endpoints is key to faster development of GenAI capabilities. This is where Rafay’s NCP and GPU Cloud partners have a material advantage,” said Haseeb Budhani, CEO and co-founder of Rafay Systems.
“With our new Serverless Inference offering, available for free to NCPs and GPU Clouds, our customers and partners can now deliver an Amazon Bedrock-like service to their customers, enabling access to the latest GenAI models in a scalable, secure, and cost-effective manner. Developers and enterprises can now integrate GenAI workflows into their applications in minutes, not months, without the pain of infrastructure management. This offering advances our company’s vision to help NCPs and GPU Clouds evolve from operating GPU-as-a-Service businesses to AI-as-a-Service businesses.”
Rafay Pioneers the Shift from GPU-as-a-Service to AI-as-a-Service
By offering Serverless Inference as an on-demand capability to downstream customers, Rafay helps NCPs and GPU Clouds address a key gap in the market. Rafay’s Serverless Inference offering provides the following key capabilities to NCPs and GPU Clouds:
- Seamless developer integration: OpenAI-compatible APIs require zero code migration for existing applications, with secure RESTful and streaming-ready endpoints that dramatically accelerate time-to-value for end customers (see the sketch after this list).
- Intelligent infrastructure management: Auto-scaling GPU nodes with right-sized model allocation capabilities dynamically optimize resources across multi-tenant and dedicated isolation options, eliminating over-provisioning while maintaining strict performance SLAs.
- Built-in metering and billing: Token-based and time-based usage tracking for both input and output provides granular consumption analytics and integrates with existing billing platforms through comprehensive metering APIs, enabling transparent, consumption-based pricing models.
- Enterprise-grade security and governance: HTTPS-only API endpoints, rotating bearer-token authentication, detailed access logging, and configurable token quotas per team, business unit, or application provide comprehensive protection that satisfies enterprise compliance requirements.
- Observability, storage, and performance monitoring: End-to-end visibility with logs and metrics archived in the provider’s own storage namespace, support for backends such as MinIO (a high-performance, AWS S3-compatible object storage system) and Weka (a high-performance, AI-native data platform), and centralized credential management ensure complete infrastructure and model performance transparency.
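To make the developer-integration point concrete, here is a minimal sketch of what calling such an OpenAI-compatible endpoint could look like from Python. The base URL, model name, and bearer token are hypothetical placeholders rather than documented Rafay values; the point is that the standard openai client works unchanged, and the per-request token counts it returns are the kind of input/output figures that token-based metering bills against.

```python
# Minimal sketch, assuming a hypothetical Rafay-powered provider endpoint.
# The base_url, model name, and API key below are illustrative placeholders,
# not documented Rafay values. Because the endpoint is OpenAI-compatible,
# the standard `openai` Python client works without code changes.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example-gpu-cloud.com/v1",  # provider's serverless endpoint (placeholder)
    api_key="YOUR_ROTATING_BEARER_TOKEN",                   # bearer token issued by the provider (placeholder)
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # any open-source or privately tuned model the provider serves
    messages=[{"role": "user", "content": "Draft a one-line status update for the team."}],
)

print(response.choices[0].message.content)

# Token-metered billing is driven by per-request counts like these.
print("input tokens:", response.usage.prompt_tokens)
print("output tokens:", response.usage.completion_tokens)
```

Streaming works through the same call by passing stream=True, in which case tokens arrive incrementally instead of in a single response object.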
Availability
Rafay’s Serverless Inference offering is available today to all customers and partners using the Rafay Platform to deliver multi-tenant, GPU- and CPU-based infrastructure. The company is also set to roll out fine-tuning capabilities shortly. These new additions are designed to help NCPs and GPU Clouds rapidly deliver high-margin, production-ready AI services without the usual operational complexity.
To read more about the technical aspects of the capabilities, visit the blog.
To learn more about Rafay, visit www.rafay.co and follow Rafay on X and LinkedIn.
About Rafay Systems
Founded in 2017, Rafay is committed to elevating CPU and GPU-based infrastructure to a strategic asset for enterprises and cloud service providers. Enterprises, NVIDIA Cloud Partners, and GPU Clouds leverage the company’s GPU PaaS™ (Platform-as-a-Service) stack to simplify the complexities of managing cloud and on-premises infrastructure while enabling self-service workflows for platform and DevOps teams, all within one multi-tenant offering. The Rafay Platform also helps companies improve governance capabilities, optimize costs of CPU & GPU resources, and accelerate the delivery of cloud-native and AI-powered applications. Customers such as MoneyGram and Guardant Health entrust Rafay to be the cornerstone of their modern infrastructure strategy and AI architecture. Gartner has recognized Rafay as a Cool Vendor in Container Management. GigaOm named Rafay a Leader and Outperformer in the GigaOm Radar Report for Managed Kubernetes.
To learn more about Rafay, visit www.rafay.co.
Frequently Asked Questions
What is Rafay's Serverless Inference offering?
It's a token-metered API for running open-source and privately trained LLMs, enabling rapid deployment of AI services.
Who benefits from this offering?
NVIDIA Cloud Partners (NCPs) and GPU Cloud providers can offer it to their customers as a turnkey service to easily build and scale AI applications.
How does it accelerate AI development?
It eliminates infrastructure management complexities, allowing developers to integrate GenAI workflows in minutes, not months, through OpenAI-compatible APIs and auto-scaling.
First published on Fri, May 9, 2025