TechDogs: "Akamai Sharpens Its AI Edge With Launch Of Akamai Cloud Inference"


Akamai Sharpens Its AI Edge With Launch Of Akamai Cloud Inference

PR Newswire

New service gives companies the ability to realize a 3x improvement in throughput, 60% less latency, and 86% lower cost than traditional hyperscale infrastructure

CAMBRIDGE, Mass., March 27, 2025 /PRNewswire/ -- Akamai (NASDAQ: AKAM), the cybersecurity and cloud computing company that powers and protects business online, today unveiled Akamai Cloud Inference to usher in a faster, more efficient wave of innovation for organizations looking to turn predictive and large language models (LLMs) into real-world action. Akamai Cloud Inference runs on Akamai Cloud, the world's most distributed platform, to address the escalating limitations of centralized cloud models.

"Getting AI data closer to users and devices is hard, and it's where legacy clouds struggle," said Adam Karon, Chief Operating Officer and General Manager, Cloud Technology Group at Akamai. "While the heavy lifting of training LLMs will continue to happen in big hyperscale data centers, the actionable work of inferencing will take place at the edge where the platform Akamai has built over the past two and a half decades becomes vital for the future of AI and sets us apart from every other cloud provider in the market."

AI inference on Akamai Cloud

Akamai's new solution provides tools for platform engineers and developers to build and run AI applications and data-intensive workloads closer to end users, delivering 3x better throughput while reducing latency up to 2.5x. Using Akamai's solution, businesses can save up to 86% on AI inference and agentic AI workloads compared to traditional hyperscaler infrastructure. Akamai Cloud Inference includes:

  • Compute: Akamai Cloud offers a versatile compute arsenal, from classic CPUs for fine-tuned inference, to powerful accelerated-compute options in GPUs, to tailored ASIC VPUs, providing the right horsepower for a spectrum of AI inference challenges. Akamai integrates with NVIDIA's AI Enterprise ecosystem, leveraging Triton, TAO Toolkit, TensorRT, and NVFlare to optimize the performance of AI inference on NVIDIA GPUs.
  • Data management: Akamai enables customers to unlock the full potential of AI inference with a cutting-edge data fabric purpose-built for modern AI workloads. Akamai has partnered with VAST Data to provide streamlined access to real-time data to accelerate inference-related tasks, essential to delivering relevant results and a responsive experience. This is complemented by highly scalable object storage to manage the volume and variety of datasets critical to AI applications, and integration with leading vector database vendors, including Aiven and Milvus, to enable retrieval-augmented generation (RAG). With this data management stack, Akamai securely stores fine-tuned model data and training artifacts to deliver low-latency AI inference at global scale.
  • Containerization: Containerizing AI workloads enables demand-based autoscaling, improved application resilience, and hybrid/multicloud portability while optimizing both performance and cost. With Kubernetes, Akamai delivers faster, cheaper, and more secure AI inference at petabyte-scale performance. Akamai Cloud Inference is underpinned by Linode Kubernetes Engine (LKE)-Enterprise, a new enterprise edition of Akamai Cloud's Kubernetes orchestration platform designed for large-scale enterprise workloads, and the recently announced Akamai App Platform. Together, these allow Akamai Cloud Inference to quickly deploy an AI-ready platform of open source Kubernetes projects, including KServe, Kubeflow, and SpinKube, seamlessly integrated to streamline the deployment of AI models for inference.
  • Edge compute: To simplify how developers build AI-powered applications, Akamai AI Inference includes WebAssembly (Wasm) capabilities. Working with Wasm providers like Fermyon, Akamai enables developers to execute inferencing for LLMs directly from serverless apps, allowing customers to execute lightweight code at the edge to enable latency-sensitive applications.
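The retrieval-augmented generation (RAG) pattern mentioned above can be sketched in miniature. The following is a hypothetical, self-contained Python illustration of the flow, not Akamai's implementation: embed a query, retrieve the most similar stored document from a vector index, and assemble a grounded prompt for an inference call. The documents and vectors are made up for the example; a production deployment would use a real embedding model and a managed vector database such as Milvus.

```python
import math

# Toy "embeddings": in production these vectors would come from an embedding
# model and live in a vector database; here they are purely illustrative.
DOCS = {
    "returns": ("Orders may be returned within 30 days.", [0.9, 0.1, 0.0]),
    "shipping": ("Standard shipping takes 3-5 business days.", [0.1, 0.9, 0.1]),
    "warranty": ("Hardware carries a one-year limited warranty.", [0.0, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k stored documents most similar to the query vector."""
    ranked = sorted(DOCS.values(), key=lambda d: cosine(query_vec, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_vec):
    """Assemble a RAG prompt: retrieved context plus the user's question."""
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

# A query about returns maps (by assumption) to a vector near the 'returns' doc.
prompt = build_prompt("How long do I have to return an order?", [0.8, 0.2, 0.1])
print(prompt)
```

The design point the example illustrates is why inference latency depends on data locality: every request performs a retrieval before the model is ever invoked, so keeping the vector index and object storage close to the user shortens the critical path.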

Together, these tools create a powerful platform for low-latency, AI-powered applications that allows companies to deliver the experience their users demand. Akamai Cloud Inference runs on the company's massively distributed platform capable of consistently delivering over one petabyte per second of throughput for data-intensive workloads. Comprising more than 4,200 points of presence across greater than 1,200 networks in over 130 countries worldwide, Akamai Cloud makes compute resources available from cloud to edge while accelerating application performance and increasing scalability.

The shift from training to inference

As AI adoption matures, enterprises are recognizing that the hype around LLMs has created a distraction, drawing focus away from practical AI solutions better suited to solving specific business problems. LLMs excel at general-purpose tasks like summarization, translation, and customer service, but they are very large models that are expensive and time-consuming to train. Many enterprises have found themselves constrained by architectural and cost requirements, including data center and computational power; well-structured, secure, and scalable data systems; and the challenges that location and security requirements place on decision latency. Lightweight AI models, designed to address specific business problems, can be optimized for individual industries, can use proprietary data to create measurable outcomes, and represent a better return on investment for enterprises today.

AI inference needs a more distributed cloud

Increasingly, data will be generated outside of centralized data centers or cloud regions. This shift is driving demand for AI solutions that leverage data generation closer to the point of origin. This fundamentally reshapes infrastructure needs as enterprises move beyond building and training LLMs, toward using data for faster, smarter decisions and investing in more personalized experiences. Enterprises recognize that they can generate more value by leveraging AI to manage and improve their business operations and processes. Distributed cloud and edge architectures are emerging as preferable for operational intelligence use cases because they can provide real-time, actionable insights across distributed assets even in remote environments. Early customer examples on Akamai Cloud include in-car voice assistance, AI-powered crop management, image optimization for consumer product marketplaces, virtual garment visualization shopping experiences, automated product description generators, and customer feedback sentiment analyzers.

"Training an LLM is like creating a map, requiring you to gather data, analyze terrain, and plot routes. It's slow and resource-intensive, but once built, it's highly useful. AI inference is like using a GPS, instantly applying that knowledge, recalculating in real time, and adapting to changes to get you where you need to go," explained Karon. "Inference is the next frontier for AI."

About Akamai

Akamai is the cybersecurity and cloud computing company that powers and protects business online. Our market-leading security solutions, superior threat intelligence, and global operations team provide defense in depth to safeguard enterprise data and applications everywhere. Akamai's full-stack cloud computing solutions deliver performance and affordability on the world's most distributed platform. Global enterprises trust Akamai to provide the industry-leading reliability, scale, and expertise they need to grow their business with confidence. Learn more at akamai.com and akamai.com/blog, or follow Akamai Technologies on X and LinkedIn.

Contacts

Akamai Media Relations


akamaipr@akamai.com 

Akamai Investor Relations

invrel@akamai.com 

View original content to download multimedia: https://www.prnewswire.com/news-releases/akamai-sharpens-its-ai-edge-with-launch-of-akamai-cloud-inference-302412571.html

SOURCE Akamai Technologies, Inc.

Frequently Asked Questions

What is Akamai Cloud Inference?

Akamai Cloud Inference is a new solution that allows businesses to run AI applications and data-intensive workloads closer to end users, resulting in significantly improved performance and cost savings compared to traditional hyperscale infrastructure.

How does Akamai Cloud Inference improve AI performance?

It achieves 3x better throughput and up to 2.5x lower latency by leveraging Akamai's globally distributed platform. This brings compute resources closer to the data, reducing the distance data needs to travel.
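For readers reconciling the figures: the "up to 2.5x lower latency" here and the "60% less latency" in the release's subheadline describe the same improvement, since a 2.5x speedup leaves 1/2.5 of the original latency. A one-line check:

```python
# A 2.5x latency improvement means each request takes 1/2.5 of the original time.
speedup = 2.5
remaining_fraction = 1 / speedup    # 0.4 of the original latency remains
reduction = 1 - remaining_fraction  # 0.6 -> the "60% less latency" figure
print(f"{reduction:.0%}")           # prints "60%"
```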

What are the key benefits of using Akamai Cloud Inference?

Key benefits include significantly reduced latency, increased throughput, up to 86% cost savings on AI inference, and enhanced scalability and security through a robust data management system and containerization capabilities.

First published on Fri, Mar 28, 2025


