Event Concluded

Over 9000 Models! How to Reliably Scale MLOps

Webinar
Dec 16, 2025
14:00
About Event
Join our webinar to learn pipeline-first MLOps approaches, automation, and best practices for large-scale predictive AI
Scaling predictive AI from a few models to thousands is complex. Learn how to make large-scale MLOps safe, repeatable, and efficient with cloud-native pipelines, GitOps, and automation.
Predictive AI powers critical systems like fraud detection, demand forecasting, and supply chains. But scaling from one model to thousands is still a major challenge. This webinar presents a scalable blueprint for operating thousands of ML models on Kubernetes, based on practical, real-world use cases.
We shift the unit of scale from individual models to pipelines as first-class citizens. A live demo walks through the pipeline steps: Git-driven onboarding, continuous training with OpenShift and Kubeflow Pipelines, data and artifact versioning, model scanning and containerization, and GitOps promotion via Argo CD.
We’ll also cover monitoring, drift detection, and model registry lineage, highlighting how cloud-native patterns extend across the ML lifecycle. Attendees will learn why pipelines, not individual models, are the true unit of scale, and how Git and automation make it safe to operate at 9,000+ models.
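Drift detection at thousands of models only works if the check itself is cheap and automated per pipeline. As a hedged illustration (one common metric, not necessarily the one shown in the webinar), a population stability index (PSI) check in plain Python:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.

    Both inputs are bin proportions (summing to ~1) over the same bins,
    e.g. a feature's distribution at training time vs. in production.
    """
    eps = 1e-6  # floor empty bins to avoid log(0) and division by zero
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

def drifted(expected: list[float], actual: list[float], threshold: float = 0.2) -> bool:
    """Common rule of thumb: PSI above ~0.2 signals significant drift."""
    return psi(expected, actual) > threshold
```

Run per model on a schedule, a check like this turns "did anything drift?" into a boolean a pipeline can act on automatically, for example by triggering retraining.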
In this webinar, we’ll cover:
- Overcome scaling challenges: why traditional model-centric approaches break down
- Adopt pipeline-first MLOps: streamline onboarding, training, and deployment
- Automate model lifecycles with Tekton, Kubeflow Pipelines, DVC, and Argo CD
- Monitor and manage drift: maintain reliability, compliance, and trust
- Operate safely at scale: the tools and practices that keep growth under control
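The GitOps promotion step listed above can be sketched as follows; the manifest layout and names are illustrative assumptions, not the demo's actual repo structure. Promotion is a declarative change: Argo CD syncs whatever the environment manifest in Git says, so "promote" simply means copying a model's image reference from the staging manifest to the prod manifest.

```python
import copy

def promote(manifests: dict, model: str, src: str = "staging", dst: str = "prod") -> dict:
    """Promote a model by copying its spec from one environment manifest to another.

    `manifests` mimics files in a GitOps repo: env -> model -> spec.
    In a real setup this edit would land as a Git commit, which Argo CD
    then syncs to the target cluster; nothing touches Kubernetes directly.
    """
    updated = copy.deepcopy(manifests)  # never mutate the source of truth in place
    if model not in updated.get(src, {}):
        raise KeyError(f"{model} not found in {src} manifest")
    updated.setdefault(dst, {})[model] = copy.deepcopy(updated[src][model])
    return updated
```

Because the change is just data in Git, rollback is `git revert`, and every promotion across 9,000+ models leaves an audit trail for free.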
Who should attend: Data scientists, ML engineers, AI platform architects, DevOps engineers, and anyone responsible for scaling ML/AI workloads in production.