Agentic AI Atlas by a5c.ai
Agentic AI Atlas · ML Pipeline Stack (PyTorch/TensorFlow, MLflow, BentoML, K8s)

StackProfile overview

stack-profile:ml-pipeline-stack

Reference · live

ML Pipeline Stack (PyTorch/TensorFlow, MLflow, BentoML, K8s) overview

A production machine learning pipeline stack: PyTorch or TensorFlow as the training framework; MLflow for experiment tracking, model registry, and reproducibility; BentoML for packaging models into deployable services; Kubernetes for orchestrating training jobs and serving endpoints; and Prometheus for monitoring model performance and infrastructure health. The pipeline spans data preparation, training, evaluation, packaging, deployment, and monitoring. MLflow tracks hyperparameters, metrics, and artifacts across experiments. BentoML wraps trained models into containerized REST/gRPC services with adaptive batching. Kubernetes provides autoscaling for both training (via Jobs or operators such as Kubeflow) and inference (via Deployments with a HorizontalPodAutoscaler). Prometheus scrapes model latency, throughput, and data-drift metrics. This stack is common in organizations that have outgrown notebook-driven ML and need repeatable, observable, production-grade model lifecycle management.
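The overview mentions Prometheus scraping data-drift metrics. As an illustration of what such a metric computes, here is a plain-Python sketch of the Population Stability Index, one common drift statistic; the bin count, epsilon, and threshold conventions are illustrative assumptions, not tied to any specific tool in this stack:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Bins are derived from the expected (reference) sample; a small
    epsilon stands in for empty bins so log(0) never occurs.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant samples

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)  # clamp into range
            counts[max(i, 0)] += 1
        return [(c / len(sample)) or 1e-6 for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A PSI near 0 indicates matching distributions; values above roughly 0.25 are conventionally read as significant drift. In this stack such a number would typically be exported as a Prometheus gauge and alerted on.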

StackProfile · Outgoing: 18 · Incoming: 0

Attributes

displayName
ML Pipeline Stack (PyTorch/TensorFlow, MLflow, BentoML, K8s)
composes
  • language:python
  • library:pytorch
  • library:tensorflow
  • tool:mlflow
  • tool:bentoml
  • tool:kubernetes
  • tool:prometheus
  • tool:docker
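The composes list above includes Kubernetes, and the overview mentions inference autoscaling via Deployments with a HorizontalPodAutoscaler. A minimal illustrative manifest for that pattern follows; the resource names and thresholds are assumptions, not part of this record:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: model-serving-hpa          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: bento-model-service      # assumed name of the BentoML serving Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

In practice, serving workloads are often scaled on custom metrics (e.g. request latency scraped by Prometheus) rather than CPU alone.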

Outgoing edges

applies_to (2)
  • domain:ml-ops · Domain · MLOps
  • domain:machine-learning · Domain · Machine Learning
composed_of (8)
  • language:python · Language · Python
  • library:pytorch · Library · PyTorch
  • library:tensorflow · Library · TensorFlow
  • tool:mlflow · Tool · MLflow
  • tool:bentoml · Tool · BentoML
  • tool:kubernetes · Tool · Kubernetes
  • tool:prometheus · Tool · Prometheus
  • tool:docker · Tool · Docker
requires_skill_area (5)
  • skill-area:model-serving-deployment · SkillArea · Model Serving and Deployment
  • skill-area:machine-learning-frameworks · SkillArea · Machine Learning Frameworks
  • skill-area:ci-cd-ml-pipelines · SkillArea · CI/CD for ML Pipelines
  • skill-area:containerization · SkillArea · Containerization
  • skill-area:model-monitoring-drift-detection · SkillArea · Model Monitoring and Drift Detection
used_by_role (3)
  • role:ml-engineer · Role · Machine Learning Engineer
  • role:ml-ops-engineer · Role · MLOps Engineer
  • role:data-scientist · Role · Data Scientist
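The Model Serving and Deployment skill area above connects to the adaptive batching the overview attributes to BentoML. A toy plain-Python sketch of the underlying idea (collect requests until a batch is full or a latency budget expires) follows; the class and parameter names are illustrative and this is not BentoML's API:

```python
import time
from collections import deque

class MicroBatcher:
    """Toy micro-batcher: accumulates requests and flushes them as one
    batch when either the batch-size cap or the latency budget is hit."""

    def __init__(self, handler, max_batch_size=8, max_wait_s=0.01):
        self.handler = handler              # callable: list of inputs -> list of outputs
        self.max_batch_size = max_batch_size
        self.max_wait_s = max_wait_s
        self._queue = deque()
        self._deadline = None

    def submit(self, item):
        """Enqueue one request; returns the batch result if this submit
        filled the batch, otherwise None."""
        if not self._queue:
            self._deadline = time.monotonic() + self.max_wait_s
        self._queue.append(item)
        if len(self._queue) >= self.max_batch_size:
            return self.flush()
        return None

    def poll(self):
        """Flush if the latency budget for the oldest request expired."""
        if self._queue and time.monotonic() >= self._deadline:
            return self.flush()
        return None

    def flush(self):
        batch = list(self._queue)
        self._queue.clear()
        return self.handler(batch)
```

Real serving frameworks run this loop on a background thread or event loop and additionally adapt the wait window to observed traffic; the sketch only shows the batching trade-off itself.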

Incoming edges

None.

Related pages

No related wiki pages for this record.
