Agentic AI Atlas · by a5c.ai
Record
Agentic AI Atlas · Feature Store & MLOps Stack (Feast, MLflow, BentoML, K8s, Prometheus)
stack-profile:feature-store-mlops

Available views: overview · json · graph
StackProfile overview

stack-profile:feature-store-mlops

Reference · live

Feature Store & MLOps Stack (Feast, MLflow, BentoML, K8s, Prometheus) overview

A production MLOps platform centered on feature management and model lifecycle governance. Feast provides a feature store that bridges offline training (batch features from the warehouse) and online serving (low-latency feature lookups via Redis or DynamoDB), ensuring train-serve consistency. MLflow tracks experiments, registers model versions, and manages the promotion pipeline from staging to production. BentoML packages trained models into containerized inference services with adaptive batching and multi-model serving. Kubernetes orchestrates both training jobs and serving deployments with autoscaling. Prometheus monitors inference latency, feature freshness, and data drift. Choose this stack when your ML team needs reproducible pipelines, governed model promotion, and feature reuse across multiple models.
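The train-serve consistency that Feast provides can be illustrated with a minimal, self-contained sketch (plain Python, not the Feast API): a single feature schema drives both the offline training path and the online lookup path, and materialization keeps only the freshest row per entity, much like Feast's materialize step does against Redis or DynamoDB. The entity and feature names here are illustrative.

```python
from datetime import datetime, timezone

# Shared feature schema: the same names/types serve training and serving.
FEATURE_VIEW = ["trips_today", "avg_rating"]

# "Offline" store: historical rows, as they would come from the warehouse.
offline_rows = [
    {"driver_id": 1001, "trips_today": 4, "avg_rating": 4.8,
     "event_ts": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"driver_id": 1001, "trips_today": 7, "avg_rating": 4.9,
     "event_ts": datetime(2024, 1, 2, tzinfo=timezone.utc)},
]

def materialize(rows):
    """Keep only the freshest row per entity -- the role of Feast's
    materialization into an online store such as Redis or DynamoDB."""
    latest = {}
    for row in rows:
        key = row["driver_id"]
        if key not in latest or row["event_ts"] > latest[key]["event_ts"]:
            latest[key] = row
    return {k: {f: v[f] for f in FEATURE_VIEW} for k, v in latest.items()}

online_store = materialize(offline_rows)

# Serving-time lookup returns exactly the schema training saw.
features = online_store[1001]
print(features)  # {'trips_today': 7, 'avg_rating': 4.9}
```

Because both paths read through the same `FEATURE_VIEW` definition, a feature renamed or retyped offline fails loudly online instead of silently skewing predictions.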

StackProfile · Outgoing: 20 · Incoming: 0

Attributes

displayName
Feature Store & MLOps Stack (Feast, MLflow, BentoML, K8s, Prometheus)
description
A production MLOps platform centered on feature management and model lifecycle governance. Feast provides a feature store that bridges offline training (batch features from the warehouse) and online serving (low-latency feature lookups via Redis or DynamoDB), ensuring train-serve consistency. MLflow tracks experiments, registers model versions, and manages the promotion pipeline from staging to production. BentoML packages trained models into containerized inference services with adaptive batching and multi-model serving. Kubernetes orchestrates both training jobs and serving deployments with autoscaling. Prometheus monitors inference latency, feature freshness, and data drift. Choose this stack when your ML team needs reproducible pipelines, governed model promotion, and feature reuse across multiple models.
composes
  • tool:feast
  • tool:mlflow
  • tool:bentoml
  • tool:kubernetes
  • tool:prometheus
  • language:python
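The staging-to-production promotion pipeline that MLflow manages for this stack can be sketched as a governance rule: a toy registry (plain Python, not the MLflow client) where stage transitions mirror MLflow's registry stages and a promotion to Production must clear a quality gate. The model name, version, and AUC threshold are illustrative.

```python
# Legal stage transitions, mirroring MLflow's registry stages:
# None -> Staging -> Production/Archived, Production -> Archived.
ALLOWED = {
    "None": {"Staging"},
    "Staging": {"Production", "Archived"},
    "Production": {"Archived"},
}

def promote(registry, name, version, target, metrics, min_auc=0.85):
    """Move a model version to `target` only if the transition is legal
    and, for Production, the candidate clears the quality gate."""
    current = registry[(name, version)]
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    if target == "Production" and metrics.get("auc", 0.0) < min_auc:
        raise ValueError("quality gate failed: auc below threshold")
    registry[(name, version)] = target
    return target

registry = {("churn-model", 3): "Staging"}
promote(registry, "churn-model", 3, "Production", {"auc": 0.91})
print(registry[("churn-model", 3)])  # Production
```

In the real stack, this check would live in the CI/CD pipeline and call the MLflow registry rather than a dict, but the invariant is the same: no version reaches Production without passing through Staging and a metric gate.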

Outgoing edges

applies_to (2)
  • domain:ml-ops · Domain · MLOps
  • domain:machine-learning · Domain · Machine Learning
composed_of (8)
  • tool:feast · Tool · Feast
  • tool:mlflow · Tool · MLflow
  • tool:bentoml · Tool · BentoML
  • tool:kubernetes · Tool · Kubernetes
  • tool:prometheus · Tool · Prometheus
  • language:python · Language · Python
  • tool:docker · Tool · Docker
  • library:redis · Library · node-redis
follows_workflow (2)
  • workflow:model-deployment-pipeline · Workflow · Model Deployment Pipeline
  • workflow:feature-store-management · Workflow · Feature Store Management
requires_skill_area (5)
  • skill-area:feature-engineering-pipelines · SkillArea · Data and Feature Engineering Pipelines
  • skill-area:model-registry-management · SkillArea · Automated Training and Model Registry
  • skill-area:model-serving-deployment · SkillArea · Model Serving and Deployment
  • skill-area:ci-cd-ml-pipelines · SkillArea · CI/CD for ML Pipelines
  • skill-area:model-monitoring-drift-detection · SkillArea · Model Monitoring and Drift Detection
used_by_role (3)
  • role:ml-ops-engineer · Role · MLOps Engineer
  • role:ml-engineer · Role · Machine Learning Engineer
  • role:data-engineer · Role · Data Engineer
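The drift-detection responsibility named in the skill areas above can be sketched with a population-stability-index (PSI) style check, comparing the feature distribution seen at training time against the one observed in serving. Bucket edges and the 0.2 alert threshold are illustrative conventions, not part of this stack's configuration; in production the serving histogram would come from Prometheus metrics.

```python
import math

def psi(expected, actual, edges):
    """PSI over shared buckets; higher means the serving distribution
    has drifted further from the training distribution."""
    def frac(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            for i in range(len(edges) - 1):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(len(values), 1)
        # Floor at a tiny value so the log term is always defined.
        return [max(c / total, 1e-6) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

edges = [0, 2, 4, 6, 8, 10]
train = [1, 1, 3, 3, 5, 5, 7, 7]   # spread across the lower buckets
serve = [7, 7, 7, 9, 9, 9, 9, 9]   # mass shifted to the top buckets
score = psi(train, serve, edges)
print(f"PSI={score:.2f}", "drift!" if score > 0.2 else "ok")
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.2 as actionable drift; the alert itself would fire as a Prometheus alerting rule on the exported score.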

Incoming edges

None.

Related pages

No related wiki pages for this record.
