stack-profile:feature-store-mlops
Feature Store & MLOps Stack (Feast, MLflow, BentoML, K8s, Prometheus) overview
A production MLOps platform centered on feature management and model lifecycle governance. Feast provides a feature store that bridges offline training (batch features from the warehouse) and online serving (low-latency feature lookups via Redis or DynamoDB), ensuring train-serve consistency. MLflow tracks experiments, registers model versions, and manages the promotion pipeline from staging to production. BentoML packages trained models into containerized inference services with adaptive batching and multi-model serving. Kubernetes orchestrates both training jobs and serving deployments with autoscaling. Prometheus monitors inference latency, feature freshness, and data drift. Choose this stack when your ML team needs reproducible pipelines, governed model promotion, and feature reuse across multiple models.
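The train-serve consistency that Feast provides for offline training data rests on point-in-time-correct joins: each training event is matched to the latest feature value recorded at or before that event's timestamp, so no future information leaks into training. Below is a minimal pure-Python sketch of that join logic (illustrative only — not Feast's API; Feast performs this via `get_historical_features` against the offline store):

```python
from bisect import bisect_right
from collections import defaultdict

def point_in_time_join(feature_rows, events):
    """Attach to each (entity, event_ts) training event the latest
    feature value recorded at or before event_ts, avoiding leakage.

    feature_rows: iterable of (entity_id, ts, value)
    events:       iterable of (entity_id, event_ts)
    returns:      list of (entity_id, event_ts, value_or_None)
    """
    # Index feature history per entity, ordered by timestamp.
    history = defaultdict(list)
    for entity, ts, value in sorted(feature_rows, key=lambda r: r[1]):
        history[entity].append((ts, value))

    joined = []
    for entity, event_ts in events:
        series = history.get(entity, [])
        # Rightmost feature row with ts <= event_ts.
        idx = bisect_right([ts for ts, _ in series], event_ts)
        value = series[idx - 1][1] if idx else None
        joined.append((entity, event_ts, value))
    return joined

rows = [("u1", 1, 0.2), ("u1", 5, 0.9)]
events = [("u1", 3), ("u1", 6), ("u2", 3)]
print(point_in_time_join(rows, events))
# → [('u1', 3, 0.2), ('u1', 6, 0.9), ('u2', 3, None)]
```

At serving time the online store (Redis/DynamoDB) holds only the latest materialized value per entity, so the same feature definitions yield consistent values in both paths.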
Attributes
Outgoing edges
- domain:ml-ops·DomainMLOps
- domain:machine-learning·DomainMachine Learning
- tool:feast·ToolFeast
- tool:mlflow·ToolMLflow
- tool:bentoml·ToolBentoML
- tool:kubernetes·ToolKubernetes
- tool:prometheus·ToolPrometheus
- language:python·LanguagePython
- tool:docker·ToolDocker
- library:redis·Libraryredis-py
- workflow:model-deployment-pipeline·WorkflowModel Deployment Pipeline
- workflow:feature-store-management·WorkflowFeature Store Management
- skill-area:feature-engineering-pipelines·SkillAreaData and Feature Engineering Pipelines
- skill-area:model-registry-management·SkillAreaAutomated Training and Model Registry
- skill-area:model-serving-deployment·SkillAreaModel Serving and Deployment
- skill-area:ci-cd-ml-pipelines·SkillAreaCI/CD for ML Pipelines
- skill-area:model-monitoring-drift-detection·SkillAreaModel Monitoring and Drift Detection
- role:ml-ops-engineer·RoleMLOps Engineer
- role:ml-engineer·RoleMachine Learning Engineer
- role:data-engineer·RoleData Engineer