ML at Scale.
Open Source.

The end-to-end platform for building, deploying, and monitoring ML models

Everything you need for ML in production

Feature Management

Centralized feature store with versioning, lineage tracking, and real-time serving for consistent ML features across training and inference.
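The versioning guarantee behind "consistent features across training and inference" can be sketched in a few lines of plain Python. This is an illustrative in-memory model only, not the platform's feature store API:

```python
# Illustrative sketch: a minimal versioned feature store.
# Every write creates a new version, so training and serving
# can pin and replay the exact same feature values.
class FeatureStore:
    def __init__(self):
        self._versions = {}  # (entity_id, feature_name) -> list of values

    def write(self, entity_id, name, value):
        """Append a new version instead of overwriting."""
        self._versions.setdefault((entity_id, name), []).append(value)

    def read(self, entity_id, name, version=-1):
        """version=-1 reads the latest; older versions stay addressable."""
        return self._versions[(entity_id, name)][version]
```

Pinning a version at training time and reading the same version at inference time is what keeps the two consistent.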

Model Training

Scalable distributed training with experiment tracking, hyperparameter tuning, and automatic resource management.
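Hyperparameter tuning at its simplest is an exhaustive sweep over a parameter grid. The sketch below is a generic illustration, not the platform's tuner; `train_and_score` is a hypothetical callback that trains a model and returns a validation score:

```python
# Illustrative sketch: exhaustive grid search over hyperparameters.
from itertools import product

def grid_search(train_and_score, grid: dict):
    """Try every combination in `grid`; return (best_params, best_score)."""
    best_params, best_score = None, float("-inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Real tuners replace the exhaustive loop with random or Bayesian search, but the contract (params in, score out, keep the best) is the same.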

Evaluation

Comprehensive model evaluation with automated metrics, A/B testing, and performance benchmarking across datasets.
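The statistics behind an A/B test can be as small as a two-proportion z-test on conversion counts. This is a generic illustration, not the platform's evaluation API:

```python
# Illustrative sketch: two-proportion z-test for an A/B comparison.
import math

def ab_z_test(success_a, n_a, success_b, n_b):
    """z statistic for the difference in conversion rates (B minus A)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

A |z| above roughly 1.96 corresponds to significance at the 5% level for a two-sided test.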

Deployment

One-click deployment to production with canary releases, traffic splitting, and automatic rollback capabilities.
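Traffic splitting for a canary release is typically a deterministic hash of a request key against the canary fraction. The sketch below is illustrative only and not the platform's built-in router:

```python
# Illustrative sketch: sticky hash-based traffic splitting for a canary.
import hashlib

def route(request_id: str, canary_fraction: float) -> str:
    """Route a request to 'canary' or 'stable' by hashing its ID.

    Hashing keeps routing sticky: the same request_id always maps
    to the same version for a given canary_fraction.
    """
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "canary" if bucket < canary_fraction else "stable"
```

Rolling back is then just setting `canary_fraction` to 0; ramping up is raising it, with no re-routing of already-bucketed users.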

Monitoring

Real-time model monitoring with drift detection, alerting, and observability for production ML systems.
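A common drift-detection signal is the population stability index (PSI) between a reference sample and live traffic. The sketch below is a minimal generic implementation, not the platform's monitoring API:

```python
# Illustrative sketch: population stability index (PSI) drift check.
from collections import Counter
import math

def psi(expected, observed, bins=10):
    """PSI between two numeric samples over equal-width bins.

    Values above ~0.2 are commonly treated as significant drift.
    """
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0

    def fractions(sample):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in sample)
        n = len(sample)
        # Small epsilon avoids log(0) for empty bins.
        return [max(counts.get(i, 0) / n, 1e-6) for i in range(bins)]

    e, o = fractions(expected), fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))
```

In production this runs per feature and per prediction score on a schedule, with alerts wired to the threshold.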

Simple, powerful API

From training to production in minutes, not months

import michelangelo.uniflow.core as uniflow
from michelangelo.uniflow.plugins.ray import RayTask

@uniflow.task(config=RayTask(head_cpu=2, worker_instances=2))
def train(train_data, validation_data, params: dict):
    """Distributed training with Ray."""
    # XGBoostTrainer is assumed to be in scope (e.g. from the Ray plugin).
    trainer = XGBoostTrainer(
        params=params,
        datasets={"train": train_data, "validation": validation_data},
    )
    return trainer.fit()

@uniflow.workflow()
def train_workflow(dataset: str):
    """End-to-end ML training pipeline."""
    # load_and_split is assumed to be defined elsewhere in the project.
    train_data, val_data = load_and_split(dataset)
    result = train(train_data, val_data, params={"max_depth": 5})
    return result

if __name__ == "__main__":
    ctx = uniflow.create_context()
    ctx.run(train_workflow, dataset="s3://data/training.parquet")