Model Serving Made Easy

From trained ML models to production-grade prediction services with just a few lines of code


A simple yet flexible workflow empowering Data Science teams to continuously ship prediction services.

Unified format for deployment

A unified model packaging format enabling both online and offline serving on any platform.
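To make the packaging format concrete, here is a minimal sketch of defining, packing, and saving a prediction service, written against the 0.x quickstart-era API; the import paths, the `infer_pip_packages` option, and the scikit-learn model are illustrative assumptions that vary between releases:

```python
import bentoml
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Declare the service's environment and the trained model artifact it wraps.
@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([SklearnModelArtifact('model')])
class IrisClassifier(bentoml.BentoService):

    # Expose a prediction API that accepts a pandas DataFrame.
    @bentoml.api(input=DataframeInput(), batch=True)
    def predict(self, df):
        return self.artifacts.model.predict(df)

# Train a toy model, pack it into the service, and save a versioned,
# self-contained bundle usable for both online and offline serving.
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier().fit(X, y)

svc = IrisClassifier()
svc.pack('model', clf)
saved_path = svc.save()
print(saved_path)
```

The saved bundle can then typically be served over REST with the `bentoml serve` CLI or loaded back into Python for offline batch scoring; exact commands and flags depend on the release.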

High-performance model serving

100x the throughput of a regular Flask-based model server, thanks to BentoML's advanced micro-batching mechanism. See the published benchmarks for details.
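Micro-batching hinges on the `batch=True` flag in the API definition: the server collects concurrent HTTP requests and hands them to the handler as one vectorized call. A minimal sketch, assuming the 0.x-era `mb_max_batch_size` and `mb_max_latency` tuning parameters (names and defaults are assumptions that differ across releases):

```python
import bentoml
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact

@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([SklearnModelArtifact('model')])
class BatchedClassifier(bentoml.BentoService):

    # batch=True marks the API as batch-capable: concurrent requests are
    # transparently merged into a single DataFrame before reaching it.
    @bentoml.api(
        input=DataframeInput(),
        batch=True,
        mb_max_batch_size=1000,  # assumed knob: cap on micro-batch size
        mb_max_latency=300,      # assumed knob: max queueing delay in ms
    )
    def predict(self, df):
        # One vectorized model call serves the whole micro-batch, which is
        # what amortizes per-request overhead versus a plain Flask handler.
        return self.artifacts.model.predict(df)
```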

DevOps best practices baked in

Deliver high-quality prediction services that speak the DevOps language and integrate perfectly with common infrastructure tools.

Organizations using and contributing to BentoML


Built to work with DevOps & Infrastructure tools such as Knative and Cloud Run.

BentoML for teams (beta)

Serving Admin

Keep all your team's models, deployments, and changes highly visible, and control access via SSO, RBAC, client authentication, and audit logs.

Fully managed

A DevOps-free BentoML workflow, from prediction service registry and deployment automation to endpoint monitoring, all configured automatically for your team. A solid foundation for running serious ML workloads in production.



BentoML supports all major ML frameworks, including TensorFlow, XGBoost, ONNX, and Core ML.


Built with BentoML


Featured

Sentiment Analysis with BERT: This service uses a BERT model trained with the TensorFlow framework to predict the sentiment of movie reviews.

Image Classification: This service uses ResNet50 from the ONNX model zoo to identify objects in a given image.

Titanic Survival Prediction: This prediction service uses a model trained with the XGBoost framework to predict the survival rate of a given passenger aboard the Titanic.
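For a sense of how a gallery service like the Titanic example above might be wired up, here is a hedged sketch; the `XgboostModelArtifact` import path and the handler body are assumptions for illustration, not the gallery's actual code:

```python
import bentoml
import xgboost as xgb
from bentoml.adapters import DataframeInput
from bentoml.frameworks.xgboost import XgboostModelArtifact

@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([XgboostModelArtifact('model')])
class TitanicSurvivalService(bentoml.BentoService):

    @bentoml.api(input=DataframeInput(), batch=True)
    def predict(self, df):
        # Convert the incoming DataFrame of passenger features to XGBoost's
        # DMatrix format and return predicted survival probabilities.
        return self.artifacts.model.predict(xgb.DMatrix(df))
```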

