Model Serving Made Easy

The fastest way to manage, deploy, and serve machine learning models from any framework, powered by our open-source project BentoML.

The most powerful Model Serving Platform

Unified format for deployment

Unified model packaging format enabling both online and offline serving on any platform.
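
To make this concrete, here is a minimal sketch of what packaging a model can look like, assuming the BentoML 0.x Python API and a scikit-learn model; the IrisClassifier service and its "model" artifact name are hypothetical:

```python
import bentoml
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact

# A hypothetical prediction service definition. The saved bundle backs an
# online API server and can equally be loaded for offline batch scoring.
@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([SklearnModelArtifact("model")])
class IrisClassifier(bentoml.BentoService):
    @bentoml.api(input=DataframeInput(), batch=True)
    def predict(self, df):
        # Delegate to the packaged scikit-learn model
        return self.artifacts.model.predict(df)

# Packaging a trained model then amounts to:
#   svc = IrisClassifier()
#   svc.pack("model", trained_model)
#   svc.save()  # one versioned bundle for any serving platform
```

The saved bundle can then be served locally with the bentoml serve CLI command or containerized for deployment.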

High performance model serving

100x the throughput of your regular Flask-based model server, thanks to our advanced micro-batching mechanism. Read about the benchmarks here.
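
As a rough sketch of how micro-batching is exposed in the API, assuming the BentoML 0.x Python API (the mb_max_batch_size and mb_max_latency parameter names follow the 0.x micro-batching options and may differ in other versions; the service below is hypothetical):

```python
import bentoml
from bentoml.adapters import JsonInput

class BatchingService(bentoml.BentoService):
    # batch=True lets the API server group many concurrent HTTP requests
    # into a single model invocation; the mb_* options bound the batch
    # size and the extra queueing latency (in milliseconds) it may add.
    @bentoml.api(input=JsonInput(), batch=True,
                 mb_max_batch_size=1000, mb_max_latency=300)
    def predict(self, parsed_json_list):
        # With batch=True the handler receives a list of inputs and must
        # return a list of results of the same length.
        return [len(str(j)) for j in parsed_json_list]
```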

DevOps best practices baked in

Deliver high-quality prediction services that speak the DevOps language and integrate perfectly with common infrastructure tools.

Organizations using and contributing to BentoML

Built to work with DevOps tools

Knative

Cloud Run

BentoML for teams (beta)

Model Management

The central hub for managing trained models, prediction services, and collaboration between data science, engineering, and DevOps.

One-click Deployment

Effortlessly deploy and manage your API endpoints on the cloud providers you trust, all with a single command.

BentoML supports all major ML frameworks

Built with BentoML

Featured

Sentiment analysis with BERT

This service uses a BERT model trained with the TensorFlow framework to predict the sentiment of movie reviews.

Image Classification

This service uses ResNet50 from the ONNX Model Zoo to identify objects in a given image.

Titanic Survival Prediction

This prediction service uses a model trained with the XGBoost framework to predict the survival rate of a given passenger on the Titanic.

Check out the gallery
