
[Episode 3]: MLOps on AWS using MLflow

By admin, Sep 29, 2021

MLOps

In our earlier articles, we covered the installation and implementation of MLOps using MLflow.

For any business, seamless deployment of ML models into production is the key to success of its live analytics use cases. In this article, we will learn about deploying ML models on AWS (Amazon Web Services) using MLflow and also look at different ways to productionize them. Subsequently, we will explore the same process on the two other popular platforms: Azure and GCP. Let’s begin.

MLOps on Azure, AWS and GCP

Deploying an ML model on AWS: Prerequisites

- AWS Command Line Interface (CLI) installed and credentials configured; once the credentials are verified, the AWS CLI can connect to your AWS workspace (a quick sanity check is sketched below)
- An Identity and Access Management (IAM) execution role defined that grants SageMaker access to the S3 buckets
- Docker installed and working properly
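
Before going further, it is worth confirming that the configured credentials actually work. A minimal sanity check (a sketch using boto3, which reads the same credentials as the AWS CLI):

import boto3

# Verify that AWS credentials are configured: prints the account ID
# and the IAM identity that the CLI/SDK will use
identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])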

Once the above steps are complete, here is how we proceed with the deployment process on AWS:

1. Configuring AWS

Before any model can actually be deployed on SageMaker, the Amazon workspace needs to be set up. Models can be pushed from your local mlruns directory, similar to the process followed during local model deployment, but it is much more convenient and centralized to push all our runs to AWS and store them in a bucket. This way, all teams can access the models stored there.

In a sense, this acts as a “Model Registry” although it doesn’t offer the same functionality as the MLflow Model Registry. A single bucket will be sufficient to host all the MLflow runs.

From here, let’s pick a specific run and deploy it on SageMaker. To keep it simple, we will once again deploy the scikit-learn logistic regression model that we trained earlier. So with that, let’s create a simple bucket and name it as per convenience, say mlops-sagemaker-runs. We can either create it through the AWS CLI or through the AWS console in your browser.
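
If you prefer to create the bucket programmatically rather than through the console, a minimal sketch using boto3 (assuming the us-east-2 region and bucket name used later in this article):

import boto3

# Create the S3 bucket that will hold the MLflow runs
s3 = boto3.client("s3", region_name="us-east-2")
s3.create_bucket(
    Bucket="mlops-sagemaker-runs",
    CreateBucketConfiguration={"LocationConstraint": "us-east-2"},
)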

Alternatively, in the AWS Management Console, click Create bucket. Here, we have named our bucket mlops-sagemaker-runs. For the rest of the options, scroll down to the bottom and click Create Bucket. Once done, the created bucket can be seen in the list of buckets.

import subprocess

s3_bucket_name = "mlops-sagemaker-runs"
mlruns_direc = "./mlruns/"

# Sync the local mlruns directory to the S3 bucket via the AWS CLI
output = subprocess.run(
    ["aws", "s3", "sync", mlruns_direc, "s3://{}".format(s3_bucket_name)],
    stdout=subprocess.PIPE, encoding="utf-8")
print(output.stdout)
print("\nSaved to bucket: ", s3_bucket_name)

2. Deploying an ML Model to AWS SageMaker

Here, the MLflow SageMaker module can be used to push a model to SageMaker. After SageMaker creates an endpoint, the model is hosted there, utilizing the Docker image that we push to Amazon ECR (Elastic Container Registry).

To deploy an ML model on SageMaker, we will need to gather the app_name, model_uri, execution_role_arn, region and image_ecr_url.

SageMaker will host the model once we get to deployment. For that, the MLflow serving container image first needs to be built and pushed to Amazon ECR from the terminal.
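
Assuming MLflow 1.x (consistent with the mlflow-pyfunc:1.10.0 image tag used below), the CLI command that builds the default mlflow-pyfunc serving image and pushes it to your ECR registry is:

mlflow sagemaker build-and-push-container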

Once the image is pushed, the new container can be seen in the portal as we navigate to Amazon ECR.

import boto3
import mlflow.sagemaker as mfs
import json

app_name = "mlops-sagemaker"
execution_role_arn = ("arn:aws:iam::180072566886:role/service-role/"
                      "AmazonSageMaker-ExecutionRole-20181112T142060")
image_ecr_url = "180072566886.dkr.ecr.us-east-2.amazonaws.com/mlflow-pyfunc:1.10.0"
region = "us-east-2"
s3_bucket_name = "mlops-sagemaker-runs"
experiment_id = "8"
run_id = "1eb809b446d949d5a70a1e22e4b4f428"
model_name = "log_reg_model"

# S3 URI of the logged model artifacts for the chosen run
model_uri = "s3://{}/{}/{}/artifacts/{}/".format(
    s3_bucket_name, experiment_id, run_id, model_name)

This will set up all of the parameters that you will use to run the deployment code.

Now, let’s look at the code for deployment:

# Create a new SageMaker endpoint hosting the model
mfs.deploy(app_name=app_name,
           model_uri=model_uri,
           execution_role_arn=execution_role_arn,
           region_name=region,
           image_url=image_ecr_url,
           mode=mfs.DEPLOYMENT_MODE_CREATE)
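
The deploy call can take several minutes to complete. As a rough sketch (reusing the app_name and region variables defined above), you can check whether the endpoint is live with boto3:

# Poll the endpoint status until it reports "InService"
sm_client = boto3.client("sagemaker", region_name=region)
status = sm_client.describe_endpoint(EndpointName=app_name)["EndpointStatus"]
print("Endpoint status:", status)   # e.g. "Creating", "InService" or "Failed"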

3. Making predictions

Once the model has been deployed and is ready to serve, we can use Boto3 to query the model and receive predictions.
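
As a minimal sketch (assuming the endpoint name matches the app_name above, a hypothetical three-feature input, and the pandas split-oriented JSON format that MLflow 1.x scoring containers accept), a query might look like this:

import json
import boto3
import pandas as pd

# Hypothetical input row; the column names and values must match the model's training data
query_df = pd.DataFrame([[0.1, 0.2, 0.3]], columns=["f1", "f2", "f3"])

client = boto3.client("sagemaker-runtime", region_name="us-east-2")
response = client.invoke_endpoint(
    EndpointName="mlops-sagemaker",                   # the app_name used in mfs.deploy()
    ContentType="application/json; format=pandas-split",
    Body=query_df.to_json(orient="split"),
)
predictions = json.loads(response["Body"].read().decode("utf-8"))
print(predictions)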

4. Switching Models

MLflow provides functionality that enables swapping a deployed model with a new one. SageMaker essentially updates the endpoint with the new model you are trying to deploy.
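
A minimal sketch, reusing the variables from the deployment step above and assuming MLflow 1.x's replace mode for mlflow.sagemaker.deploy:

new_run_id = "<run ID of the newly trained model>"   # hypothetical: the run you want to switch to
new_model_uri = "s3://{}/{}/{}/artifacts/{}/".format(
    s3_bucket_name, experiment_id, new_run_id, model_name)

# Update the existing endpoint in place so it serves the new model
mfs.deploy(app_name=app_name,
           model_uri=new_model_uri,
           execution_role_arn=execution_role_arn,
           region_name=region,
           image_url=image_ecr_url,
           mode=mfs.DEPLOYMENT_MODE_REPLACE)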

MLflow provides explicit AWS SageMaker support in its operationalization code. We have seen how to upload runs to an S3 bucket and how to create and push an MLflow Docker container image for AWS SageMaker to use when operationalizing your models.

This completes the process of deploying ML models on AWS. In the next article, we will look at how ML models can be deployed on other platforms, Microsoft Azure and Google Cloud Platform, using MLflow.

Author

Data engineering team
GainInsights

info@gain-insights.com

