

[Episode 4]: MLOps on Azure using MLflow

By admin, Sep 29, 2021

MLOps Series - GainInsights

Previously, in the MLOps Series, we covered deployment of models on AWS. In this article, let’s look at MLOps on Azure using MLflow.

As we already know, model deployment can be a complex but important cog in the MLOps lifecycle, and a seamless deployment is key to scaling analytics use cases. With that, let's get straight to the deployment steps on Azure.

Deploying ML model on Azure

A prerequisite for the deployment is installing azureml-sdk in your Python environment. Once installed, let's proceed with the five-step process for Azure:

1. Configuring Azure

Here, we use MLflow's functionality to build a container image for the model to be hosted in. Then, we push it to Azure Container Instances (ACI).

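As a concrete sketch of this step, the snippet below connects to a workspace and builds the container image. It assumes the pre-2.0 mlflow.azureml API (mlflow.azureml.build_image) and a config.json downloaded from the Azure portal into the working directory; the model and image names are placeholders.

```python
# Sketch of step 1, assuming the mlflow 1.x azureml API and a config.json
# downloaded from the Azure portal. "sklearn-model"/"sklearn-image" are
# placeholder names.

def build_model_image(model_uri):
    """Connect to an Azure ML workspace and build a container image for the model."""
    # Imported lazily so the sketch can be read without the Azure SDKs installed.
    import mlflow.azureml
    from azureml.core import Workspace

    # Loads workspace details (subscription, resource group) from ./config.json
    workspace = Workspace.from_config()

    # Build an Azure-compatible container image wrapping the MLflow model
    model_image, azure_model = mlflow.azureml.build_image(
        model_uri=model_uri,          # e.g. "runs:/<run_id>/model"
        workspace=workspace,
        model_name="sklearn-model",   # placeholder registry name
        image_name="sklearn-image",
        synchronous=True,             # block until the image build completes
    )
    return model_image, azure_model

# Usage: model_image, azure_model = build_model_image("runs:/<run_id>/model")
```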

2. Deploying an ML Model on Azure (dev stage)

Here, a built-in azureml-sdk module can be used to deploy a model on Azure. However, this is a development-stage deployment, so the model is not production-ready: its computational resources are limited. One useful piece of functionality Azure provides is the ACI webservice, which is designed for debugging and testing models under development, and is therefore well suited to this stage. We are going to deploy an ACI webservice instance based on the model image we just created.

Run the following command:

from azureml.core.webservice import AciWebservice, Webservice

aci_service_name = "sklearn-model-dev"
aci_service_config = AciWebservice.deploy_configuration()
aci_service = Webservice.deploy_from_image(
    name=aci_service_name,
    image=model_image,
    deployment_config=aci_service_config,
    workspace=workspace)
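Deployment is asynchronous, so the returned service object is typically polled until it finishes. A minimal sketch, assuming the azureml Webservice interface (wait_for_deployment, state, scoring_uri), where aci_service comes from the snippet above:

```python
# Poll an azureml webservice until its deployment completes, then return the
# HTTP endpoint it can be queried at.

def wait_until_healthy(service):
    """Block until the webservice is deployed, then return its scoring URI."""
    service.wait_for_deployment(show_output=True)  # streams deployment logs
    if service.state != "Healthy":
        raise RuntimeError(f"Deployment ended in state {service.state}")
    return service.scoring_uri

# Usage: scoring_uri = wait_until_healthy(aci_service)
```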

3. Making predictions

Once deployment has finished, the model is ready to be queried via an HTTP request. Since the service is still in the development stage, this is how you verify that the model works once hosted in the cloud.
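A minimal sketch of such a query using only the standard library. It assumes the service accepts feature data in the pandas "split" JSON orientation, as MLflow model servers commonly do; the endpoint URL and column names are placeholders.

```python
# Query a deployed model endpoint over HTTP. build_payload serializes feature
# rows; query_endpoint POSTs them and parses the JSON response.
import json
import urllib.request

def build_payload(columns, rows):
    """Serialize feature rows into the pandas 'split' JSON orientation."""
    return json.dumps({"columns": columns, "data": rows})

def query_endpoint(scoring_uri, payload):
    """POST the JSON payload to the scoring URI and return the parsed response."""
    request = urllib.request.Request(
        scoring_uri,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Usage (hypothetical URI):
# preds = query_endpoint("http://<aci-host>/score",
#                        build_payload(["f1", "f2"], [[1.0, 2.0]]))
```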

4. Deploying to production

Here, MLflow's Azure module can be used to deploy the model to production by creating a container instance (or any other deployment configuration provided, such as Azure Kubernetes Service).

MLflow provides Azure support and can deploy models directly, using a container instance by default. To execute, run the following, replacing the names as appropriate:

azure_service, azure_model = mlflow.azureml.deploy(
    model_uri,
    workspace,
    service_name="sklearn-logreg",
    model_name="log-reg-model",
    synchronous=True)

5. Switching models

MLflow does not provide explicit functionality to switch models, so you must delete the service and recreate it with another model run.
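The delete-and-recreate pattern can be sketched as follows, again assuming the mlflow 1.x azureml API; the service and model names are the placeholders used earlier.

```python
# Replace a deployed model by deleting the existing webservice and redeploying
# from a new MLflow run. Names mirror the earlier snippets and are placeholders.

def switch_model(workspace, new_model_uri, service_name="sklearn-logreg"):
    """Delete the existing webservice, then redeploy it from a new model run."""
    # Imported lazily so the sketch can be read without the Azure SDKs installed.
    import mlflow.azureml
    from azureml.core.webservice import Webservice

    # Tear down the current service, if one exists under that name
    try:
        Webservice(workspace, service_name).delete()
    except Exception:
        pass  # no existing service to remove

    # Recreate the service from the new run's model
    service, _ = mlflow.azureml.deploy(
        new_model_uri,
        workspace,
        service_name=service_name,
        model_name="log-reg-model",
        synchronous=True,
    )
    return service
```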

Similarly, models can be deployed on other cloud platforms such as AWS and Google Cloud Platform.

Check out the earlier articles in this series to learn how to install MLflow and implement MLOps using MLflow.

Author

Data engineering team
GainInsights

info@gain-insights.com

