
[Episode 2]: Implementing MLOps using MLflow

By admin, Sep 29, 2021

GainInsights MLOps series

Previously in the MLOps series, we covered how to install MLflow. In this article, we explore the steps involved in the implementation process.

We will see how the MLflow components can be used to make training, testing, tracking and re-building of models easier throughout the ML lifecycle, with a high degree of collaboration between data scientists, data engineers and ML engineers (and those in shared roles). Let's begin.

MLflow UI

In order to use MLflow, a virtual environment needs to be created, followed by the installation of MLflow; a minimal setup sketch is shown below.
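The following commands are one way to do this (the environment name and the extra packages are examples, not prescribed by MLflow):

python -m venv mlflow-env
source mlflow-env/bin/activate
pip install mlflow scikit-learn pandas
mlflow ui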

Here is how the MLflow UI looks while running on localhost:

[Screenshot: MLflow UI]

On the left panel we see Experiments, where different runs of the same problem can be grouped and managed. Let's explore the implementation step by step across its modules: MLflow Tracking, MLflow Projects, MLflow Models and Model Registry.

1. MLflow Tracking

MLflow Tracking is an API for logging parameters, code versions, metrics and artifacts while running machine learning code, and for visualising the results. It can be used in any environment to log results to local files or to a server, and to compare multiple runs. Data science teams can also use the tracking module to compare results from different users.

Training and Logging

Let's create a file named train.py that contains the following pieces: a selected list of features, the model that needs to be trained, and the accuracy metrics.

import warnings
import sys
import logging

import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.metrics import classification_report, roc_auc_score, f1_score, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
import mlflow
import mlflow.sklearn

logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)

mlflow.set_tracking_uri('http://localhost:5000')
mlflow.set_experiment(experiment_name='Churn Model')

tags = {"team": "DataScience",
        "dataset": "Telecom",
        "release.version": "2.2.2"}

def eval_metrics(actual, pred):
    rmse = np.sqrt(mean_squared_error(actual, pred))
    mae = mean_absolute_error(actual, pred)
    r2 = r2_score(actual, pred)
    return rmse, mae, r2

if __name__ == "__main__":
    warnings.filterwarnings("ignore")

    # Read the CSV file from the URL
    csv_url = (
        "http://archive.ics.uci.edu/ml/machine-learning/databases/telecom_data.csv")
    try:
        data = pd.read_csv(csv_url, sep=";")
    except Exception as e:
        logger.exception(
            "Unable to download training & test CSV, check your internet connection. Error: %s", e
        )

    # Separate the features and the target, then scale the features to [0, 1]
    X = data.drop('Churn', axis=1)
    scaler = MinMaxScaler().fit(X)
    X = pd.DataFrame(scaler.transform(X), columns=X.columns)
    y = data['Churn']

    # Split the data into training and test sets (80/20 split)
    x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    with mlflow.start_run(run_name='LR'):

        mlflow.set_tags(tags)

        model = LogisticRegression()
        model.fit(x_train, y_train)
        predictions = model.predict(x_test)

        report = classification_report(y_test, predictions)
        roc = roc_auc_score(y_test, predictions)
        f1 = f1_score(y_test, predictions, average='macro')
        accuracy = accuracy_score(y_test, predictions)

        print("AUC: %s" % roc)
        print("Accuracy: %s" % accuracy)
        print("F1: %s" % f1)

        # Log a parameter and the evaluation metrics for this run
        mlflow.log_param("test_size", 0.2)
        mlflow.log_metric("AUC", roc)
        mlflow.log_metric("Accuracy", accuracy)
        mlflow.log_metric("F1", f1)

        # Log the trained model and a copy of the training script as artifacts
        mlflow.sklearn.log_model(model, "model")
        mlflow.log_artifact(local_path='./train.py', artifact_path='code')

Now, the tracking URI and the name of the Experiment need to be set:

mlflow.set_tracking_uri('http://localhost:5000')
mlflow.set_experiment(experiment_name='Churn Model')
tags = {"team": "DataScience",
        "release.version": "2.2.2"}

In this step, we tell MLflow that a server is up and running on our localhost at port 5000 and that an Experiment space named Churn Model should be used. In addition, tags can be attached as extra metadata for the experiment run.

with mlflow.start_run(run_name='LR'):

    mlflow.set_tags(tags)
    # .....
    mlflow.log_metric("AUC", roc)
    mlflow.log_metric("Accuracy", accuracy)
    mlflow.log_metric("F1", f1)
    mlflow.sklearn.log_model(model, "model")
    mlflow.log_artifact(local_path='./train.py',
                        artifact_path='code')

With .set_tags, we link the current run to the tags, and with .log_param we pass a parameter as a (name, value) pair, similar to how .log_metric records the model's metric results.

To save the trained model for later use, we use .log_model; MLflow creates metadata around it and exports the model as a pickle file. We can also log custom artifacts using .log_artifact and use it to store images related to the training phase, external resources, datasets, or even a copy of the code used to generate the run.
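A logged model can later be reloaded from the tracking server for further use. A minimal sketch (the run ID is a placeholder to be copied from the MLflow UI):

import mlflow.sklearn

# <run_id> is a placeholder for the actual run ID shown in the MLflow UI
loaded = mlflow.sklearn.load_model("runs:/<run_id>/model")
print(loaded.predict(x_test))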

After running the code using python train.py in the same directory, the different runs are saved in the mlruns folder.

View the MLflow UI by running mlflow ui in the command prompt.

[Screenshot: MLflow UI]

On the UI we see the metrics, parameters, and tags we set; it also records other details automatically, such as the user that ran the code, the start time in local time, the type of model used, and the Git commit under which the code was run (if you have a Git repo set up).

If we click on the model, we can see additional information such as the saved model along with our custom artifacts. The UI also shows a preview of those objects, and they can be downloaded locally.

Models Comparison

Let’s compare the runs to see the model performance using the logged metrics and maybe even custom images that we have saved as artifacts.

MLflow has a good comparison feature. To explore it, select all the models with the checkboxes and click Compare.

1) The first plot we see is a plotly scatter plot. Select the metrics and parameters to plot and compare.

Each dot shows the run name and parameters/metrics associated with it.

2) You can also view a contour plot to see how a metric changes under different combinations of pairs of parameters.

3) We have a parallel coordinates plot where all our metrics and parameters can be brought in.
If the model has hyperparameters, the parallel coordinates plot is particularly useful: it plots different combinations of parameters against the accuracy metrics, helping us find the best combination of parameters.

After comparing different models through the UI, let’s choose the model to be moved to production. Then we go to the experiment tab and register the model.
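Registration can also be done from code rather than the UI. A minimal sketch (the run ID and model name are placeholders matching the steps above):

import mlflow

# <run_id> is a placeholder for the run that produced the chosen model
result = mlflow.register_model("runs:/<run_id>/model", "Churn Model")
print(result.name, result.version)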

2. MLflow Projects

If we create a new project or clone an existing one, we can make it an MLflow project by simply adding two YAML files, viz., MLproject File and Conda environment file, to the root directory of the project.

Projects are a standard format for packaging reusable data science code. Each project is simply a directory with code and uses a descriptor file, or simply a convention, to specify its dependencies and how to run the code.

Projects can contain a conda.yaml file for specifying a Python Conda environment. When we use the MLflow Tracking API in a Project, MLflow automatically remembers the project version (for example, Git commit) and any parameters. We can easily run existing MLflow Projects from GitHub or your own Git repository, and chain them into multi-step workflows.
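As an illustration (the project name and entry point below are assumptions, not taken from the original project), a minimal MLproject file could look like this:

name: churn-model
conda_env: conda.yaml
entry_points:
  main:
    command: "python train.py"

The project can then be executed with mlflow run . from its root directory, or with a Git URL in place of the local path.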

When running an MLflow Project directory that does not contain an MLproject file, MLflow uses the following conventions to determine the project’s attributes:

i. The project’s name is the name of the directory.
ii. The Conda environment is specified in conda.yaml, if present. If no conda.yaml file is present, MLflow uses a Conda environment containing only Python when running the project.
iii. MLflow uses Python to execute entry points with the .py extension, and it uses bash to execute entry points with the .sh extension.

3. MLflow Models

MLflow Models offer a convention for packaging machine learning models in multiple flavors, along with a variety of tools to help deploy them. Each model is saved as a directory containing arbitrary files and a descriptor file that lists the several "flavors" the model can be used in. For example, a TensorFlow model can be loaded as a TensorFlow DAG or as a Python function to apply to input data.

# Directory written by mlflow.sklearn.save_model(model, “my_model”)
my_model/
├── MLmodel
├── model.pkl
├── conda.yaml
└── requirements.txt

Each MLflow Model is a directory containing arbitrary files together with an MLmodel file in the root of the directory that can define multiple flavors that the model can be viewed in.
Flavors are the key concept that makes MLflow Models powerful. They are a convention that deployment tools can use to understand the model, which makes it possible to write tools that work with models from any ML library without having to integrate each tool with each library.

This model can then be used with any tool that supports either the sklearn or python_function model flavor. Apart from the flavors field, the MLmodel YAML format can contain the following fields: time_created, run_id, signature, and input_example.
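For illustration, a rough sketch of the MLmodel file written for the scikit-learn model above (the versions, timestamp and run ID are placeholders):

artifact_path: model
flavors:
  python_function:
    loader_module: mlflow.sklearn
    model_path: model.pkl
    python_version: 3.8.10
  sklearn:
    pickled_model: model.pkl
    serialization_format: cloudpickle
    sklearn_version: 0.24.2
run_id: <run_id>
utc_time_created: '2021-09-29 10:00:00.000000'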

4. Model Registry

There is a button where we can register the model so we can serve it as an API. If you try it, you can create a name for your model, but you will probably see an error saying that the current store is not a supported URL for model registry data storage. This is because the MLflow backend has to be set up with one of the supported databases: PostgreSQL, MySQL, MS SQL or SQLite. If you have one of these, you only have to change the startup command to pass an SQLAlchemy database URL.
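For example, a sketch of such a startup command with a local SQLite file as the backend store (the file name, artifact root and port are arbitrary):

mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root ./mlruns --host 0.0.0.0 --port 5000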

Now, let's go back to the train.py file and run it again with some different hyperparameters so that we have different model versions for the same problem. After running them, the Churn Model experiment should look like this:

Now, on the Model tab, it should look like this:

And if we select version 2, we can move it into Production and set version 1 as Archived.
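The same stage transition can also be scripted through the registry client. A minimal sketch, assuming the model name and versions from the steps above:

from mlflow.tracking import MlflowClient

client = MlflowClient(tracking_uri='http://localhost:5000')
# Promote version 2 to Production and archive version 1
client.transition_model_version_stage(name="Churn Model", version=2, stage="Production")
client.transition_model_version_stage(name="Churn Model", version=1, stage="Archived")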

Making Predictions

As our last step, we want to make some predictions. MLflow already gives us an API for making the requests: in the Model Registry we can see a run ID and use it to call that particular model, but since we want to use our production model, we can use the code below:

import mlflow
import pandas as pd

logged_model = 'models:/Churn Model/Production'

data = pd.read_csv('telecom_data.csv')

# Load model as a PyFuncModel.
loaded_model = mlflow.pyfunc.load_model(logged_model)
print(loaded_model.predict(data))

The telecom_data.csv has two rows to predict and after running we get an array with each of the predictions.
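The Production model can also be exposed as a REST endpoint instead of being loaded in code. A sketch using the MLflow CLI (the port is arbitrary):

mlflow models serve -m "models:/Churn Model/Production" -p 1234

Requests can then be POSTed to http://127.0.0.1:1234/invocations.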

This completes the implementation of MLOps using MLflow on local storage. We can also deploy it on popular cloud platforms such as Amazon Web Services, Microsoft Azure and Google Cloud Platform, which we will cover in detail in our next series of articles.

Author

Data engineering team
GainInsights

info@gain-insights.com
