Model deployment#

After you have trained your model, you can deploy it to production. There are multiple ways to deploy your model, and each has its own advantages and disadvantages.

Prerequisites#

  1. You need to have OneTickML installed on the machine where you want to deploy your model.

  2. You need to know the address of the MLFlow server you used for model tracking (ex: http://172.16.1.89:5000).

  3. You need to know the MLFlow RUN_ID of the run that produced the model you want to deploy (ex: 264e377b01ef4f7f853a146a42fc3011). You can find it in the MLFlow UI on the server you used for model tracking.

  4. You need to know the name of the model you want to deploy (ex: CatBoostRegressor). You can find it in the MLFlow UI on the server you used for model tracking by opening the selected run and looking at the Model section (or query the tracking server programmatically, as in the sketch below).
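
If you prefer not to click through the MLFlow UI, the sketch below shows one way to look these values up from Python. It is only an illustration: "my-experiment" is a placeholder for the experiment name you trained under, and RUN_ID should be substituted with an actual run ID.

import mlflow
from mlflow.tracking import MlflowClient

# Point MLFlow at the tracking server you used for training
mlflow.set_tracking_uri("http://172.16.1.89:5000")

# List the runs of your experiment as a pandas DataFrame
# ("my-experiment" is a placeholder - use your own experiment name)
client = MlflowClient()
experiment = client.get_experiment_by_name("my-experiment")
runs = mlflow.search_runs(experiment_ids=[experiment.experiment_id])
print(runs[["run_id", "start_time", "status"]])

# Inspect the artifacts of a run to find the logged model name
# (e.g. wrapped_CatBoostRegressor)
for artifact in client.list_artifacts("RUN_ID"):  # substitute RUN_ID with yours
    print(artifact.path)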

Deployment options#

1. Local environment#

1.1. Predict directly in Python#

You can use the model directly in your Python code, but you must have the same environment as the one used to train the model. Sometimes it is not possible, or simply not convenient, to have all ML libraries installed on the machine where you want to use the model. In such cases, use one of the remote deployment options.

Example:

import mlflow
from onetick.ml.utils import restore_experiment_from_mlflow

# Set your MLFlow server address
mlflow.set_tracking_uri("http://172.16.1.89:5000")

# Load Experiment (substitute RUN_ID with yours)
LoadedExperiment = restore_experiment_from_mlflow(run_id="RUN_ID")
exp = LoadedExperiment()

# data_to_predict is a pandas DataFrame with the same columns as the training data from the Experiment datafeeds
df = exp.prepare_predict_data(data_to_predict)
results = exp.predict(df)

1.2. Predict from command line#

You can also use the model directly from the command line, but it is necessary to have the same environment as the one used for training the model.

Assuming that you dumped input_data from the previous example to an input_data.json file as JSON, like this:

data = input_data.to_json(orient='split')
with open("input_data.json", 'w', encoding="utf8") as fp:
    fp.write(data)

Use the following command to get predictions:

export MLFLOW_TRACKING_URI=http://172.16.1.89:5000
mlflow models predict -m runs:/$RUN_ID/wrapped_$MODEL_NAME -i input_data.json -o output_data.json --no-conda --content-type json
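
The command writes the predictions to output_data.json. The exact layout of that file depends on the MLFlow version and the model wrapper, so treat the following as a rough sketch for loading the results back into Python:

import json

# Load the predictions written by `mlflow models predict`.
# Depending on the MLFlow version, the file may contain a plain JSON list of
# predictions or an object wrapping them, so inspect the structure first.
with open("output_data.json", encoding="utf8") as fp:
    output = json.load(fp)

predictions = output["predictions"] if isinstance(output, dict) else output
print(predictions[:5])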

2. Remote deployment (REST API)#

2.1. REST API deployment#

You can deploy your model as a REST API using MLFlow’s mlflow models serve command. It starts a web server that serves predictions for your model, and you can use any HTTP client to send REST requests to it.

Example script to start the REST server:

export MLFLOW_TRACKING_URI=http://172.16.1.89:5000
mlflow models serve --no-conda -m runs:/$RUN_ID/wrapped_$MODEL_NAME -p 1234

This will start a web server on port 1234. You can send requests to the server using any HTTP client. Here is a Python example:

import requests

# input_data is pd.DataFrame, having the same columns as the data from Experiment datafeeds
data = input_data.to_json(orient='split')
result = requests.post("http://127.0.0.1:1234/invocations",
                       data=data,
                       headers={'Content-type': 'application/json',
                                'Accept': 'text/plain'}).json()
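
The shape of the response differs between MLFlow versions: older scoring servers return a plain JSON list of predictions, while newer ones wrap it in a "predictions" field. A minimal, hedged way to normalize the result:

import pandas as pd

# Normalize the REST response: older MLFlow scoring servers return a JSON list,
# newer ones return {"predictions": [...]}
predictions = result["predictions"] if isinstance(result, dict) else result
pred_df = pd.DataFrame({"prediction": predictions})
print(pred_df.head())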

2.2. Docker deployment#

You can also deploy your model as a Docker container that runs a local web server serving predictions for your model. First, you need to build a Docker image using the mlflow models build-docker command.

Example:

mlflow models build-docker -m runs:/$RUN_ID/wrapped_$MODEL_NAME -n $MODEL_NAME

This will create a Docker image named $MODEL_NAME. You can run this image using the docker run command.

Example:

docker run -p 1234:8080 $MODEL_NAME

This maps port 1234 on the host to the container’s serving port (MLFlow model images serve on port 8080 inside the container). You can then send requests with any HTTP client, following the Python requests example from the previous section.
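
Before sending prediction requests, you can check that the containerized server is up. MLFlow scoring servers expose a /ping endpoint that returns HTTP 200 once the model is loaded; a minimal sketch, assuming the port mapping from the docker run command above:

import requests

# Health check against the MLFlow scoring server inside the container:
# /ping returns HTTP 200 once the model has been loaded
response = requests.get("http://127.0.0.1:1234/ping")
print("Model server ready:", response.status_code == 200)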