Advent of 2022, Day 22 – Batch endpoints for batch scoring
This article is originally published at https://tomaztsql.wordpress.com
In the series of Azure Machine Learning posts:
- Dec 01: What is Azure Machine Learning?
- Dec 02: Creating Azure Machine Learning Workspace
- Dec 03: Understanding Azure Machine Learning Studio
- Dec 04: Getting data to Azure Machine Learning workspace
- Dec 05: Creating compute and cluster instances in Azure Machine Learning
- Dec 06: Environments in Azure Machine Learning
- Dec 07: Introduction to Azure CLI and Python SDK
- Dec 08: Python SDK namespaces for workspace, experiments and models
- Dec 09: Python SDK namespaces for environment, and pipelines
- Dec 10: Connecting to client using Python SDK namespaces
- Dec 11: Creating Pipelines with Python SDK
- Dec 12: Creating jobs
- Dec 13: Automated ML
- Dec 14: Registering the models
- Dec 15: Getting to know MLflow
- Dec 16: MLflow in action with xgboost
- Dec 17: Building responsible AI dashboard with Python SDK
- Dec 18: Statistical analysis, plotting graphs and feature engineering
- Dec 19: Statistical analysis and ML comparison of prediction models
- Dec 20: Handling kernels, python packages, YAML files in notebooks and keeping structure and good practices
- Dec 21: Using Azure Machine Learning terminal
Batch endpoints are a great and simple way to run inference over large volumes of data. They simplify the process of hosting your models for batch scoring.
First, we import the needed Python libraries:
from azure.ai.ml import MLClient, Input
from azure.ai.ml.entities import (
    BatchEndpoint,
    BatchDeployment,
    Model,
    Environment,
    BatchRetrySettings,
    CodeConfiguration,
)
from azure.identity import DefaultAzureCredential
from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
import random
import string
Once we have the packages covered and the workspace configured (for more details, follow the notebook on GitHub), we define a unique endpoint name and the endpoint configuration:
# Creating a unique endpoint name by including a random suffix
allowed_chars = string.ascii_lowercase + string.digits
endpoint_suffix = "".join(random.choice(allowed_chars) for x in range(5))
endpoint_name = "mnist-batch-" + endpoint_suffix
# endpoint configuration
endpoint = BatchEndpoint(
    name=endpoint_name,
    description="A batch endpoint for scoring images from the MNIST dataset.",
    tags={"type": "deep-learning"},
)
And following this, we create the endpoint:
# creation
ml_client.begin_create_or_update(endpoint).result()
This is followed by the model registration. Make sure to check the data/Day22 folder to get the model files.
model_name = "mnist-classification-torch"
model_local_path = "./Day22/model/"
if not any(filter(lambda m: m.name == model_name, ml_client.models.list())):
    print(f"Model {model_name} is not registered. Creating...")
    model = ml_client.models.create_or_update(
        Model(name=model_name, path=model_local_path, type=AssetTypes.CUSTOM_MODEL)
    )

# Let's get a reference to the model:
model = ml_client.models.get(name=model_name, label="latest")
We also need compute for the deployment. The compute creation script follows the same pattern:
from azure.ai.ml.entities import AmlCompute

compute_name = "cpu-cluster"
if not any(filter(lambda m: m.name == compute_name, ml_client.compute.list())):
    print(f"Compute {compute_name} is not created. Creating...")
    compute_cluster = AmlCompute(
        name=compute_name,
        description="CPU cluster compute",
        min_instances=0,
        max_instances=1,
    )
    ml_client.compute.begin_create_or_update(compute_cluster).result()
And we define the environment:
env = Environment(
    conda_file="./Day22/environment/conda.yaml",
    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
)
And finally, we configure the deployment:
deployment = BatchDeployment(
    name="mnist-torch-dpl",
    description="A deployment using Torch to solve the MNIST classification dataset.",
    endpoint_name=endpoint_name,
    model=model,
    code_configuration=CodeConfiguration(
        code="./Day22/code/", scoring_script="batch_driver.py"
    ),
    environment=env,
    compute=compute_name,
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=10,
    output_action=BatchDeploymentOutputAction.APPEND_ROW,
    output_file_name="predictions.csv",
    retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
    logging_level="info",
)
And create the deployment:
ml_client.batch_deployments.begin_create_or_update(deployment).result()
For the last part, you invoke the endpoint to start the scoring job.
input = Input(
    type="uri_folder",
    path="https://pipelinedata.blob.core.windows.net/sampledata/mnist",
)

job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint_name,
    input=input,
)
Once the batch scoring job completes and you have the predictions, you can run any further analysis on them.
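As a small sketch of such analysis: with APPEND_ROW, predictions.csv has no header, and each row holds the values the scoring script returned. Assuming a two-column "file, label" format (check the actual batch_driver.py output), counting predicted labels could look like this:

```python
import csv
import io

# Hypothetical predictions.csv contents in the assumed "file, label" format.
sample = "0.png, 7\n1.png, 2\n2.png, 7\n"

# Tally how often each predicted label occurs.
counts = {}
for file_name, label in csv.reader(io.StringIO(sample), skipinitialspace=True):
    counts[label] = counts.get(label, 0) + 1

print(counts)  # distribution of predicted labels
```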
Complete set of code, documents, notebooks, and all of the materials will be available at the GitHub repository: https://github.com/tomaztk/Azure-Machine-Learning
Happy Advent of 2022!