Advent of 2022, Day 11 – Creating Pipelines with Python SDK
This article is originally published at https://tomaztsql.wordpress.com
In the series of Azure Machine Learning posts:
- Dec 01: What is Azure Machine Learning?
- Dec 02: Creating Azure Machine Learning Workspace
- Dec 03: Understanding Azure Machine Learning Studio
- Dec 04: Getting data to Azure Machine Learning workspace
- Dec 05: Creating compute and cluster instances in Azure Machine Learning
- Dec 06: Environments in Azure Machine Learning
- Dec 07: Introduction to Azure CLI and Python SDK
- Dec 08: Python SDK namespaces for workspace, experiments and models
- Dec 09: Python SDK namespaces for environment, and pipelines
- Dec 10: Connecting to client using Python SDK namespaces
A pipeline is a set of instructions (a workflow) for executing a particular machine learning task. The idea behind pipelines is to help teams of data scientists and machine learning engineers standardize their workflow and incorporate best practices for preparing data, training models, executing them, and deploying them. Pipelines improve efficiency and make the workflow reusable.
The idea is to split a machine learning process into smaller tasks, a multistep workflow, where each step is a separate component that can be developed, upgraded, optimised, configured, automated, and deleted independently. These steps, connected through interfaces, form a workflow.
Pipelines can be created using a Designer, with Python SDK or with Azure CLI.
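As a sketch of the CLI route (assuming the Azure CLI v2 with the ml extension installed; the file and resource names here are placeholders), a pipeline defined in a YAML file can be submitted as a job:

```shell
# Submit a pipeline defined in YAML as a job (Azure CLI v2, ml extension).
# "pipeline.yml", "my-rg" and "my-ws" are placeholder names.
az ml job create --file pipeline.yml --resource-group my-rg --workspace-name my-ws
```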
Under Assets, there are pipeline jobs, pipeline endpoints, and pipeline drafts. Pipeline jobs are multi-step, standardised ways to perform any type of machine learning task. Pipeline endpoints invoke those jobs from external systems and can be called repeatedly, for example in batch scoring and retraining scenarios.
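As a sketch (assuming the SDK v1 azureml-pipeline package; the endpoint name and description are placeholders), a built pipeline can be published behind a pipeline endpoint so external systems can invoke it:

```python
from azureml.pipeline.core import PipelineEndpoint

# Publish an existing pipeline behind an endpoint (names are placeholders).
pipeline_endpoint = PipelineEndpoint.publish(
    workspace=ws,                       # an azureml.core.Workspace object
    name="batch-scoring-endpoint",
    pipeline=pipeline,                  # a previously built Pipeline object
    description="Endpoint for batch scoring and retraining"
)
```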
Now, let’s focus on creating pipelines.
Using Python SDK
Creating a pipeline with the following code:
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline
pipeline = Pipeline(workspace=ws, steps=[batch_score_step])
pipeline_run = Experiment(ws, "Tutorial-Batch-Scoring").submit(pipeline)
This creates a new pipeline for batch scoring.
And with the:
pipeline_run
we can check the status of the pipeline.
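To block until the run finishes, or to poll its state, the PipelineRun object offers wait_for_completion and get_status; a short sketch, assuming the pipeline_run from above:

```python
# Stream logs and block until the pipeline run finishes.
pipeline_run.wait_for_completion(show_output=True)

# Or poll the current state ("Running", "Finished", "Failed", ...).
print(pipeline_run.get_status())
```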
The batch scoring step itself was created using the Python SDK and the ParallelRunStep class:
from azureml.pipeline.steps import ParallelRunStep
from datetime import datetime
import uuid
parallel_step_name = "batchscoring-" + datetime.now().strftime("%Y%m%d%H%M")
label_config = label_ds.as_named_input("labels_input").as_mount("/tmp/{}".format(str(uuid.uuid4())))
batch_score_step = ParallelRunStep(
    name=parallel_step_name,
    inputs=[input_images.as_named_input("input_images")],
    output=output_dir,
    arguments=["--model_name", "inception",
               "--labels_dir", label_config],
    side_inputs=[label_config],
    parallel_run_config=parallel_run_config,
    allow_reuse=False
)
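The parallel_run_config referenced above is built separately with ParallelRunConfig; a minimal sketch, where the source directory, script name, environment, and compute target are placeholder assumptions:

```python
from azureml.pipeline.steps import ParallelRunConfig

# Minimal sketch of a parallel run configuration (all names are placeholders).
parallel_run_config = ParallelRunConfig(
    source_directory="scripts",          # folder containing the scoring script
    entry_script="batch_scoring.py",     # script run for each mini-batch
    mini_batch_size="20",                # number of files per mini-batch
    error_threshold=10,                  # failures tolerated before aborting
    output_action="append_row",          # collect results into a single file
    environment=env,                     # an azureml.core.Environment
    compute_target=compute_target,       # the cluster to run on
    node_count=2
)
```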
Complete code is available on GitHub and consists of Notebook, YAML file and Python code.
All three files are prefixed with “Day11-” and are stored in “notebook” folder.
Designer
Another way to create pipelines is by using the Designer. Under Assets, Pipelines, click “+ New pipeline” and you will be directed to the Designer.
You can create a pipeline from prebuilt pipelines or simply create a new one.
After completion, the pipeline will also appear under Pipelines. Note that creating, submitting, and running a pipeline requires an available compute target. This applies to the Python SDK, the Designer, and the Azure CLI.
Tomorrow, we will look into creating a job and submitting a job.
Complete set of code, documents, notebooks, and all of the materials will be available at the GitHub repository: https://github.com/tomaztk/Azure-Machine-Learning
Happy Advent of 2022!