Advent of 2022, Day 12 – Creating jobs
This article is originally published at https://tomaztsql.wordpress.com
In the series of Azure Machine Learning posts:
- Dec 01: What is Azure Machine Learning?
- Dec 02: Creating Azure Machine Learning Workspace
- Dec 03: Understanding Azure Machine Learning Studio
- Dec 04: Getting data to Azure Machine Learning workspace
- Dec 05: Creating compute and cluster instances in Azure Machine Learning
- Dec 06: Environments in Azure Machine Learning
- Dec 07: Introduction to Azure CLI and Python SDK
- Dec 08: Python SDK namespaces for workspace, experiments and models
- Dec 09: Python SDK namespaces for environment, and pipelines
- Dec 10: Connecting to client using Python SDK namespaces
- Dec 11: Creating Pipelines with Python SDK
An Azure ML job executes a task against a specified compute target. Configuring a job is also how you scale out model training, since both single-node and distributed training are available.
A simple job is a command executed inside a Docker container, and it can be extended with parameter sweeping by specifying the sweep in the job itself. A job needs the following components:
- Compute
- Data source
- Code source
- Environment
- Inputs and outputs
Jobs also enable systematic tracking for your ML experimentation and workflows. Once a job is created, Azure ML maintains a run record for the job that includes the metadata, any metrics, logs, and artefacts generated during the job, code that was executed, and the Azure ML environment used.
In Studio, click on Jobs (under Assets in the navigation bar) and select “All jobs”.
You will see that from this point on, the process is relatively straightforward. You only need to have all the “ingredients” prepared in advance.
Compute
For compute, I will be using the compute instance we already created on Day 5.
Select compute type: compute instance
Select Azure ML compute instance: AMLBlog2022-ds12-v2
We proceed to the next step.
Environment
For this type of job, we will be using a curated environment with LightGBM.
Select Environment type: Curated environments
Choose environment: LightGBM 3.2
We proceed to the next step.
Job Settings
In this step, we will define the model, data, etc.
Name: Training IRIS
Experiment name: default
Code: Choose local file -> I am adding file Day4_train.py (file is on Github)
Enter the command to start: “python Day4_train.py“
Under the inputs, I have added the data file:
There are no environment variables, so I leave this section empty. For distributed training:
Distributed Type: Mpi
Number of nodes: 1
And add some tags.
Review
After completing all the steps, you can finish creating the job with a final review.
The YAML specs (note the space between `--data` and the input placeholder in the command):

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python Day4_train.py --data ${{inputs.Iris_dataset}}
environment: azureml:AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu:48
compute: azureml:AMLBlog2022-ds12-v2
resources:
  instance_count: 1
experiment_name: lightgbm-iris-toy-demo
display_name: Training IRIS
```
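To illustrate what the `${{inputs.Iris_dataset}}` placeholder in the command does, here is a minimal sketch of how such a placeholder resolves to a mounted path at runtime. The mount path is hypothetical, and this is only an illustration of the substitution mechanism, not Azure ML's actual implementation:

```python
import re

# Hypothetical mount path, standing in for wherever Azure ML mounts the input.
inputs = {"Iris_dataset": "/mnt/azureml/inputs/iris.csv"}

cmd = "python Day4_train.py --data ${{inputs.Iris_dataset}}"

def resolve(cmd: str, inputs: dict) -> str:
    """Replace each ${{inputs.<name>}} placeholder with its resolved value."""
    return re.sub(
        r"\$\{\{\s*inputs\.(\w+)\s*\}\}",
        lambda m: inputs[m.group(1)],
        cmd,
    )

print(resolve(cmd, inputs))
# python Day4_train.py --data /mnt/azureml/inputs/iris.csv
```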
Once the job is created, you can view it under jobs.
This job can now be run, and each run can be analysed against the same set of metrics.
Tomorrow we will look into automated ML.
The complete set of code, documents, notebooks, and all of the materials will be available at the GitHub repository: https://github.com/tomaztk/Azure-Machine-Learning
Happy Advent of 2022!