# tune 0.1.1

This article was originally published at https://www.tidyverse.org/blog/

We’re pleased to announce the release of tune 0.1.1. tune is a tidy interface for optimizing model tuning parameters.

You can install it from CRAN with:

```
install.packages("tune")
```

You can see a full list of changes in the release notes. The release was originally motivated by changes in dplyr 1.0.0, although there are also a number of nice new features to talk about.

## Better `autoplot()`

The previous plot method produced what we refer to as a *marginal plot*: each tuning parameter is plotted against performance. That is probably the best we can do for non-regular grids (which tend to be the default in tune). Here’s an example using the Chicago train data:

```
library(tidymodels)

data(Chicago, package = "modeldata")

# Time-series resampling
set.seed(7898)
data_folds <-
  rolling_origin(
    Chicago,
    initial = 364 * 15,
    assess = 7 * 4,
    skip = 7 * 4,
    cumulative = FALSE
  )

svm_mod <-
  svm_rbf(cost = tune(), rbf_sigma = tune("kernel parameter")) %>%
  set_mode("regression") %>%
  set_engine("kernlab")

ctrl <- control_grid(save_pred = TRUE)

set.seed(2893)
non_regular_grid <-
  svm_mod %>%
  tune_grid(
    ridership ~ Harlem + Archer_35th,
    resamples = data_folds,
    grid = 36,
    control = ctrl
  )
```

```
autoplot(non_regular_grid, metric = "rmse") +
  ggtitle("old method, irregular grid")
```

Not bad, but it could be improved in a few ways:

- Both tuning parameters are generated on log scales, but the data are shown above in their natural units, so the values at the low end of each scale get compressed together.
- When no parameter ID is given, the parameter’s label (e.g. “Radial Basis Function sigma”) could be shown instead.

What happens when a regular (i.e. factorial) grid is used?

```
grid <-
  svm_mod %>%
  parameters() %>%
  grid_regular(levels = c(3, 12))
grid
```

```
## # A tibble: 36 x 2
##        cost `kernel parameter`
##       <dbl>              <dbl>
##  1 0.000977       0.0000000001
##  2 0.177          0.0000000001
##  3 32             0.0000000001
##  4 0.000977     0.000000000811
##  5 0.177        0.000000000811
##  6 32           0.000000000811
##  7 0.000977      0.00000000658
##  8 0.177         0.00000000658
##  9 32            0.00000000658
## 10 0.000977       0.0000000534
## # … with 26 more rows
```

```
set.seed(2893)
regular_grid <-
  svm_mod %>%
  tune_grid(
    ridership ~ Harlem + Archer_35th,
    resamples = data_folds,
    grid = grid,
    control = ctrl
  )
```

```
autoplot(regular_grid, metric = "rmse") +
  ggtitle("old method, regular grid")
```

This visualization could also be improved, since there might be a pattern in one parameter for each value of the other.

The new version of tune creates improved versions of both of these plots:

```
autoplot(non_regular_grid, metric = "rmse") +
  ggtitle("new method, irregular grid")
```

This tells a completely different story from the previous version, where the parameters were shown in their natural units.

The regular grid results are also much better and tell a cleaner story:

```
autoplot(regular_grid, metric = "rmse") +
  ggtitle("new method, regular grid")
```

Extra arguments can be passed when a numeric grouping column is used; these are given to `format()`. For example, to avoid scientific notation:

```
autoplot(regular_grid, metric = "rmse", digits = 3, scientific = FALSE) +
  ggtitle("Formatting for coloring column")
```

## A ggplot2 `coord` for plotting observed and predicted values

One helpful visualization of the fit of a regression model is to plot the true outcome value against the predictions. These *should* be on the same scale. Let’s look at such a plot:

```
best_values <- select_best(regular_grid, metric = "rmse")
best_values
```

```
## # A tibble: 1 x 3
##    cost `kernel parameter` .config
##   <dbl>              <dbl> <chr>
## 1    32             0.0152 Model30
```

```
holdout_predictions <-
  regular_grid %>%
  collect_predictions(parameters = best_values)
holdout_predictions
```

```
## # A tibble: 224 x 7
##    id     .pred  .row  cost `kernel parameter` ridership .config
##    <chr>  <dbl> <int> <dbl>              <dbl>     <dbl> <chr>
##  1 Slice1 17.8   5461    32             0.0152     19.6  Model30
##  2 Slice1 18.9   5462    32             0.0152     20.0  Model30
##  3 Slice1 17.5   5463    32             0.0152     20.4  Model30
##  4 Slice1  8.88  5464    32             0.0152     20.4  Model30
##  5 Slice1  3.03  5465    32             0.0152     20.1  Model30
##  6 Slice1  6.07  5466    32             0.0152      4.78 Model30
##  7 Slice1  4.48  5467    32             0.0152      3.26 Model30
##  8 Slice1 12.6   5468    32             0.0152     19.3  Model30
##  9 Slice1 15.6   5469    32             0.0152     19.3  Model30
## 10 Slice1 15.4   5470    32             0.0152     19.9  Model30
## # … with 214 more rows
```

```
ggplot(holdout_predictions, aes(x = ridership, y = .pred)) +
  geom_abline(lty = 2) +
  geom_point(alpha = 0.3)
```

This is very helpful but there are a few possible improvements. The new version of tune has `coord_obs_pred()` that produces a square plot with the same axes:

```
ggplot(holdout_predictions, aes(x = ridership, y = .pred)) +
  geom_abline(lty = 2) +
  geom_point(alpha = 0.3) +
  coord_obs_pred()
```

## Tuning engine parameters

Bruna Wundervald (from Maynooth University) gave a great presentation that used tidymodels packages. She ran into the problem that, if you wanted to tune parameters that were specific to the engine, you had to go through a lot of trouble to do so. This used to work well in a previous version of tune; unfortunately, we accidentally broke it, but now you can once again tune engine-specific parameters. One feature in this version of tune, along with the new 0.0.8 version of the dials package, is that we have added dials `parameter` objects for every parameter that users might want to tune with the existing engines that we support (this was not as difficult as it sounds).

To demonstrate, we’ll use the time series data above, but this time we’ll optimize the ranger parameters that Bruna was interested in.

Since parsnip has a pre-defined list of models and engines, we’ve gone ahead and set up the infrastructure for tuning most engine-specific values. For example, in the above example we could tune two regularization parameters specific to ranger.

```
rf_mod <-
  rand_forest(min_n = tune()) %>%
  set_mode("regression") %>%
  set_engine(
    "ranger",
    regularization.factor = tune(),
    regularization.usedepth = tune()
  )

# Are there dials objects to work with these?
rf_param <- parameters(rf_mod)
rf_param
```

```
## Collection of 3 parameters for tuning
##
##                       id          parameter type object class
##                    min_n                   min_n    nparam[+]
##    regularization.factor   regularization.factor    nparam[+]
##  regularization.usedepth regularization.usedepth    dparam[+]
```

There are parameter objects for these (and they keep their original names). You can adjust the ranges and values for these parameters using the `update()` function as you would for others. To see their underlying functions:

```
rf_param$object[[2]]
```

```
## Gain Penalization (quantitative)
## Range: [0, 1]
```
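As a quick sketch of adjusting one of these parameters, the range of the regularization factor could be narrowed before tuning. This assumes `regularization_factor()` is the dials constructor backing that parameter; the chosen range here is purely illustrative:

```r
# A sketch: narrow the range of ranger's regularization factor.
# The range c(0.25, 1) is an arbitrary example, not a recommendation.
rf_param <-
  rf_param %>%
  update(regularization.factor = regularization_factor(range = c(0.25, 1)))
```

After this, the updated parameter set can be passed to the `grid_*()` functions or to `tune_grid()` via its `param_info` argument.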

From here, the standard tools in the tune package can be used. You can use one of the `grid_*()` functions to create a grid of values, let `tune_grid()` create a set for you, or use Bayesian optimization to find appropriate values sequentially.

The new `autoplot()` method can also be used to produce nice visualizations of the relationship between performance and the parameters.

```
set.seed(4976)
ranger_params <-
  rf_mod %>%
  tune_grid(
    ridership ~ Harlem + Archer_35th,
    resamples = data_folds,
    grid = 10,
    control = ctrl
  )

autoplot(ranger_params, metric = "rmse") +
  theme(legend.position = "top")
```

Note that the tuning parameter labels (i.e. “Gain Penalization” instead of “`regularization.factor`”) are used.

I’m sure that we missed someone’s favorite engine-specific parameter so please put in a GitHub issue for dials to let us know.

## `.config` columns

When model tuning is conducted, the tune package now saves a new column in the output called `.config`. This column is a qualitative identification column for unique tuning parameter combinations. It often reflects what is being tuned. A value of `.config = "Recipe1_Model3"` indicates that the first recipe tuning parameter set is being evaluated in conjunction with the third set of model parameters. Here’s an example from the random forest model that we just fit:

```
ranger_params %>%
  collect_metrics() %>%
  # get the unique tuning parameter combinations:
  select(min_n, regularization.factor, regularization.usedepth, .config) %>%
  distinct()
```

```
## # A tibble: 10 x 4
##    min_n regularization.factor regularization.usedepth .config
##    <int>                 <dbl> <lgl>                   <chr>
##  1     7                0.288  TRUE                    Model01
##  2    20                0.322  FALSE                   Model02
##  3    25                0.703  FALSE                   Model03
##  4    14                0.0781 FALSE                   Model04
##  5    36                0.882  TRUE                    Model05
##  6    29                0.192  FALSE                   Model06
##  7    35                0.404  TRUE                    Model07
##  8    24                0.687  TRUE                    Model08
##  9     3                0.580  FALSE                   Model09
## 10    10                0.968  TRUE                    Model10
```

## Other changes

- `conf_mat_resampled()` is a new function that computes the average confusion matrix across resampling statistics for a single model.
- `show_best()` and the `select_*()` functions will now use the first metric in the metric set if no metric is supplied.
- `filter_parameters()` can trim the `.metrics` column of unwanted results (as well as the `.predictions` and `.extracts` columns) from `tune_*` objects.
- If a grid is given, parameters do not need to be finalized to be used in the `tune_*()` functions.
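A rough sketch of how these helpers fit together, using the objects created above (note that `conf_mat_resampled()` needs classification results, so it is shown on a hypothetical object rather than this regression example):

```r
# With no metric supplied, show_best() now defaults to the first metric
# in the metric set (here, RMSE)
show_best(regular_grid)

# Keep only the rows of .metrics (and .predictions/.extracts) matching a
# set of parameter values
regular_grid %>% filter_parameters(parameters = best_values)

# For a classification tuning object (say, `cls_res`), average the
# confusion matrix across resamples:
# conf_mat_resampled(cls_res)
```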

## Acknowledgements

Thanks to everyone who contributed code or filed issues since the last version: @cimentadaj, @connor-french, @cwchang-nelson, @DavisVaughan, @dcossyleon, @EmilHvitfeldt, @jg43b, @JHucker, @juliasilge, @karaesmen, @kbzsl, @kylegilde, @LucyMcGowan, @mdancho84, @py9mrg, @realauggieheschmeyer, @robyjos, @rorynolan, @simonpcouch, @ThomasWolf0701, and @UnclAlDeveloper.
