torch for optimization
Torch is not just for deep learning. Its L-BFGS optimizer, complete with strong Wolfe line search, is a powerful tool in unconstrained as well as constrained optimization.
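A minimal sketch of what this looks like, using the classic Rosenbrock function as a test problem (the objective and starting point here are illustrative choices, not prescribed by the post):

```r
library(torch)

# Rosenbrock function: minimum at (1, 1).
rosenbrock <- function(x) {
  (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
}

# Parameter tensor to optimize; torch indexing is 1-based, like R's.
x <- torch_tensor(c(-1, 1), requires_grad = TRUE)

optimizer <- optim_lbfgs(x, line_search_fn = "strong_wolfe")

# L-BFGS may re-evaluate the objective during line search, so step()
# takes a closure that computes the loss and its gradients.
calc_loss <- function() {
  optimizer$zero_grad()
  loss <- rosenbrock(x)
  loss$backward()
  loss
}

for (i in 1:3) {
  optimizer$step(calc_loss)
}

x  # should end up close to (1, 1)
```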
We conclude our mini-series on time-series forecasting with torch by augmenting last time’s sequence-to-sequence architecture with a technique both immensely popular in natural language processing and inspired by human (and...
In our overview of techniques for time-series forecasting, we move on to sequence-to-sequence models. Architectures in this family are commonly used in natural language processing (NLP) tasks, such as machine...
We continue our exploration of time-series forecasting with torch, moving on to architectures designed for multi-step prediction. Here, we augment the “workhorse RNN” with a multi-layer perceptron (MLP) to extrapolate...
This post is an introduction to time-series forecasting with torch. Central topics are data input and the practical usage of RNNs (GRUs/LSTMs). Upcoming posts will build on this and introduce increasingly...
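To give an idea of the data-input side, here is a minimal sketch of a torch `dataset()` serving sliding windows from a univariate series; the name `demand_dataset`, the toy series, and the window length are illustrative assumptions, not taken from the post:

```r
library(torch)

# A dataset() that yields (input window, next value) pairs from a series.
demand_dataset <- dataset(
  name = "demand_dataset",
  initialize = function(x, n_timesteps) {
    self$x <- torch_tensor(x)
    self$n_timesteps <- n_timesteps
  },
  .getitem = function(i) {
    list(
      # shape (n_timesteps, 1): RNNs expect a trailing feature dimension
      x = self$x[i:(i + self$n_timesteps - 1)]$unsqueeze(2),
      y = self$x[i + self$n_timesteps]
    )
  },
  .length = function() {
    length(self$x) - self$n_timesteps
  }
)

ds <- demand_dataset(sin(1:100), n_timesteps = 7)
dl <- dataloader(ds, batch_size = 32)
```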
Last month, we conducted our first survey on mlverse software, covering topics ranging from area of application through software usage to user wishes and suggestions. In addition, the survey asked...
Today we introduce tabnet, a torch implementation of “TabNet: Attentive Interpretable Tabular Learning” that is fully integrated with the tidymodels framework. Already by design, tabnet requires very...
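A minimal sketch of the tidymodels integration, assuming a hypothetical data frame `train_df` with outcome column `y` (the spec parameters are illustrative):

```r
library(tidymodels)
library(tabnet)

# Parsnip model specification backed by the torch engine.
spec <- tabnet(epochs = 10) %>%
  set_engine("torch") %>%
  set_mode("regression")

wf <- workflow() %>%
  add_model(spec) %>%
  add_formula(y ~ .)

# train_df is a hypothetical data frame with outcome column y.
fitted <- fit(wf, data = train_df)
```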
El Niño-Southern Oscillation (ENSO) is a coupled ocean-atmosphere phenomenon, located in the tropical Pacific, that greatly affects ecosystems as well as human well-being on a large portion of the globe. We...
In forecasting spatially-determined phenomena (the weather, say, or the next frame in a movie), we want to model temporal evolution, ideally using recurrence relations. At the same time, we’d like...
The need to segment images arises in various sciences and their applications, many of which are vital to human (and animal) life. In this introductory post, we train a U-Net...
How not to die from poisonous mushrooms. Also: How to use torch for deep learning on tabular data, including a mix of categorical and numerical features.
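As a minimal sketch of the mixed-features idea, categorical inputs can be routed through embedding layers and then concatenated with the numeric columns; the module name, cardinalities, embedding size, and layer shapes below are illustrative assumptions:

```r
library(torch)

tabular_net <- nn_module(
  initialize = function(cardinalities, num_numeric) {
    self$n_cat <- length(cardinalities)
    # one embedding layer per categorical feature
    self$embeddings <- nn_module_list(
      lapply(cardinalities, function(n) nn_embedding(n, embedding_dim = 4))
    )
    self$fc <- nn_linear(4 * self$n_cat + num_numeric, 1)
  },
  # x_cat: integer tensor of (1-based) category indices; x_num: numeric features
  forward = function(x_cat, x_num) {
    embedded <- lapply(
      seq_len(self$n_cat),
      function(i) self$embeddings[[i]](x_cat[, i])
    )
    combined <- torch_cat(c(embedded, list(x_num)), dim = 2)
    self$fc(combined)
  }
)
```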
We learn about transfer learning, input pipelines, and learning rate schedulers, all while using torch to tell apart species of beautiful birds.
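A minimal sketch of the transfer-learning part, assuming torchvision’s pretrained ResNet-18 and an illustrative ten-class problem:

```r
library(torch)
library(torchvision)

# Load a ResNet-18 pretrained on ImageNet.
model <- model_resnet18(pretrained = TRUE)

# Freeze the pretrained weights ...
for (p in model$parameters) p$requires_grad_(FALSE)

# ... and swap in a new classification head (10 classes is illustrative).
num_features <- model$fc$in_features
model$fc <- nn_linear(num_features, out_features = 10)
```

From there, the optimizer would train only the new head, optionally paired with one of torch’s learning rate schedulers such as `lr_one_cycle()`.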
Today, we wrap up our mini-series on torch basics, adding to our toolset two abstractions: loss functions and optimizers.
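A minimal sketch of the two abstractions in action, with model, data shapes, and learning rate chosen purely for illustration:

```r
library(torch)

model <- nn_linear(3, 1)
optimizer <- optim_adam(model$parameters, lr = 0.01)

# toy data
x <- torch_randn(100, 3)
y <- torch_randn(100, 1)

for (epoch in 1:10) {
  optimizer$zero_grad()               # reset gradients
  loss <- nnf_mse_loss(model(x), y)   # built-in loss function
  loss$backward()                     # compute gradients
  optimizer$step()                    # update the weights
}
```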
In this third installment of our mini-series introducing torch basics, we replace hand-coded matrix operations with modules, considerably simplifying our toy network’s code.
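For instance (shapes are illustrative), a single `nn_linear` stands in for a hand-coded weight matrix and bias:

```r
library(torch)

# A module encapsulates weights, bias, and the forward computation.
model <- nn_linear(in_features = 3, out_features = 1)

x <- torch_randn(10, 3)
model(x)           # forward pass

model$parameters   # weight and bias, tracked automatically for autograd
```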
With torch, there is hardly ever a reason to code backpropagation from scratch. Its automatic differentiation feature, called autograd, keeps track of operations that need their gradients computed, as well...
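A minimal sketch of autograd at work (the function and evaluation point are illustrative):

```r
library(torch)

# Operations on tensors created with requires_grad = TRUE are recorded.
x <- torch_tensor(2, requires_grad = TRUE)
y <- x^3 + 5 * x

# backward() propagates gradients through the recorded graph.
y$backward()

x$grad  # dy/dx = 3 * x^2 + 5, i.e. 17 at x = 2
```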
In this first installment of a four-part miniseries, we present the main things you will want to know about torch tensors. As an illustrative example, we’ll code a simple neural...
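A few of those basics, by way of a minimal, illustrative sketch:

```r
library(torch)

t1 <- torch_tensor(matrix(1:6, nrow = 2))  # from an R matrix (dtype: long)
t2 <- torch_randn(2, 3)                    # standard normal (dtype: float)

t1$shape                            # [2, 3]
t1$to(dtype = torch_float()) + t2   # elementwise addition after a dtype cast
t2$t()                              # transpose
```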
Today, we are excited to introduce torch, an R package that allows you to use PyTorch-like functionality natively from R. No Python installation is required: torch is built directly on...
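In a minimal sketch, getting started looks like this, assuming the `install_torch()` helper the package provides for fetching the libtorch binaries:

```r
# Install the R package from CRAN.
install.packages("torch")

library(torch)
# Download the required libtorch binaries (no Python involved).
install_torch()

torch_tensor(c(1, 2, 3))
```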
A few weeks ago, we showed how to forecast chaotic dynamical systems with deep learning, augmented by a custom constraint derived from domain-specific insight. Global weather is a chaotic system,...
In the last part of this mini-series on forecasting with false nearest neighbors (FNN) loss, we replace the LSTM autoencoder from the previous post with a convolutional VAE, resulting in...
In a recent post, we showed how an LSTM autoencoder, regularized by false nearest neighbors (FNN) loss, can be used to reconstruct the attractor of a nonlinear, chaotic dynamical system.