
03/11/2021

Time Series as Supervised Learning

Time series forecasting can be framed as a supervised learning problem. Re-framing your time series data in this way gives you access to the suite of standard linear and nonlinear machine learning algorithms. In this lesson, you will discover how you can re-frame your time series problem as a supervised learning problem for machine learning.
After reading this lesson, you will know:
  • What supervised learning is and how it is the foundation for all predictive modeling machine learning algorithms.
  • The sliding window method for framing a time series dataset and how to use it.
  • How to use the sliding window for multivariate data and multi-step forecasting.
A. Supervised Machine Learning

Supervised learning is where you have input variables (X) and an output variable (y) and you use an algorithm to learn the mapping function from the input to the output:
    y = f(X)

The goal is to approximate the real underlying mapping so well that when you have new input data (X), you can predict the output variables (y) for that data. 

Supervised learning problems can be further grouped into regression and classification problems.
  • Classification: A classification problem is when the output variable is a category, such as red and blue or disease and no disease.
  • Regression: A regression problem is when the output variable is a real value, such as dollars or weight. The sliding window examples below are regression problems.
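
To make the mapping concrete, here is a minimal sketch in Python. Note that scikit-learn is just one convenient library choice, not something the lesson prescribes: a regression model learns an approximation of f from example (X, y) pairs and then predicts y for a new input.

# A minimal sketch of supervised learning with scikit-learn (one
# convenient library choice; the lesson does not prescribe a tool).
from sklearn.linear_model import LinearRegression

# Contrived (X, y) pairs where y is roughly 2 * x.
X = [[1], [2], [3], [4]]
y = [2.1, 3.9, 6.2, 7.8]

# Learn an approximation of the mapping y = f(X).
model = LinearRegression()
model.fit(X, y)

# Predict the output for new, unseen input data.
print(model.predict([[5]]))  # approximately 10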

B. Sliding Window

Imagine we have a time series as follows:

time, measure
1, 100
2, 110
3, 108
4, 115
5, 120

We can restructure this time series dataset as a supervised learning problem by using the value at the previous time step to predict the value at the next time step. Re-organizing the time series dataset this way, the data would look as follows:

X, y
?, 100
100, 110
110, 108
108, 115
115, 120
120, ?

Take a look at the above transformed dataset and compare it to the original time series.
Here are some observations:
  • We can see that the previous time step is the input (X) and the next time step is the output (y) in our supervised learning problem.
  • We can see that the order between the observations is preserved, and must continue to be preserved when using this dataset to train a supervised model.
  • We can see that we have no previous value that we can use to predict the first value in the sequence. We will delete this row as we cannot use it.
  • We can also see that we do not have a known next value to predict for the last value in the sequence. We may want to delete this row as well when training our supervised model.
The use of prior time steps to predict the next time step is called the sliding window method.

In statistics and time series analysis, this is called the lag method.
The number of previous time steps used is called the window width or the size of the lag.

This sliding window is the basis for how we can turn any time series dataset into a supervised learning problem. 
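
As a concrete illustration, the transform above can be reproduced with a pandas shift. The snippet below is a minimal sketch; the column names X and y simply mirror the table.

import pandas as pd

series = pd.Series([100, 110, 108, 115, 120])

framed = pd.DataFrame({
    "X": series.shift(1),  # the value at the previous time step (t-1)
    "y": series,           # the value to predict at the current step (t)
})

# Drop the first row, which has no previous value to use as input. The
# final "120, ?" row of the table only arises when forecasting beyond
# the end of the observed series.
print(framed.dropna())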

From this simple example, we can notice a few things:
  • We can see how this can work to turn a time series into either a regression or a classification supervised learning problem for real-valued or labeled time series values.
  • We can see how once a time series dataset is prepared this way that any of the standard linear and nonlinear machine learning algorithms may be applied, as long as the order of the rows is preserved.
  • We can see how the width of the sliding window can be increased to include more previous time steps (see the sketch after this list).
  • We can see how the sliding window approach can be used on a time series that has more than one value, or so-called multivariate time series.
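
For instance, a wider window can be built by shifting the series once per lag. The helper below is an illustrative sketch (the function name and signature are our own, not part of the lesson):

import pandas as pd

def series_to_supervised(values, n_in=1):
    # Build n_in lagged input columns (X1 = oldest lag) and one target.
    series = pd.Series(values)
    cols = {f"X{i}": series.shift(n_in - i + 1) for i in range(1, n_in + 1)}
    cols["y"] = series
    return pd.DataFrame(cols).dropna()

# A window width of 2: the two previous observations are the inputs.
print(series_to_supervised([100, 110, 108, 115, 120], n_in=2))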

C. Sliding Window With Multivariate Data

Univariate Time Series: These are datasets where only a single variable is observed at each time, such as temperature each hour. The example in the previous section is a univariate time series dataset.

Multivariate Time Series: These are datasets where two or more variables are observed at each time.

Most time series analysis methods, and even books on the topic, focus on univariate data because it is the simplest to understand and work with. Multivariate data is often more difficult to work with: it is harder to model, and many of the classical methods do not perform well on it.

Here is an example of a small contrived multivariate time series dataset:

time, measure1, measure2
1, 0.2, 88
2, 0.5, 89
3, 0.7, 87
4, 0.4, 88
5, 1.0, 90

We can re-frame this time series dataset as a supervised learning problem with a window width of one.

X1, X2, X3, y
?, ?, 0.2, 88
0.2, 88, 0.5, 89
0.5, 89, 0.7, 87
0.7, 87, 0.4, 88
0.4, 88, 1.0, 90
1.0, 90, ?, ?
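
This framing can also be built with a pandas shift. The sketch below reconstructs the table above, using both measures at the previous step plus measure1 at the current step as inputs to predict measure2 (column names simply mirror the table):

import pandas as pd

data = pd.DataFrame({"measure1": [0.2, 0.5, 0.7, 0.4, 1.0],
                     "measure2": [88, 89, 87, 88, 90]})

framed = pd.DataFrame({
    "X1": data["measure1"].shift(1),  # measure1 at t-1
    "X2": data["measure2"].shift(1),  # measure2 at t-1
    "X3": data["measure1"],           # measure1 at t
    "y":  data["measure2"],           # measure2 at t, the target
}).dropna()
print(framed)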

Using the same time series dataset above, we can phrase it as a supervised learning problem where we predict both measure1 and measure2 with the same window width of one, as follows.

X1, X2, y1, y2
?, ?, 0.2, 88
0.2, 88, 0.5, 89
0.5, 89, 0.7, 87
0.7, 87, 0.4, 88
0.4, 88, 1.0, 90
1.0, 90, ?, ?

Not many supervised learning methods can handle the prediction of multiple output values without modification, but some methods, like artificial neural networks, have little trouble.
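
As a sketch of this multi-output framing, the snippet below predicts both measures at time t from both measures at t-1. LinearRegression is used only because it happens to handle multiple outputs without modification; the choice of model is ours, not the lesson's.

import pandas as pd
from sklearn.linear_model import LinearRegression

data = pd.DataFrame({"measure1": [0.2, 0.5, 0.7, 0.4, 1.0],
                     "measure2": [88, 89, 87, 88, 90]})

framed = pd.concat([data.shift(1).add_prefix("lag_"), data], axis=1).dropna()
X = framed[["lag_measure1", "lag_measure2"]].values  # X1, X2 above
y = framed[["measure1", "measure2"]].values          # y1, y2 above

# LinearRegression supports multiple outputs out of the box; it stands
# in here for the neural networks mentioned in the text.
model = LinearRegression().fit(X, y)
print(model.predict([[1.0, 90]]))  # forecast both measures for time 6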


D. Sliding Window With Multiple Steps

The number of time steps ahead to be forecast is important.
One-step Forecast: This is where the next time step (t+1) is predicted.
Multi-step Forecast: This is where two or more future time steps are to be predicted.

There are a number of ways to model multi-step forecasting as a supervised learning problem. 
For now, we are focusing on framing a multi-step forecast using the sliding window method. Consider the same univariate time series dataset from the first sliding window example above:

time, measure
1, 100
2, 110
3, 108
4, 115
5, 120


We can frame this time series as a two-step forecasting dataset for supervised learning with a window width of one, as follows:

X1, y1, y2
?, 100, 110
100, 110, 108
110, 108, 115
108, 115, 120
115, 120, ?
120, ?, ?

We can see that the first row and the last two rows cannot be used to train a supervised model. This example also highlights the burden placed on the input variables: the supervised model has only X1 to work with in order to predict both y1 and y2.
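
A minimal sketch of this two-step framing with pandas follows: shifting the series forwards gives the lagged input, shifting it backwards gives the second output, and dropping incomplete rows leaves exactly the three usable training rows noted above.

import pandas as pd

series = pd.Series([100, 110, 108, 115, 120])

framed = pd.DataFrame({
    "X1": series.shift(1),   # the observation at t-1
    "y1": series,            # the first step to predict (t)
    "y2": series.shift(-1),  # the second step to predict (t+1)
}).dropna()
print(framed)  # only three complete rows remain for training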

