Time Series for Actuaries

INTRODUCTION TO TIME SERIES

A time series is a collection of data points recorded in time order; cross-sectional data, by contrast, represents a single point in time. A time-series graph shows the evolution of a set of data points over time, such as the price of an asset, with observations recorded at regular intervals. The data need not be gathered over any particular span of time, so it can be collected in whatever fashion delivers the information that the investor or analyst requires. Time-series data may be used to forecast future values from past data.

Aims of Time Series Analysis

  • Describe the data
  • Fit a model
  • Forecast future values
  • Look for connections with other time series
  • Decide whether the process is out of control

Time Series Analysis
Time-series analysis is a technique for examining a set of data points over time. Rather than recording data points at random, analysts record them at regular intervals over a defined period. This type of study entails more than just collecting data over time: unlike other forms of data, time-series data can show how a variable evolves. Time-series analysis frequently needs a large number of data points to maintain consistency and reliability. It is commonly carried out in the R programming language.

R Programming
R is a programming language and software environment for statistical analysis, visualisation, and reporting. R is maintained by the R Development Core Team and was created by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand. The language is named after the first letter of its two authors' first names. R is a well-developed, simple, and powerful language that supports conditionals, loops, user-defined recursive functions, and input and output, and its functions handle arrays, lists, vectors, and matrices. It offers a well-rounded, well-coordinated, and well-integrated set of data-analysis capabilities.

Time-Series Analysis In R

The R programming language offers a number of functions for manipulating, generating, and visualizing series data. The data are stored in an R object called a time-series object, which is created by the ts() function. The basic form of this function is ts(data, start, end, frequency).

fig1: Time series

Types of Time Series Analysis

  • Classification: Identifies categories in the data and assigns observations to them.
  • Curve fitting: Fits the data to a curve to investigate the relationships between variables.
  • Descriptive analysis: Detects trends, cycles, and seasonal variation in time-series data.
  • Explanative analysis: Attempts to understand the data and its relationships, including cause and effect.
  • Exploratory analysis: Highlights the key features of the time-series data, generally in graphical form.

STATIONARITY

A stationary stochastic process is one whose unconditional joint probability distribution does not change over time. Many statistical methods used in time-series analysis rest on the assumption of stationarity, and non-stationary data are frequently transformed to make them stationary. The most common source of violation of stationarity is a trend in the mean, which may be due to the presence of a unit root or to a deterministic trend.

In general, there are two categories of stationarity: 
1. Strictly stationary
2. Weakly stationary

STRICT STATIONARITY

A strictly stationary series satisfies the mathematical definition of a stationary process: its mean, variance, and covariances are not time-dependent. The objective is to transform a non-stationary series into a stationary one in order to make predictions.

WEAK STATIONARITY

In this sort of series, the mean is constant over time, and the covariance between two observations depends only on the time difference (the lag) between them.

fig2: Stationary
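
Weak stationarity can be checked informally with a split-sample comparison. The sketch below (plain Python; the function name and tolerance are arbitrary choices, not part of any library) compares the mean and variance of the two halves of a series: a constant series passes, while a trending one fails.

```python
from statistics import mean, pvariance

def looks_stationary(x, tol=1.0):
    """Crude check: compare the mean and variance of the two halves of x."""
    half = len(x) // 2
    a, b = x[:half], x[half:]
    return (abs(mean(a) - mean(b)) < tol
            and abs(pvariance(a) - pvariance(b)) < tol)

flat = [5.0] * 100                      # constant series: stationary
trend = [0.1 * t for t in range(100)]   # deterministic trend: non-stationary
print(looks_stationary(flat))   # True
print(looks_stationary(trend))  # False: the half-means differ by 5
```

A real test would use a formal procedure such as an augmented Dickey–Fuller test, but the idea is the same: properties estimated over different windows should agree.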

IMPORTANCE OF STATIONARITY

  • If a time series is non-stationary, we can only examine its behaviour for the time period under consideration.
  • As a consequence, each non-stationary data set corresponds to a particular episode.
  • The results are therefore hard to generalize to other time periods, so in forecasting, non-stationary time series are of limited use.

NON-STATIONARITY

By definition, non-stationary data are unpredictable and cannot be modelled or forecast directly. Results obtained from non-stationary time series may be spurious, indicating a relationship between two variables where none exists. Non-stationary data must be transformed to stationary data in order to obtain consistent, reliable results. A non-stationary process has a variance that changes over time and a mean that does not remain constant or return to a long-run level.

fig3: Non-stationary

MARKOV PROPERTY

The Markov property refers to the memorylessness of a stochastic process, named after the Russian mathematician Andrey Markov. The strong Markov property is similar, except that "the present" is replaced by a random variable known as a stopping time. A hidden Markov model, for example, is a model in which the Markov property is assumed to hold. To forecast the future, all we need is the current value.
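
Memorylessness can be made concrete with a toy two-state weather chain (an illustrative Python sketch; the states and transition probabilities are invented for the example). Tomorrow's state is sampled using only today's state, never the earlier history:

```python
import random

# P(next state | current state); each row sums to 1.
TRANSITIONS = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.2, "sun": 0.8},
}

def next_state(current, rng=random):
    """Sample tomorrow's weather from today's state only (Markov property)."""
    probs = TRANSITIONS[current]
    return rng.choices(list(probs), weights=list(probs.values()))[0]

random.seed(1)
state = "sun"
path = [state]
for _ in range(5):
    state = next_state(state)
    path.append(state)
print(path)  # a 6-day weather path, e.g. starting with "sun"
```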

Autocovariance

Definition: The autocovariance function measures the covariance of the process with itself at two points in time. Autocorrelation and autocovariance are closely linked.

Formula: Covariance measures how two variables vary together. The formula below may be used to compute the covariance between x and y, or between x₁ and x₂.

fig4: formula of Autocovariance
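
The sample version of this formula can be written out directly. A minimal Python sketch, using the conventional biased 1/n estimator of the sample autocovariance:

```python
from statistics import mean

def autocovariance(x, k):
    """Sample autocovariance at lag k: the average product of deviations
    between observations k steps apart (biased 1/n estimator)."""
    n = len(x)
    m = mean(x)
    return sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n

x = [1.0, 2.0, 3.0, 4.0]
print(autocovariance(x, 0))  # 1.25  (lag 0 is the sample variance)
print(autocovariance(x, 1))  # 0.3125
```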

Autocorrelation

Definition: Autocorrelation is the mathematical representation of the degree of similarity between a time series and a lagged version of itself over successive time intervals. It is analogous to the correlation between two different time series, except that it uses the same series twice: once in its original form and once lagged by one or more periods.

Formula:

fig5: formula of Autocorrelation
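
In sample terms, the autocorrelation at lag k is the lag-k autocovariance divided by the lag-0 autocovariance (the variance). A minimal Python sketch:

```python
from statistics import mean

def autocorrelation(x, k):
    """Sample autocorrelation at lag k: the lag-k autocovariance
    normalised by the lag-0 autocovariance (the variance)."""
    n = len(x)
    m = mean(x)
    def gamma(j):
        return sum((x[t] - m) * (x[t + j] - m) for t in range(n - j)) / n
    return gamma(k) / gamma(0)

x = [1.0, 2.0, 3.0, 4.0]
print(autocorrelation(x, 0))  # 1.0   (a series is perfectly correlated with itself)
print(autocorrelation(x, 1))  # 0.25
```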

Example

For example, if it is raining today, the data may show that rain is more likely tomorrow than it would be after a sunny day. In investing, the autocorrelation of a stock's returns may be significant, implying that if the stock is "up" today, it is more likely to be up tomorrow as well. In practice, autocorrelation can be a valuable tool for traders, especially technical analysts.

White noise

In signal processing, white noise is a random signal with equal intensity at every frequency, giving it a constant power spectral density. The term is used with this or a similar meaning in many scientific and technical fields, including physics, acoustical engineering, telecommunications, and statistical forecasting. Rather than referring to a single signal, white noise describes a statistical model for signals and signal sources. White noise is named after white light, although light that appears white does not necessarily have a flat power spectral density over the visible range.

In discrete time, white noise is a discrete signal whose samples form a sequence of serially uncorrelated random variables with zero mean and finite variance; a single realisation of white noise is a sequence of random shocks.

fig6: White noise
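
These defining properties are easy to verify numerically. The sketch below (Python; the seed and sample size are arbitrary) draws Gaussian white noise and checks that the sample mean and the lag-1 autocorrelation are both close to zero:

```python
import random
from statistics import mean

random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]

# Defining properties: zero mean, constant variance, no serial correlation.
m = mean(noise)
lag0 = sum((v - m) ** 2 for v in noise) / 2000
lag1 = sum((noise[t] - m) * (noise[t + 1] - m) for t in range(1999)) / 2000
print(round(m, 3), round(lag1 / lag0, 3))  # both values are close to zero
```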

Types of time series models

  1. Moving average
  2. Exponential smoothing
  3. ARIMA

ARMA

This model merges the AR and MA models. To forecast future values of the time series, it accounts for the influence of previous lags as well as past residuals. The coefficients of the AR terms are represented by β, while the coefficients of the MA terms are represented by α.

Formula: Yₜ = β₁yₜ₋₁ + α₁εₜ₋₁ + β₂yₜ₋₂ + α₂εₜ₋₂ + β₃yₜ₋₃ + α₃εₜ₋₃ + … + βₖyₜ₋ₖ + αₖεₜ₋ₖ + εₜ
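
The recursion above can be simulated directly. The Python sketch below generates an ARMA(1,1) path; the coefficients β₁ = 0.5 and α₁ = 0.4 are arbitrary illustrative choices:

```python
import random

def simulate_arma11(n, beta1=0.5, alpha1=0.4, seed=0):
    """Simulate y_t = beta1 * y_{t-1} + eps_t + alpha1 * eps_{t-1},
    where eps_t is Gaussian white noise."""
    rng = random.Random(seed)
    y, y_prev, eps_prev = [], 0.0, 0.0
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)
        y_t = beta1 * y_prev + eps + alpha1 * eps_prev
        y.append(y_t)
        y_prev, eps_prev = y_t, eps
    return y

series = simulate_arma11(200)
print(len(series))  # 200
```

Because the generator is seeded, the same call always reproduces the same path, which is convenient when experimenting with estimators.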

ARIMA

The ARIMA model is quite similar to the ARMA model, except that it includes one more element, Integrated (I), which stands for differencing. To forecast future values, the ARIMA model combines the number of differences applied to make the series stationary, the number of previous lags, and the residual errors.

fig7: ARIMA
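
Differencing, the "I" step, replaces the series with the changes between consecutive observations; applying it d times removes a polynomial trend of order d. A minimal Python sketch:

```python
def difference(x, d=1):
    """Apply first differences d times: each pass maps x_t to x_t - x_{t-1}."""
    for _ in range(d):
        x = [x[t] - x[t - 1] for t in range(1, len(x))]
    return x

quadratic = [t * t for t in range(6)]    # 0, 1, 4, 9, 16, 25
print(difference(quadratic, 1))  # [1, 3, 5, 7, 9]
print(difference(quadratic, 2))  # [2, 2, 2, 2]  (quadratic trend removed)
```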

ACF

The autocorrelation function (ACF) considers all previous observations, regardless of their impact on the current or future time period. It measures the correlation between times t and t−k, including the effect of any intervening lags between them. The Pearson correlation formula is usually used to compute it.

fig8: ACF

PACF

The PACF measures the partial correlation between times t and t−k; unlike the ACF, it does not account for the intermediate lags between them. Suppose today's stock price is directly influenced by the price three days ago, without taking yesterday's close into account. The PACF keeps only the lags that have a direct influence on the future period, ignoring the intermediate lags between t and t−k.

fig9: PACF
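
One standard route from the autocorrelations to the PACF is the Durbin–Levinson recursion, in which the lag-k partial autocorrelation is the final coefficient of the best-fitting order-k autoregression. A Python sketch (assuming the autocorrelations ρ₀…ρ_K are already known):

```python
def pacf_from_acf(rho):
    """Durbin-Levinson recursion.  rho[j] is the autocorrelation at lag j
    (with rho[0] == 1); returns the partial autocorrelations at lags 1..K."""
    K = len(rho) - 1
    pacf, phi_prev = [], []
    for k in range(1, K + 1):
        if k == 1:
            phi_kk = rho[1]
        else:
            num = rho[k] - sum(phi_prev[j] * rho[k - 1 - j] for j in range(k - 1))
            den = 1.0 - sum(phi_prev[j] * rho[j + 1] for j in range(k - 1))
            phi_kk = num / den
        # Update the order-k AR coefficients from the order-(k-1) ones.
        phi = [phi_prev[j] - phi_kk * phi_prev[k - 2 - j] for j in range(k - 1)]
        phi.append(phi_kk)
        phi_prev = phi
        pacf.append(phi_kk)
    return pacf

# For an AR(1) process with coefficient 0.5, rho_k = 0.5**k, so the PACF
# should be 0.5 at lag 1 and zero at every higher lag.
print(pacf_from_acf([0.5 ** k for k in range(4)]))  # [0.5, 0.0, 0.0]
```

This cutoff after lag p is exactly why the PACF is used to choose the AR order of a model.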

GARCH

GARCH stands for Generalized Autoregressive Conditional Heteroskedasticity. It was introduced in 1986 by Tim Bollerslev, then a PhD student. The most basic GARCH model is the ARCH(1) model, which shares many characteristics with the AR(1) model, and ARCH(p) models generalize it in the same way as AR(p) models. Finally, generalized ARCH models describe conditional variances in the same manner that ARMA models describe conditional expectations.

Uses: In finance and economics, GARCH models are used to analyse time-series data in a variety of ways. They are particularly useful when there are periods of rapid change (volatility clustering). For example, they can model the volatility of financial asset prices such as bonds, market indexes, and stocks.

Basic Procedures

Three basic phases make up a GARCH model:

  1. Fit the best-fitting autoregressive model.
  2. Compute the autocorrelations of the error term.
  3. Test for statistical significance.
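
The conditional-variance recursion of the simplest GARCH(1,1) model can be sketched directly in Python (the parameter values ω = 0.1, α = 0.1, β = 0.8 are illustrative; stability requires α + β < 1):

```python
import random

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """GARCH(1,1): sigma2_t = omega + alpha * eps_{t-1}**2 + beta * sigma2_{t-1},
    with eps_t = sigma_t * z_t and z_t standard Gaussian."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)   # start at the long-run variance
    eps_prev = 0.0
    returns, variances = [], []
    for _ in range(n):
        sigma2 = omega + alpha * eps_prev ** 2 + beta * sigma2
        eps = (sigma2 ** 0.5) * rng.gauss(0.0, 1.0)
        returns.append(eps)
        variances.append(sigma2)
        eps_prev = eps
    return returns, variances

r, v = simulate_garch11(500)
print(all(s > 0 for s in v))  # True: conditional variances stay positive
```

A large shock eps raises the next period's variance, which is how the model reproduces the volatility clustering seen in financial returns.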

Conclusion

We learned what a time series is and how to create one in R. We also covered several time-series models, such as moving average, exponential smoothing, and ARIMA. We learned what stationary and non-stationary mean, reviewed the Markov property, ACF, and PACF in brief, and finally saw what the GARCH model means.

