Linear Regression Model

Linear regression is a machine learning technique used to model the relationship between a scalar response and one or more explanatory variables. The scalar response is called the target or dependent variable, while the explanatory variables are known as predictors or independent variables. When more than one independent variable is used, the technique is called multiple linear regression.

Independent variables are known as explanatory variables because they explain which factors influence the dependent variable and by how much. The degree of that influence is quantified by the model’s ‘parameter estimates’, or ‘coefficients’.
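To make that concrete, here is a minimal sketch of fitting a multiple linear regression by ordinary least squares in Python. The data, coefficient values, and NumPy-based approach are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Hypothetical data: two explanatory variables and one scalar response.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))             # independent variables
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1]   # dependent variable
y += rng.normal(scale=0.1, size=100)      # small observation noise

# Ordinary least squares: prepend an intercept column, then solve for
# the parameter estimates (coefficients) via least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# First entry is the intercept; the rest are one coefficient per variable.
print(coef)  # approximately [3.0, 1.5, -2.0]
```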

Machine Learning Algorithms: Mathematics Behind Linear Regression

There are several machine learning algorithms that can provide the desired outputs by processing the input data. One of the widely used algorithms is linear regression.

Linear regression is a type of supervised learning algorithm whose output lies in a continuous range rather than in discrete categories. With a linear regression model, we can predict values along a line with a constant slope.
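As a small illustration of that point, the sketch below fits a single line to made-up data and uses its constant slope to predict a continuous value for an unseen input; the numbers are assumptions for demonstration only.

```python
import numpy as np

# Toy data for a single feature.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.9])

# Fit y ~ slope * x + intercept; the slope is the same everywhere on the line.
slope, intercept = np.polyfit(x, y, deg=1)

# Predict a continuous value for an unseen input.
x_new = 6.0
print(slope * x_new + intercept)
```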

Bayesian Learning for Machine Learning: Linear Regression (Part 2)

Part 1 of this article series provides an introduction to Bayesian learning. With that understanding, we will continue the journey to represent machine learning models as probabilistic models. Once we have represented our classical machine learning models as probabilistic models with random variables, we can use Bayesian learning to infer the unknown model parameters. This process of learning the unknown parameters of a model is known as Bayesian inference.

In this article, we will first briefly discuss the importance of Bayesian learning for machine learning. Then, we will move on to interpreting machine learning models as probabilistic models. I will use the simple linear regression model to elaborate on how such a representation is derived to perform Bayesian learning as a machine learning technique.
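Ahead of that discussion, here is a minimal sketch of the idea for the simplest possible case: a one-parameter model y = θx + ε with known noise variance and a Gaussian prior on θ, where the conjugate posterior is available in closed form. The data, prior, and noise values are illustrative assumptions, not the article's setup.

```python
import numpy as np

# Hypothetical data drawn from y = 2x + noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 2.0 * x + rng.normal(0.0, 0.3, size=50)

sigma2 = 0.3 ** 2   # assumed known noise variance
tau2 = 1.0          # prior variance: theta ~ N(0, tau2)

# Conjugate Gaussian posterior over the slope (no intercept, for brevity):
# posterior precision = prior precision + sum(x_i^2) / sigma2
post_var = 1.0 / (1.0 / tau2 + np.sum(x ** 2) / sigma2)
post_mean = post_var * np.sum(x * y) / sigma2

print(f"posterior over slope: N({post_mean:.3f}, {post_var:.5f})")
```

The posterior mean plays the role of the point estimate a classical fit would return, while the posterior variance quantifies how uncertain we remain about it.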

Learn TensorFlow: Linear Regression

Introduction to Linear Regression

An important supervised learning algorithm is linear regression. In this article, I am going to reuse the following notation, adapted from [1] (in the References section).

  • x^(i) denotes the “input” variables, also called input features
  • y^(i) denotes the “output” or target variable that we are trying to predict
  • A pair (x^(i), y^(i)) is called a training example
  • A list of m training examples {(x^(i), y^(i)); i = 1,…,m} is called a training set
  • The superscript “(i)” in the notation is an index into the training set
  • X denotes the space of input values and Y denotes the space of output values. In this article, I am going to assume that X = Y = R
  • A function h: X -> Y, where h(x) is a good predictor for the corresponding value of y, is called a hypothesis or a model

When the target variable that we are trying to predict is continuous, we call the learning problem a regression problem. When y takes on only a small number of discrete values, we call it a classification problem.
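Since this section introduces TensorFlow, here is a minimal sketch of a hypothesis h(x) = θ0 + θ1·x trained by gradient descent with TensorFlow 2. The training data, learning rate, and step count are assumptions chosen for illustration.

```python
import tensorflow as tf

# Made-up training set: m = 4 training examples with X = Y = R.
x_train = tf.constant([1.0, 2.0, 3.0, 4.0])
y_train = tf.constant([2.0, 4.1, 6.0, 8.1])

theta0 = tf.Variable(0.0)  # intercept
theta1 = tf.Variable(0.0)  # slope

def h(x):
    """Hypothesis h: X -> Y, mapping an input to a predicted target."""
    return theta0 + theta1 * x

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
for _ in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(h(x_train) - y_train))
    grads = tape.gradient(loss, [theta0, theta1])
    optimizer.apply_gradients(zip(grads, [theta0, theta1]))

print(h(tf.constant(5.0)))  # predicted y for an unseen x
```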

Finding More Hidden Gems in Holt-Winters

Welcome back to this three-part blog post series on Holt-Winters and why it's still highly relevant today. To understand Part Two, I suggest reading Part One, in which we covered:

  1. When to use Holt-Winters.
  2. How Single Exponential Smoothing works.
  3. A conceptual overview of optimization for Single Exponential Smoothing.
  4. Extra: The proof for optimization of the Residual Sum of Squares (RSS) for Linear Regression (the resulting closed forms are recapped just below).
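For reference, the standard result that the RSS optimization arrives at for simple linear regression y ≈ β0 + β1·x, written in conventional notation rather than quoted from Part One, is:

```latex
\mathrm{RSS}(\beta_0, \beta_1) = \sum_{i=1}^{n} \left(y_i - \beta_0 - \beta_1 x_i\right)^2,
\qquad
\frac{\partial\,\mathrm{RSS}}{\partial \beta_0} =
\frac{\partial\,\mathrm{RSS}}{\partial \beta_1} = 0
\;\Longrightarrow\;
\hat{\beta}_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2},
\quad
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}.
```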

In this piece, Part Two, we'll explore: