Estimation of linear model parameters using OLS



In this blog, we are going to cover how the parameters of a simple linear regression model are estimated using the OLS (Ordinary Least Squares) method. The idea is to help you understand what actually happens when we calculate the intercept and slope of the model.

Ordinary Least Squares is the most common method for estimating the parameters of a linear regression model, regardless of the distribution of the error term e. "Least squares" refers to minimizing the squared error, or SSE (Sum of Squared Errors). A lower error means the regression model has greater explanatory power. Moreover, least squares produces the best linear unbiased estimators of β0 and β1.
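
To make this concrete, the quantity being minimized is the sum of the squared differences between the observed values yi and the fitted values ŷi:

SSE = Σ (yi − ŷi)²

where the sum runs over all n sample points.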

We know that the population simple linear regression line is

y = β0 + β1x + e

where β0 and β1 are the population intercept and slope, and e is the random error term.

For a sample, the population regression line is estimated by

ŷ = b0 + b1x

where ŷ is the predicted value of the dependent variable y, and b0 and b1 are the sample estimates of β0 and β1.

Applying the least squares estimation technique, we get

b1 = Σ (xi − x̄)(yi − ȳ) / Σ (xi − x̄)²

b0 = ȳ − b1·x̄

where i = 1, 2, 3, ⋯, n indexes the n sample points, xi is the ith value of the independent variable x, yi is the ith value of the response variable y, and x̄ and ȳ are the sample means of x and y.
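
These formulas can be reproduced directly in a few lines of Python. Below is a minimal sketch using NumPy; the arrays x and y here are hypothetical sample data, used only for illustration:

import numpy as np

# Hypothetical sample data (for illustration only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar = x.mean()
y_bar = y.mean()

# Slope: b1 = sum((xi - x_bar)(yi - y_bar)) / sum((xi - x_bar)^2)
b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)

# Intercept: b0 = y_bar - b1 * x_bar
b0 = y_bar - b1 * x_bar

print("b1 (slope):", b1)
print("b0 (intercept):", b0)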
The difference between the observed value yi and the corresponding fitted value ŷi is called a residual, ei = yi − ŷi.
Residuals play an important role in assessing model adequacy: they capture the variability in the response y that is left unexplained by the independent variable x in the regression.
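
Continuing the sketch above, the fitted values, residuals, and SSE follow directly:

# Fitted values and residuals (continuing the sketch above)
y_hat = b0 + b1 * x
residuals = y - y_hat

print("residuals:", residuals)
print("SSE:", np.sum(residuals ** 2))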

There is a whole body of theory behind how these values are obtained, but as data scientists we should at least know the underlying formulae rather than blindly calling LinearRegression() from scikit-learn.
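
As a quick sanity check, the hand-computed estimates above should match what scikit-learn returns. A minimal sketch, again using the hypothetical data from the earlier example:

from sklearn.linear_model import LinearRegression
import numpy as np

# Same hypothetical sample data as before
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

model = LinearRegression()
model.fit(x.reshape(-1, 1), y)  # scikit-learn expects a 2-D feature matrix

print("slope:", model.coef_[0])        # should match b1 from the manual calculation
print("intercept:", model.intercept_)  # should match b0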
