Jorge Valle

Jorge Valle is a front end developer with a particular passion for, and expertise in, JavaScript and user interfaces. Lately, he's also been diving into machine learning.

Why do we square the residuals in linear regression models?

I have been deepening my understanding of data fitting, and in particular of linear regression models. While there are other approaches that don't square the residuals when trying to minimize them, most of the literature uses the least-squares criterion for finding the best-fit line. So one question kept popping into my head while studying: why is it important to square the residuals?
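
To make the criterion concrete, here is a minimal sketch in TypeScript that fits a line to a small, made-up dataset using the closed-form least-squares estimates, then reports the sum of squared residuals, which is the quantity the criterion minimizes. The data points and function names are purely illustrative.

```typescript
// A minimal sketch of ordinary least squares for a single predictor.
// The data points here are made up purely for illustration.

type Point = { x: number; y: number };

const data: Point[] = [
  { x: 1, y: 2.1 },
  { x: 2, y: 3.9 },
  { x: 3, y: 6.2 },
  { x: 4, y: 7.8 },
];

// Closed-form least-squares estimates:
//   slope     = sum((x - meanX) * (y - meanY)) / sum((x - meanX)^2)
//   intercept = meanY - slope * meanX
function fitLeastSquares(points: Point[]): { slope: number; intercept: number } {
  const n = points.length;
  const meanX = points.reduce((s, p) => s + p.x, 0) / n;
  const meanY = points.reduce((s, p) => s + p.y, 0) / n;

  let num = 0;
  let den = 0;
  for (const p of points) {
    num += (p.x - meanX) * (p.y - meanY);
    den += (p.x - meanX) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: meanY - slope * meanX };
}

// The quantity least squares minimizes: the sum of squared residuals.
function sumSquaredResiduals(points: Point[], slope: number, intercept: number): number {
  return points.reduce((s, p) => {
    const residual = p.y - (slope * p.x + intercept);
    return s + residual ** 2;
  }, 0);
}

const { slope, intercept } = fitLeastSquares(data);
console.log(`best-fit line: y = ${slope.toFixed(3)}x + ${intercept.toFixed(3)}`);
console.log(`sum of squared residuals: ${sumSquaredResiduals(data, slope, intercept).toFixed(3)}`);
```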

So far, I’ve understood two separate reasons.

  1. We want to penalize the values that deviate most from our best-fit line. Smaller residuals are seen as more favorable, as they are closer to the best-fit line. Larger residuals should be penalized disproportionately, as opposed to linearly.
  2. We want to make all the residual values positive. Squaring the residuals guarantees that positive and negative residuals are treated the same way, and it prevents positive and negative residuals from offsetting each other. Taking the absolute value would also accomplish this, but it only satisfies this second goal: absolute values penalize large deviations linearly rather than disproportionately. The sketch after this list illustrates the difference.
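
Here is a small sketch, with made-up residual values, comparing the sum of absolute residuals to the sum of squared residuals. Both make every term positive, but only squaring lets the largest residual dominate the total.

```typescript
// A small sketch comparing the two ways of making residuals positive.
// The residual values are made up for illustration.
const residuals = [0.5, -0.5, 1, -2, 4];

// Absolute value: every residual counts in proportion to its size.
const sumAbsolute = residuals.reduce((s, r) => s + Math.abs(r), 0);

// Squaring: the residual of 4 contributes 16, dominating the total,
// so points far from the line are penalized disproportionately.
const sumSquared = residuals.reduce((s, r) => s + r ** 2, 0);

console.log(`sum of absolute residuals: ${sumAbsolute}`); // 8
console.log(`sum of squared residuals:  ${sumSquared}`);  // 21.5
```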
