[Header image: how to calculate a sample standard deviation, via ThoughtCo — https://www.thoughtco.com/thmb/zgzKrHeG5kg33_sl5nL12AR7qDc=/768x0/filters:no_upscale():max_bytes(150000):strip_icc()/calculate-a-sample-standard-deviation-3126345-v4-CS-01-5b76f58f46e0fb0050bb4ab2.png]


An online course I've been taking recently brushed up on some basic topics in Statistics, and one subject that caught my interest was the formula for standard deviation. The reason: the way the concept was explained to us in high school was not quite correct compared to how it was taught in college. Anyone who is Maths-savvy knows that standard deviation is simply the square root of the variance (the sum of squared deviations from the mean, divided by the degrees of freedom). Intuitively, it is a figure that characterizes how far the samples in a data set stray from the mean. So what could have gone wrong in explaining such a simple, basic concept to high-schoolers versus engineering undergraduates?
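For reference, the two versions of the formula differ only in the denominator (standard notation; $\mu$ is the population mean and $\bar{x}$ the sample mean):

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i-\mu)^2}, \qquad s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i-\bar{x})^2}.$$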

I still remember our high school Statistics teacher on that fateful afternoon (around 3:30 or 4:00 p.m.) describing standard deviation to us. After writing the formula on the chalkboard and elaborating on its use, a student asked why the denominator was 'N - 1' for the sample standard deviation. There was a slight pause, I think, before the teacher reasoned that the total sample size 'N' counted the '0th' element of the sample, which supposedly justified subtracting one. I believed this dubious explanation for some time, until I got to college.

The straightforward way of calculating the sample and population SD differs only in that denominator.
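Here's a minimal sketch in Python (the data set is made up) showing both computations side by side:

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical data set

mean = sum(data) / len(data)
sum_sq_dev = sum((x - mean) ** 2 for x in data)

# Treating the data as the entire population: divide by N
population_sd = math.sqrt(sum_sq_dev / len(data))

# Treating the data as a sample: divide by N - 1
sample_sd = math.sqrt(sum_sq_dev / (len(data) - 1))

print(population_sd)  # 2.0
print(sample_sd)      # ~2.14
```

Python's standard library draws the same distinction (statistics.pstdev vs. statistics.stdev), and NumPy exposes it through the ddof argument of np.std.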

We had a probability and statistics course in our third year, where we revisited standard deviation. This time it was in the evening (around 6:00 p.m.), and our professor was someone who strongly believed in lighting a fire in students rather than spoon-feeding them information. He explained that standard deviation has a changing denominator because the denominator is the degrees of freedom of the data involved: N for a full population, but N - 1 for a sample, since estimating the mean from the sample itself uses up one degree of freedom.

A video from Khan Academy gives a similar explanation.

Now, degrees of freedom is a really hard concept to explain through theory alone; it is best grasped by example. That is probably why our high school teacher didn't dare mention it back then: she would have been bombarded with even more questions requiring a higher level of Mathematics (we were still high-schoolers).
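To see why, here is one tiny, made-up illustration: deviations from the sample mean always sum to zero, so once the mean has been computed, only N - 1 of the N deviations are free to vary; the last one is forced.

```python
data = [3, 7, 8, 10, 12]        # hypothetical sample
mean = sum(data) / len(data)    # 8.0

deviations = [x - mean for x in data]
print(sum(deviations))          # 0.0 -- true for any data set

# Given the mean and any 4 of the 5 deviations, the 5th is determined:
implied_last = -sum(deviations[:-1])
print(implied_last == deviations[-1])  # True
```

That one "used up" piece of information is the degree of freedom lost to estimating the mean, which is exactly what the N - 1 accounts for.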

Fast forward a few more years, and here I am reviewing the same thing for the third time, in an Econometrics course. This time the standard deviation is used in regression analysis, so the degrees of freedom for error estimation become the total sample size minus the number of explanatory variables, minus one more for the intercept. (In the course, the explanatory variables were grouped together with the '-1', making it N - (k + 1); I find that less intuitive to remember, so I memorized the degrees of freedom as N - k - 1 instead.)
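As a rough sketch of how that denominator shows up in practice (plain NumPy on synthetic data; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 2                       # sample size, number of explanatory variables

X = rng.normal(size=(n, k))
y = 1.0 + X @ np.array([2.0, -0.5]) + rng.normal(size=n)

# Add the intercept column, then fit OLS by least squares
X_design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)

residuals = y - X_design @ beta
dof = n - k - 1                    # equivalently n - (k + 1)

# Standard error of the regression (residual standard error)
s = np.sqrt(residuals @ residuals / dof)
print(s)
```

The k slope coefficients plus the intercept make k + 1 estimated parameters, which is why N - k - 1 and N - (k + 1) count the same thing.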

Perhaps a reader might also find this approach easier to commit to memory. Thank you for reading!