# why use standard deviation instead of variance

Deviation just means how far from the normal. The Standard Deviation is a measure of how spread out numbers are. Its symbol is σ (the Greek letter sigma), and the formula is easy: it is the square root of the Variance. So now you ask, "What is the Variance?"

The Variance is the average of the squared differences from the Mean. An example makes this concrete. Suppose we measure the heights (at the shoulders) of five dogs: 600 mm, 470 mm, 170 mm, 430 mm and 300 mm, and we want to find the Mean, the Variance, and the Standard Deviation.

Mean = (600 + 470 + 170 + 430 + 300) / 5 = 1970 / 5 = 394

so the mean (average) height is 394 mm.

To calculate the Variance, take each difference from the Mean, square it, and then average the results:

Variance = (206² + 76² + (−224)² + 36² + (−94)²) / 5 = 108,520 / 5 = 21,704

So the Variance is 21,704, and the Standard Deviation is just the square root of the Variance: √21,704 ≈ 147 mm (to the nearest mm).

And the good thing about the Standard Deviation is that it is useful. Now we can show which heights are within one Standard Deviation (147 mm) of the Mean. Using the Standard Deviation, we have a standard way of knowing what is normal, and what is extra large or extra small. Rottweilers are tall dogs, and Dachshunds are a bit short (but don't tell them!).

But there is a small change with a Sample. Our example has been for a Population (the 5 dogs are the only dogs we are interested in). But if the data is a Sample (a selection taken from a bigger Population), then the calculation changes!
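The population calculation can be checked with a few lines of Python (standard library only; the heights are the five dog heights from the example):

```python
import math

heights = [600, 470, 170, 430, 300]  # dog heights in mm

mean = sum(heights) / len(heights)               # 394.0
squared_diffs = [(h - mean) ** 2 for h in heights]
variance = sum(squared_diffs) / len(heights)     # population variance: 21704.0
std_dev = math.sqrt(variance)                    # ~147.32 mm

print(mean, variance, round(std_dev, 2))
```

Dividing by the full count `len(heights)` is what makes this the *population* variance.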

All other calculations stay the same, including how we calculated the mean. Think of it as a correction for when your data is only a sample. The formulas look complicated, but the important change is to divide by N − 1 (instead of N) when calculating a Sample Variance.

When we measure the variability of a set of data, two closely linked statistics apply: the variance and the standard deviation. Both indicate how spread out the data values are, and both involve similar steps in their calculation. The major difference between them is that the standard deviation is the square root of the variance. To understand the difference between these two measures of statistical spread, one must first understand what each represents. The variance takes every data point in the set into account: it is calculated by averaging the squared deviation of each value from the mean. The standard deviation is likewise a measure of spread around the mean, but expressed in the original units of the data. In short, the variance is the average squared deviation of the values from the mean, and the standard deviation is the square root of the variance.
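Python's standard `statistics` module makes the population/sample distinction explicit: `pvariance`/`pstdev` divide by N, while `variance`/`stdev` divide by N − 1. Using the same five dog heights:

```python
import statistics

heights = [600, 470, 170, 430, 300]

pop_var = statistics.pvariance(heights)   # divides by N:     108520 / 5 = 21704
samp_var = statistics.variance(heights)   # divides by N - 1: 108520 / 4 = 27130

print(pop_var, samp_var)
print(statistics.pstdev(heights), statistics.stdev(heights))
```

Note how the sample estimate (27,130) comes out larger than the population value (21,704): dividing by N − 1 inflates the result slightly to correct for a sample's tendency to underestimate spread.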

To fully understand the difference between these statistics, we need to understand how the variance is calculated. The steps for calculating the sample variance are as follows:

1. Calculate the sample mean of the data.
2. Find the difference between the mean and each of the data values.
3. Square these differences.
4. Add the squared differences together.
5. Divide this sum by one less than the total number of data values.

The mean provides the center point (or average) of the data. The differences from the mean determine the deviations: data values that are far from the mean produce greater deviations than those that are close to it. The differences are squared because, if they were added without being squared, the positive and negative deviations would cancel and the sum would always be zero. Adding the squared deviations gives a measurement of total deviation, and dividing by one less than the sample size gives a sort of mean deviation.
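The five steps above translate directly into code. This is a sketch that follows the steps literally rather than an optimized implementation; `data` is any list of numbers:

```python
def sample_variance(data):
    """Compute the sample variance by following the five steps literally."""
    n = len(data)
    mean = sum(data) / n                  # step 1: sample mean
    diffs = [x - mean for x in data]      # step 2: differences from the mean
    squared = [d ** 2 for d in diffs]     # step 3: square the differences
    total = sum(squared)                  # step 4: add them together
    return total / (n - 1)               # step 5: divide by n - 1

print(sample_variance([600, 470, 170, 430, 300]))  # 27130.0
```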

This division keeps the measure from growing simply because many data points each contribute to the measurement of spread. As stated before, the standard deviation is then calculated by taking the square root of this result, which provides a standard measure of deviation regardless of the total number of data values.

When we consider the variance, we realize that it has one major drawback. Because the calculation adds together squared differences, the variance is measured in squared units. For example, if our sample data is measured in meters, then the variance is given in square meters. To standardize our measure of spread, we take the square root of the variance. This eliminates the problem of squared units and gives us a measure of spread with the same units as our original sample. Even so, many formulas in mathematical statistics take nicer forms when stated in terms of the variance instead of the standard deviation.
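To see the units problem concretely: if the dog heights are recorded in metres instead of millimetres, every value shrinks by a factor of 1,000 and the standard deviation shrinks by the same factor of 1,000, but the variance shrinks by a factor of 1,000² because it lives in squared units:

```python
import statistics

mm = [600, 470, 170, 430, 300]
m = [h / 1000 for h in mm]   # the same heights in metres

print(statistics.pvariance(mm), statistics.pvariance(m))  # 21704 vs ~0.021704 (mm² vs m²)
print(statistics.pstdev(mm), statistics.pstdev(m))        # ~147.3 vs ~0.1473  (mm vs m)
```

The standard deviation scales like the data itself, which is exactly why it is the more interpretable of the two.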

