Plotting the data is a good way to get a feel for differences between groups, but statistics can provide us with two more pieces of information: a confidence interval for the difference between means and a measure of how likely a difference this large would be if only chance were at work (statistical significance).

Recoding String Variables

Before we can get started with our statistical analysis, we need to take care of something. You see, the creators of SPSS were evil troll-like creatures who delighted in inconveniencing beginning statistics students. They decreed that all independent variables must be coded as numbers, not string variables. "Hey!" you shout, "I’ve already got my data entered as strings!" I know, I know. The trolls were evil, remember. Luckily, there is a (somewhat) easy workaround. Go to Transform → Automatic Recode...:

In the dialog that opens, move "Condition" into the "Variable -> New Name" window, type "Condition2" in the "New Name" field, click "Add New Name", and check the box labeled "Treat blank string values as user-missing". That last step is handy in case your data includes any missing values.

You can check out the results in the Data Window, both in Data View and Variable View. In Variable View, click on the "Values" cell in the "Condition2" row (highlighted below):

You can see how the string variable has been recoded as 1s (for chocolate) and 2s (for no chocolate):
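By the way, if you ever want to sanity-check a recode like this outside SPSS, here is a minimal sketch in Python with pandas. The file name is made up, and the column names simply mirror this example; SPSS's Automatic Recode numbers the values in alphabetical order, which is what the sketch imitates.

```python
import pandas as pd

# Hypothetical file name -- substitute your own data file.
tips = pd.read_csv("chocolate_tips.csv")

# Imitate Automatic Recode: treat blank strings as missing, then
# number the remaining values 1, 2, ... in alphabetical order.
tips["Condition"] = tips["Condition"].replace("", pd.NA)
labels = sorted(tips["Condition"].dropna().unique())
codes = {label: i + 1 for i, label in enumerate(labels)}
tips["Condition2"] = tips["Condition"].map(codes)

print(codes)  # e.g. {'Chocolate': 1, 'No Chocolate': 2}
```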

Now that you have satisfied the capricious whims of the trolls, we can proceed to analyze the data...

Independent Samples t-test

To access the test, select Analyze → Compare Means → Independent Samples T Test. In the dialog that appears, put "Tip_Percentage" in the "Test Variable(s)" window and "Condition2" into the "Grouping Variable" window:

You see how, under "Grouping Variable", it says "Condition2(? ?)"? Trolls again. Although quite skillful in selling software licenses, the trolls were not skillful enough to write software that automatically detected that there were two levels of Condition2. No. Instead, you have to tell the software what those two levels are. And they have to be numbers. Nice. Thanks, trolls. Luckily, we just saw that the automatic recode numbered our two levels 1 and 2. So, click on "Define Groups", say that Group 1 is "1" and Group 2 is "2" (good thing we didn’t delegate that complex decision to a machine!), click "Continue", and press "OK" to run the analysis.
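If you like to double-check SPSS's arithmetic, the same analysis can be sketched in Python with SciPy, continuing from the pandas sketch above (where Condition2 was created). This is just an illustration, not part of the SPSS steps.

```python
from scipy import stats

# Continuing from the pandas sketch above: 1 = chocolate, 2 = no chocolate.
choc = tips.loc[tips["Condition2"] == 1, "Tip_Percentage"]
no_choc = tips.loc[tips["Condition2"] == 2, "Tip_Percentage"]

# equal_var=False requests Welch's t-test (the "Equal variances not assumed" row).
t, p = stats.ttest_ind(choc, no_choc, equal_var=False)
print(f"t = {t:.3f}, p = {p:.4f}")
```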

Output

First, you get descriptive statistics on each level of your independent variable:

The mean for the Chocolate condition is 17.74, and for No Chocolate it is 14.62. As we saw in the boxplot, the dispersion (measured by the standard deviation) is greater in the Chocolate condition than in the No Chocolate condition. Next is the output for the t-test itself:

Test for Equality of Variances

The first column is labeled "Levene’s test for equality of variances". This is not the t-test. It is a test of one of the assumptions of the t-test, namely that the variances of the two groups are equal. Variance is a measure of dispersion, equal to the square of the standard deviation. The more spread out the scores are, the larger the variance is. In this case, you can see that the F is 8.080 and "Sig.", which is a p-value, is .006. If p is below .05, you treat the variances as unequal. This is not a big surprise, because we saw in the boxplot and the descriptive statistics that there was more dispersion in the Chocolate group than in the No Chocolate group. Okay, the variances are unequal, so what do we do? Simple. See how there are two rows of results presented, the top labeled "Equal variances assumed" and the bottom labeled "Equal variances not assumed"? Well, if Levene’s test gives you a p-value less than .05, you use the bottom row because it does not assume that the variances are equal. In fact, I would recommend using the bottom row all the time, just to be safe. The only thing you need to say when you use that row is that you are reporting a Welch’s t-test.
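SciPy has a Levene test too, if you want a rough cross-check of this column (continuing with the choc and no_choc series from the sketch above). With center="mean" it uses deviations from the group means, which should line up with the statistic SPSS reports as F.

```python
from scipy import stats

# Levene's test for equality of variances, on deviations from the group means.
# A p-value below .05 is the signal to use the "Equal variances not assumed" row.
stat, p_levene = stats.levene(choc, no_choc, center="mean")
print(f"Levene statistic = {stat:.3f}, p = {p_levene:.3f}")
```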

Significance Testing

The test statistic, t, is reported under the column with that label: 6.207. To the right of it is another column you will need to report: "df", which stands for "degrees of freedom". For the bottom row, this number is not an integer: 74.838. To the right of that is "Sig. (2-tailed)", which is the p-value. It says ".000".

Common error: p < .001

SPSS sometimes reports p as ".000". However, what it really means by this is "p < .001". The p-value cannot actually be zero, because there is always some possibility of obtaining a given test statistic by chance. Whenever you see p reported by SPSS as .000, write down p < .001.

The p-value indicates that a t value more extreme than 6.21 occurs less than 1 out of a thousand times (.001 = 1/1000) under the null distribution (assuming no difference between the two groups). In other words, it is highly unlikely that we would see a difference this large if the two population means were actually equal.
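If you are curious where that number comes from, here is a small sketch that looks up the two-tailed tail area for the t and df reported above; anything this far out in the tails is what SPSS displays as ".000".

```python
from scipy import stats

t_value, df = 6.207, 74.838
# Two-tailed p-value: the area beyond |t| in both tails of the t distribution.
p = 2 * stats.t.sf(abs(t_value), df)
print(f"p = {p:.8f}")  # far smaller than .001, so report p < .001
```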

Under "Mean Difference", the t-test output adds a calculation of the difference between the means of the two groups: 3.12. You could calculate that yourself by comparing the means of the two groups: 17.74 vs. 14.62.

What is a Confidence Interval?

The last two columns present the "95% Confidence Interval of the Difference". A confidence interval is a way of representing the precision of an estimate. In this case, the estimate is of the difference between the means of the two groups: 3.12. The confidence interval lower bound is 2.12 and its upper bound is 4.12, so it is plus or minus 1.0. A 95% confidence interval means that if you repeated the study many times and computed an interval each time, about 95% of those intervals would contain the true population difference, and about 5% would miss it. Informally, this means you can be 95% confident that the population difference is within your confidence interval.

What is the population mean? It is the value you would get if you could sample every member of the population in that condition. An important assumption when estimating a confidence interval is that your sample is representative of your target population. In other words, confidence intervals assume that if you collected more data, the new data would look pretty much like the data you have already collected: it would have the same mean and standard deviation.

In this case, the confidence interval is 2.12 to 4.12. You can be 95% confident that the difference between the population means for the Chocolate and No Chocolate conditions is somewhere between 2.12 and 4.12. As before, the confidence interval allows you to know the precision of your estimate. Serving customers chocolate with their check will increase your tip percentage somewhere between 2.12 and 4.12 percent.
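If you would like to see how the Welch row's interval is built, here is a sketch that works from group summaries (means, standard deviations, and sample sizes). The group means below come from this example, but the standard deviations and sample sizes are placeholders, not the actual Strohmetz values.

```python
import math
from scipy import stats

def welch_ci(mean1, sd1, n1, mean2, sd2, n2, level=0.95):
    """Confidence interval for a difference between means, equal variances not assumed."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    # Welch-Satterthwaite degrees of freedom -- non-integer, as in the SPSS output.
    df = se**4 / ((sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
    diff = mean1 - mean2
    margin = stats.t.ppf(1 - (1 - level) / 2, df) * se
    return diff - margin, diff + margin

# Placeholder standard deviations and sample sizes, just to show the shape of the calculation.
print(welch_ci(17.74, 3.0, 46, 14.62, 1.5, 46))
```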

APA Style

To write up the results of this analysis, you could write:

Researchers hypothesized that giving customers chocolate with their bill would increase the tips that waiters received. Tip percentages for the two groups differed significantly according to Welch’s t-test, t(74.84) = 6.2, p < .001. On average, customers given chocolate tipped 17.7 percent, while customers not given chocolate tipped 14.6 percent. The 95% confidence interval for the effect of chocolate on tip percentage is between 2.1 and 4.1 percent. These results support the researchers’ hypothesis.

Common Error!

In the Strohmetz chocolate study, the dependent variable is in units of percentages because the researchers are studying tip percentages. However, most studies do not have dependent variables in units of percentages. The dependent variable could be in seconds, in which case you would not say that the confidence interval is between 2.1 and 4.1 percent; you would say it is between 2.1 and 4.1 seconds. Many students have copied the paragraph above into their stats assignments without changing "percent" to the appropriate units for their dependent variable.

Note the formatting for reporting the results of the t-test:

  1. Write in complete sentences.
  2. State the researchers’ hypothesis.
  3. Give the means of each group, using the units for the dependent variable (here, the units are percentages because the DV was recorded in terms of tip percentages).
  4. Give the degrees of freedom in parentheses after the letter t (which is italicized).
  5. Give the p-value.
  6. Report the confidence interval.
  7. Decimal places: Use a number of decimal places sufficient to distinguish two values but generally not more than 2 or 3.
  8. State whether the results supported, partially supported, or did not support the researchers’ hypothesis.

Negative t values and confidence intervals?

The sign of the t-value is determined by whether the first mean is larger than the second (in which case, t is positive) or whether the second mean is larger than the first (in which case, t is negative). For example, if, when you were telling SPSS what the two groups were for the t-test, you identified "Group 1" as "2" and "Group 2" as "1", you would be subtracting the mean of the chocolate condition from the mean of the no chocolate condition. Because chocolate had a higher mean, you would get a negative t score:

What if the confidence interval is negative? If the t value is negative, then the confidence interval will also contain negative numbers. So long as the confidence interval does not cross zero (does not contain one positive and one negative number), it will be much easier to interpret if you report the absolute values of its endpoints. So, if the confidence interval is between -4.1 and -2.1, you would rewrite it to say that it is between 2.1 and 4.1 (in ascending order).
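A quick way to convince yourself of the sign flip is to run the same test with the groups in both orders; the tiny made-up numbers below are just for illustration.

```python
from scipy import stats

choc = [21, 19, 18, 22, 20]      # made-up tip percentages, chocolate group
no_choc = [15, 14, 16, 13, 15]   # made-up tip percentages, no-chocolate group

t_ab, _ = stats.ttest_ind(choc, no_choc, equal_var=False)  # chocolate minus no chocolate
t_ba, _ = stats.ttest_ind(no_choc, choc, equal_var=False)  # no chocolate minus chocolate
print(t_ab, t_ba)  # same magnitude, opposite signs
```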

95% Confidence Intervals Crossing Zero

There is an interesting relationship between confidence intervals and the p-value for t-tests. If a 95% confidence interval of the difference between means crosses zero, then the t-test of that comparison will not be significant at p < .05. So, if you find that the confidence interval of the difference between means is -3.2 to 7.5 points, you know that the p-value of the t-test will not be significant at p < .05 because -3.2 to 7.5 crosses zero. If a confidence interval of a difference crosses zero, it means that the difference could be zero, and that means there could be no difference at all.
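Here is a small illustration of that link, assuming the interval and the p-value come from the same row of output. The summary numbers are made up, chosen to land close to the -3.2 to 7.5 example above: the interval straddles zero and, sure enough, the p-value comes out well above .05.

```python
from scipy import stats

# Made-up mean difference, standard error, and degrees of freedom.
diff, se, df = 2.15, 2.7, 60

t_crit = stats.t.ppf(0.975, df)                 # cutoff for a 95% interval
ci = (diff - t_crit * se, diff + t_crit * se)   # roughly -3.2 to 7.5
p = 2 * stats.t.sf(abs(diff / se), df)          # roughly .43

print(ci, p)  # the interval crosses zero and p is not below .05
```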

So, you now know how to use a boxplot to check the distribution of your data and how to use an independent t-test to test whether the means of two groups are significantly different. One step remains: How to create a plot that shows the means and confidence intervals. That is the topic of the last page.