# Blog Archives

## Day 109: I Love Lucy Visits Type I/Type II Errors

We had fun today with I Love Lucy and errors in AP Stats.  As an opening activity, we watched this snippet of I Love Lucy in the chocolate factory (thanks to a suggestion from one of our administrators who used to teach AP Stats):

Then I asked the following questions: What type of error did the supervisor make?  What are the null and alternative hypotheses under which the supervisor was working?  Be sure to define your parameter of interest.

Lucy is always funny, and the clip lightened the mood and got the kiddos talking about the errors and deciding what the null and alternative hypotheses might be in a real situation.  Some students used the average speed of the belt and others used the proportion of candies that were wrapped to quantify what they saw and what the supervisor might have been using as her measure.  Interestingly, under the first framing a Type I error was committed, and under the other, a Type II error was committed.

Because of this quick video, a student asked if we can use either a mean or a proportion if we wanted to in a situation.  This was a great opportunity to revisit the Organizing Data component of the AP curriculum, specifically that one item/subject can have many measurements.

I also got a chance to revisit my Inference Procedure Rubric.  My students wrote up their second Significance Test today for the question:

According to an article in the San Gabriel Valley Tribune (2-13-03), “Most people are kissing the ‘right way’.”  That is, according to the study, the majority of couples tilt their heads to the right when kissing.  In the study, a researcher observed a random sample of 124 couples kissing in various public places and found that 83/124 (66.9%) of the couples tilted to the right.  Is this convincing evidence that couples really do prefer to kiss the right way?  Explain.
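For anyone who wants to check the numbers, here's a quick sketch of the one-proportion z-test in Python (my own sketch, not the students' write-up), with H0: p = 0.5 vs. Ha: p > 0.5, where p is the true proportion of couples who tilt right:

```python
import math

# One-proportion z-test for the kissing study.
# H0: p = 0.5   Ha: p > 0.5
n = 124
successes = 83
p_hat = successes / n          # sample proportion, about 0.669
p0 = 0.5                       # claimed value under H0

# The test statistic uses p0 in the standard deviation,
# because H0 is assumed true when we run the test.
se = math.sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se

# Upper-tail p-value from the standard normal distribution.
p_value = 0.5 * math.erfc(z / math.sqrt(2))

print(f"z = {z:.2f}, p-value = {p_value:.5f}")
```

The z-statistic comes out around 3.77, so the p-value is far below any common significance level and we reject H0.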

After they finished, I posted the rubric on the board and then toggled between the rubric and the sample answer.  I had them assess their writing at each step.  It also gave me the opportunity to reiterate typical errors I had already seen them make or errors I had seen previous students make.  Great discussions!  I think I will add the question: What type of error might the researcher have made?

Here are some student responses.  In reflecting on their successes, the typical misunderstanding seems to be about when to reject vs. fail to reject based on the p-value.  I’m really happy that so few are still thinking a small p-value means the null hypothesis is supported.  I’m also really pleased with the number of students who fixed their errors as we went along (75% of them) and wrote notes to themselves about misconceptions.

In particular, it’s clear we need to continue to practice writing conclusion statements.

As students were walking out the door, I asked how they were feeling about their confidence in writing up Significance Tests.  Most said they felt much more confident overall, and they were able to articulate clearly the areas they need to practice (mostly writing up the conclusions).

## Day 108: Errors in Decision-Making

We spent some time today talking about decision errors in AP Statistics. It was clear from the opener that my stats apprentices were not so clear about what Type I and Type II errors are and how to state them.  Although I knew in my subconscious that the probabilities of these errors are conditional probabilities, I’ve never used this approach in developing the conceptual understanding of the errors with my kids.  I did today and I’m really pleased with the results (as far as I know!)
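The conditional-probability framing can be made concrete with a short simulation (my own illustration, not a class activity): generate many samples under a true H0 and count how often we reject.  The long-run rejection rate is P(reject H0 | H0 true), which is what α means; here it lands a bit under 0.05 because the binomial count is discrete.

```python
import math
import random

random.seed(1)
n, p_true = 100, 0.5        # H0: p = 0.5 is actually TRUE in this simulation
z_crit = 1.645              # one-sided cutoff for alpha = 0.05
se = math.sqrt(p_true * (1 - p_true) / n)

rejections = 0
trials = 20_000
for _ in range(trials):
    # Draw one sample of n yes/no responses with the H0 proportion.
    p_hat = sum(random.random() < p_true for _ in range(n)) / n
    z = (p_hat - p_true) / se
    if z > z_crit:          # a rejection here is, by construction, a Type I error
        rejections += 1

print(f"Simulated P(reject H0 | H0 true) = {rejections / trials:.3f}")
```

Every rejection in this simulation is a Type I error, since H0 is true by construction; the simulated rate is the conditional probability itself.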

We also talked about the seriousness of the errors, so I had students look at two scenarios and the related error statements.  Then they decided which they thought was more serious by voting with their bodies.  They moved to one side of the room or the other depending on which they thought was more serious; then they debated their choice using correct vocabulary.

In addition, we wrote up our first significance test for one proportion in the 4-step way.  Typical presentation, but I did spend time developing the difference in the Normal condition and the standard deviation calculation between confidence intervals and significance tests.  It was a great way to reinforce the difference between confidence intervals (which give plausible values of the parameter) and significance tests (which use the claimed parameter).  Through the discussion we determined which value to use in the Normal condition and the standard deviation (p0 vs. p-hat): p-hat for confidence intervals, since it is our best unbiased estimator of the true population value, and p0 for significance tests, since it is the mean of the sampling distribution of all the p-hats.
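To make the p0-vs-p-hat distinction concrete, here's a tiny sketch (reusing the kissing-study numbers from Day 109 purely as an illustration) computing the two standard deviations side by side:

```python
import math

n = 124
p_hat = 83 / n     # sample result
p0 = 0.5           # claimed parameter under H0

# Confidence interval: we have no claimed parameter, so we plug in
# p-hat, our best unbiased estimate of the true proportion.
se_ci = math.sqrt(p_hat * (1 - p_hat) / n)

# Significance test: H0 is assumed true, so the sampling distribution
# of p-hat is centered at p0 and we use p0 in the formula.
se_test = math.sqrt(p0 * (1 - p0) / n)

print(f"SE for CI (uses p-hat): {se_ci:.4f}")
print(f"SE for test (uses p0):  {se_test:.4f}")
```

The two values are close here but not equal, and the reason for each choice is exactly the conceptual point above: the interval estimates an unknown parameter, while the test conditions on a claimed one.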

We finished the problem by determining which error we might have made.  We also talked about interpreting possible errors BEFORE running the significance test vs. the error we might actually have made AFTER we run the test.  Great conceptual discussions!