Do you remember playing Rock, Paper, Scissors as a kid? As an adult? Against a computer? I’ve seen various lesson plans around RPS and the probability of winning, but none seemed to fit the level and sophistication I was looking to use with my AP Stats class. Well, the New York Times science section had a great article, Rock Paper Scissors: You vs. the Computer. If you haven’t played the computer, it’s a kick and I highly recommend it!
Then I remembered reading Bob Lochel’s post Rock-Paper-Scissors and Two-Way Tables. And found his second post, Chi-Square Tests: Rock Paper Scissors. I liked the idea, but the gold came in the response sections of both posts. Doug Page shared a worksheet he has developed for using the Rock, Paper, Scissors applet and also a Google Form for having students submit results. I stole blatantly!
Since my students use the iPad, they can’t access Flash animations, so I had to assign the play as homework. I hate to do this because so many kids just don’t follow up and complete the assignment – and we don’t get the data we could if we collected in the classroom. And the same happened with this class, but at least we had enough to continue with the introduction to the Chi-Square test of Homogeneity. I think next year, they will use their own data rather than the class data…or compare their results to the class results. Will have to think more about this next year.
I required the students to calculate the components by hand – they need to know where the components come from and how they are related to the final χ² statistic. Once we finished this problem, we tackled another problem using the calculator. Because of the hand-calculating of the components earlier, they then understood what the expected values matrix meant and the components matrix.
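For anyone who wants to see the arithmetic the students did by hand, here is a rough sketch in Python of the component calculation for a test of homogeneity. The counts are invented for illustration (not our class data), but the steps are the same: expected count = (row total × column total) / grand total, then each component is (O − E)²/E.

```python
# Hypothetical Rock/Paper/Scissors counts for two groups (not our class data)
observed = {
    "humans":   {"rock": 36, "paper": 30, "scissors": 34},
    "computer": {"rock": 25, "paper": 40, "scissors": 35},
}

throws = ["rock", "paper", "scissors"]
row_totals = {g: sum(counts.values()) for g, counts in observed.items()}
col_totals = {t: sum(observed[g][t] for g in observed) for t in throws}
grand_total = sum(row_totals.values())

chi_sq = 0.0
for g in observed:
    for t in throws:
        # Expected count under homogeneity: (row total x column total) / grand total
        expected = row_totals[g] * col_totals[t] / grand_total
        # Each cell's component: (observed - expected)^2 / expected
        component = (observed[g][t] - expected) ** 2 / expected
        chi_sq += component
        print(f"{g:8s} {t:8s} O={observed[g][t]:3d} E={expected:6.2f} comp={component:.3f}")

df = (len(observed) - 1) * (len(throws) - 1)  # (rows - 1)(cols - 1)
print(f"chi-square = {chi_sq:.3f} with df = {df}")
```

Seeing each component printed cell by cell mirrors the hand calculation, so the expected-values matrix and components matrix on the calculator stop being mysterious.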
How do you incorporate electronic experiences to develop engagement?
Last week I was perusing Twitter and saw Bob Lochel’s post about “clay dice are drying.” I wanted to know more, so Bob posted a blog entry Statistics Arts and Crafts about his Chi-Square GOF activity. We are starting GOF today in AP Stats, so no time to have my students make their own die this year (sigh, would have loved this experience for them…but there’s always next year).
I loved the idea, but how could I simulate the basic idea of testing whether a non-regular die is or isn’t fair?! I racked my brain when suddenly I remembered my recent purchase of these dice. I’m not sure the photo shows how irregular these dice are, but they really are!
Using Bob’s basic plan, I adjusted to fit these cute little puppies. Each group got to choose a differently shaped but irregular die.
Through the activity, kids learned about the parts of a Chi-Square GOF situation, about components and about the shape of a Chi-Square distribution. I think my statisticians were surprised that the oddly shaped die was actually fair. Love this aspect of the activity! What seems to be true isn’t always true…and statistics helps us look at our world through an unbiased lens.
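The GOF arithmetic behind the activity is even simpler than the homogeneity version. Here’s a quick sketch with made-up roll counts (not any group’s actual die) under the fair-die model, where every face is expected n/6 times:

```python
# Invented counts for faces 1-6 from 60 rolls of one irregular die
observed = [9, 12, 10, 8, 11, 10]
n = sum(observed)
expected = n / 6  # fair-die model: each face expected n/6 times

# One (O - E)^2 / E component per face
components = [(o - expected) ** 2 / expected for o in observed]
chi_sq = sum(components)
df = len(observed) - 1  # categories minus one

print("components:", [round(c, 2) for c in components])
print(f"chi-square = {chi_sq:.2f}, df = {df}")
```

A chi-square value this small against df = 5 is nowhere near significant, which is exactly the surprise my statisticians ran into: the oddly shaped die can still look fair.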
I do plan to get some air-drying clay and have my students make their own dice next year. I think having students try to make their die fair, only to find that it isn’t (or is), is much more motivating than using pre-made dice.
My AP Stats students were feeling a little rushed and uncomfortable with the whole process of writing up a two-sample hypothesis test for proportions vs. confidence intervals. I completely understand! There are so many nuances and details to keep track of, so much to write and they just needed a confidence booster.
I thought I’d try something a little different. I copied lots of prompts from various textbooks, but without the question. Then I gave these directions:
You and your partner will practice writing up a two-proportion inference procedure for both an inference test and a confidence interval. You have 45 minutes to complete the poster (20 minutes for each question and 5 minutes for organization). Each of you is to write in a different color and sign your name in that color. Read the situation carefully. Glue the situation to the top of your poster paper; the poster is to be portrait (not landscape) orientation. Write the title of the problem at the top of your poster paper.
- Write and answer an inference test question (Is there significant evidence that…) about this information. Select and state the α-level chosen. Describe the error you might be making with your decision.
- Write and answer a confidence interval question (Estimate the true difference…) about this information. Select and state the confidence level. Interpret the confidence level. Describe the additional information you gain from the confidence interval.
At the end of the time, we will do a gallery walk. You will be assessing two write-ups using the inference rubric. You will have 7 minutes to complete each assessment.
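For reference, the two computations the posters walk through can be sketched like this. The counts are hypothetical (not from any of the textbook prompts), and note the detail the write-ups hinge on: the test pools p-hat under H0, while the interval uses each sample’s own p-hat.

```python
import math

# Hypothetical two-proportion data: successes / sample size for each group
x1, n1 = 56, 100
x2, n2 = 42, 100

p1, p2 = x1 / n1, x2 / n2

# Significance test: pool the proportion under H0: p1 = p2
p_pool = (x1 + x2) / (n1 + n2)
se_test = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se_test
# Two-sided P-value from the standard Normal CDF (via erf)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Confidence interval: unpooled standard error
se_ci = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z_star = 1.96  # 95% confidence
lo, hi = (p1 - p2) - z_star * se_ci, (p1 - p2) + z_star * se_ci

print(f"z = {z:.3f}, P-value = {p_value:.4f}")
print(f"95% CI for p1 - p2: ({lo:.3f}, {hi:.3f})")
```

With these numbers the test is just barely significant at the 5% level and the interval just barely misses zero, which is a nice prompt for the "additional information you gain from the confidence interval" question.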
We had a block period (90 minutes) to work on the posters AND do the gallery walk. Time was a little tight, but the discussion was fabulous! I love getting to walk around and listen to peer discussions and I had a great opportunity to connect with my struggling students in order to clarify things based on their questions. I really liked how the activity made the students write the question – I hope this will help when they read unfamiliar questions as we prep for The Exam.
I found this cool toy at Christmas time at a World Market store and I just knew I had to use it in AP Stats. The elf Holiday Popper package claimed that it shoots up to 20 feet…so significance tests, here we come. I just love, Love, LOVE how this single activity wove in lots of review and deepening of new concepts simultaneously.
I showed the “statisticians” the packaging and asked how we could test this claim. We had to determine the null and alternative hypotheses…very interesting conversation, but good clarifications about the null needing the equal sign. We had to determine whether the conditions for inference could be met, which led to questions about the idea of randomness and independence; a great review of old concepts and vocabulary. For the Normal condition, we needed to decide what sample size we could/should use.
And this led to designing the data collection process. Once again, good questions about bias, controlling for lurking variables and sample size for t-methods. After we decided that we didn’t know the shape of the population distribution of shot-lengths, they immediately said it would be easy to collect 30 or more measurements, so we wouldn’t have to graph the data due to the CLT (secretly I was elated they remembered this requirement). But mean old me said we’d collect 20 measurements because I wanted to review the process (can you hear the collective groan?).
We then collected the data and analyzed it.
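If you want to see the shape of the analysis, here is a sketch with 20 invented shot distances (our class data isn’t reproduced here) testing the packaging claim as H0: μ = 20 feet:

```python
import statistics

# Invented popper shot distances in feet (20 measurements, per our design)
shots = [18.2, 19.5, 17.8, 20.1, 16.9, 18.7, 19.0, 17.5, 18.9, 19.8,
         18.0, 17.2, 19.3, 18.6, 20.4, 17.9, 18.4, 19.1, 18.8, 17.6]

mu0 = 20                      # hypothesized mean under H0
n = len(shots)
xbar = statistics.mean(shots)
s = statistics.stdev(shots)   # sample standard deviation

# One-sample t statistic: (x-bar - mu0) / (s / sqrt(n))
t = (xbar - mu0) / (s / n ** 0.5)
print(f"n = {n}, x-bar = {xbar:.3f}, s = {s:.2f}, t = {t:.2f}")
# With df = 19, compare t to the table value (e.g. -1.729 for a one-sided
# 5% test); a t far below that is strong evidence against the 20-foot claim.
```

Because n = 20 is under 30, this is exactly the case where we’d also have to graph the sample and check for strong skew or outliers before trusting the t procedure.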
Once we finished talking through the process and working through the problem, I had the students submit their work via Google Docs, which gives me an opportunity to look at what area(s) are still weak. I noticed immediately that writing conclusions is an area needing additional practice. I’ll come up with something for tomorrow.
What I particularly like about this activity is that it’s not the typical “textbook” problem; the students actually had to create the problem, determine the question, collect the data and analyze it. What is an activity you used that gave so much more to the learning and retaining process than you anticipated?
We spent some time today talking about decision errors in AP Statistics. It was clear from the opener that my stats apprentices were not so clear about what Type I and Type II errors are and how to state them. Although I knew in my subconscious that the probabilities of these errors are conditional probabilities, I’ve never used this approach in developing the conceptual understanding of the errors with my kids. I did today and I’m really pleased with the results (as far as I know!).
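The conditional-probability view is easy to back up with a quick simulation: P(Type I error) = P(reject H0 | H0 is true), so if we generate samples from a world where H0 really is true, the rejection rate should land near α. The setup below is a hypothetical one-proportion z-test (not the scenarios we used in class):

```python
import math
import random

# Hypothetical setup: test H0: p = 0.5 at alpha = 0.05 with samples of n = 100
random.seed(1)
p0, n, z_star = 0.5, 100, 1.96
trials = 10_000
rejections = 0

for _ in range(trials):
    # Condition on H0 being true: generate the sample with p = p0
    x = sum(random.random() < p0 for _ in range(n))
    p_hat = x / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    if abs(z) > z_star:  # two-sided rejection at the 5% level
        rejections += 1

rate = rejections / trials
print(f"Estimated P(reject H0 | H0 true) = {rate:.3f}")
```

The estimate comes out near 0.05 (a bit above, thanks to the discreteness of counts at n = 100), which makes the sentence "α is the probability of a Type I error, given that H0 is true" feel concrete instead of abstract.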
We also talked about the seriousness of the errors, so I had students look at two scenarios and the related error statements. Then they decided which they thought was more serious by voting with their bodies. They moved to one side of the room or the other depending on which they thought was more serious; then they debated their choice using correct vocabulary.
In addition, we wrote up our first significance test for one proportion in the 4-step way. Typical presentation, but I did spend time developing the difference in the Normal condition and the standard deviation calculation between confidence intervals and significance tests. It was a great way to reinforce the difference between confidence intervals (which give plausible values of the parameter) and significance tests (which use the claimed parameter). Through the discussion we determined which value to use in the Normal condition and the standard deviation (p0 vs. p-hat): p-hat for confidence intervals, since it is our best unbiased estimator of the true population value, and p0 for significance tests, since it is the mean of the sampling distribution of all p-hats.
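Side by side, the two formulas we contrasted look like this (with hypothetical numbers, H0: p = 0.5 and an observed p-hat of 0.6 from n = 150):

```python
import math

# Hypothetical numbers for illustration
p0, p_hat, n = 0.5, 0.6, 150

# Significance test: use p0, the mean of the sampling distribution under H0
sd_test = math.sqrt(p0 * (1 - p0) / n)

# Confidence interval: use p-hat, our best unbiased estimate of p
se_ci = math.sqrt(p_hat * (1 - p_hat) / n)

print(f"test SD (uses p0)    = {sd_test:.4f}")
print(f"CI SE   (uses p-hat) = {se_ci:.4f}")
```

The numbers are close but not equal, which is exactly the point of the discussion: the formulas look the same, but what you plug in depends on whether you're assuming the claimed parameter or estimating the true one.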
We finished the problem by determining which error we might have made. Also talked about interpreting possible errors BEFORE running the significance test vs. the error we might actually have made AFTER we run the test. Great conceptual discussions!
The activity we did today in AP Stats was an Introduction to the Logic of Hypothesis Testing using Skittles. I wrote this activity after being inspired by Adam Pethan’s video Hypothesis Tests: Introduction. He had a wonderfully simple way of using a real life scenario (that used food) and gave me an awesome activity that connected sampling distributions to this new idea.
Because I wanted (needed) the sample size to be controlled and the sample proportion to be the same for all students, I used Adam’s random sample proportion of yellow Skittles as the basis for building the logic of the hypothesis test. They needed to draw the population distribution (labeled correctly) and write both hypotheses correctly with correct symbols (this is the FIRST time they have ever seen a Null or Alternative hypothesis). They had to show me their answers on these first questions before they could get Skittles. It gave me a chance to check every single hypothesis along with symbols and notation…great formative assessment.
Once the students wrote the two hypotheses correctly, along with the hypothesized population distribution, they could get a mini-cup of Skittles to munch on while they continued with the activity.
During our study of sampling distributions, I emphasized ad nauseam what the probability meant and had the kids write an interpretation of the probability they calculated in their own words, based on comparing the mean of the sampling distribution AND the sample statistic. In particular, the focus on the idea of the sample being “unusual” in our sampling distribution, as reflected by the probability we calculated, dovetailed easily into today’s lesson. They determined what their level of tolerance for an unusual sample value would be based on the probability (area). This will lead in nicely to alpha levels later in the unit.
Then they calculated the probability using the sample value and the constructed sampling distribution (of course they checked the conditions to build the distribution!!) But looking over their submissions, we still have to work on testing the Normal condition…but we have months to do this, right? Formative assessment is so great for highlighting misconceptions and missing details, isn’t it? I also gave a silent yelp of joy as my students talked, discussed, argued, clarified and focused on understanding the big ideas.
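The probability calculation at the heart of the activity looks like this in Python. The numbers here are hypothetical (I’m not reproducing Adam’s values): a claimed proportion of yellow of p0 = 0.2 and a shared sample of n = 60 with 18 yellow.

```python
import math

# Hypothetical Skittles numbers: claimed p0, sample size, observed yellows
p0, n, x = 0.2, 60, 18
p_hat = x / n

# Normal condition: np0 and n(1-p0) both at least 10
assert n * p0 >= 10 and n * (1 - p0) >= 10

sd = math.sqrt(p0 * (1 - p0) / n)  # SD of the sampling distribution of p-hat
z = (p_hat - p0) / sd
# P(p-hat >= observed value) if the claim is true, via the Normal CDF (erf)
prob = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"p-hat = {p_hat:.3f}, z = {z:.2f}, P(p-hat >= {p_hat:.2f}) = {prob:.4f}")
```

A probability this small is precisely the "is our sample unusual?" moment: if the claim were true, a sample proportion this far above p0 would happen only a few times in a hundred.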
The last part of the activity reviewed confidence intervals again since the kiddos are having their test tomorrow on this topic. Very few questions to me, but lots of intense discussion about how/why to approach these questions. I would say that 7 of the 8 math practices were in evidence today: sense-making, reasoning, argument, modeling, using tools, precision of language and calculations, and attending to the inherent structure of the problem.
Finally, they submitted their results electronically in Schoology so I can look at the results and determine the next steps. All in all, I was really pleased with the success of this first-time activity. I did work out the problems ahead of time, but using an activity with students is always eye-opening. Some tweaking is needed, but not as much as some of my first-time activities need. How do you vet your activities (make a careful and critical examination of them) before you use them for the first time?