So, yesterday, most of the groups got the statements sorted. Today they will create a poster of the incorrect statements and write underneath each one what the error is. I found a great explanation by Bock about the types of interpretation errors, with examples and explanations, and my students used this document to guide them. They eventually were able not only to identify correct and incorrect statements but also to articulate what made the interpretations incorrect. YAY!
I continue to believe that one of the most powerful teaching strategies I employ with my AP Statistics class is the FRAPPY. Students work on a released AP question related to the topic we’re studying. After the allotted time (12-15 minutes of quietly writing out their response), they talk with their group and adjust their answers using a green pen. Then they look at sample student responses together and assess their quality; if there is anything my students want to borrow for their own responses, they again add it in green pen. Finally, they use the rubric to assess their final answer.
And one of the most compelling aspects is the student reflection I added to the process. Although reflection was discussed as a good practice, I felt I needed to be very purposeful about it with my students. For this Frappy, the students did the 2001 Problem 3: Radio Giveaway:
Every Monday a local radio station gives coupons away to 50 people who correctly answer a question about a news fact from the previous day’s newspaper. The coupons given away are numbered from 1 to 50, with the first person receiving coupon 1, the second person receiving coupon 2, and so on, until all 50 coupons are given away. On the following Saturday, the radio station randomly draws numbers from 1 to 50 and awards cash prizes to the holders of the coupons with these numbers. Numbers continue to be drawn without replacement until the total amount awarded first equals or exceeds $300. If selected, coupons 1 through 5 each have a cash value of $200, coupons 6 through 20 each have a cash value of $100, and coupons 21 through 50 each have a cash value of $50. (a) Explain how you would conduct a simulation using the random number table provided below to estimate the distribution of the number of prize winners each week.
Part (b), which asks students to perform the simulation 3 times, is usually straightforward for the kids. So the big focus is on explaining how to run the simulation. Here are some examples of my students’ reflections on their performance and lessons learned:
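For anyone curious about the mechanics, here’s a quick sketch of the simulation in Python rather than with the random number table (the function name and the 10,000 repetitions are my own choices, not part of the AP problem):

```python
import random

def winners_one_week():
    """Simulate one Saturday drawing: draw coupons 1-50 without
    replacement until the total awarded first reaches or passes $300."""
    # Coupons 1-5 are worth $200, 6-20 are worth $100, 21-50 are worth $50
    values = {n: 200 if n <= 5 else 100 if n <= 20 else 50 for n in range(1, 51)}
    coupons = list(range(1, 51))
    random.shuffle(coupons)          # draw order = a random permutation
    total, count = 0, 0
    for c in coupons:
        total += values[c]
        count += 1
        if total >= 300:
            break
    return count

# Estimate the distribution of the number of winners over many weeks
random.seed(1)
results = [winners_one_week() for _ in range(10000)]
dist = {k: results.count(k) / len(results) for k in sorted(set(results))}
print(dist)
```

A nice check on student answers: the count can never be 1 (the biggest coupon is $200) and never more than 6 (six $50 coupons already total $300), so every simulated week lands between 2 and 6 winners.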
“I understood the concept…I should think about the entire process when writing the response” This student identified the part of the process that was missing in the final response.
“I need to remember the smaller details that tend to fit into every type of this problem” I just love the generalization the student is trying to make here!! And the student provides specific ways to do this.
“I did not specify to count the # of winners, which is the point of the question, even though I did it….Next time, I will …actually address the question.” This student recognizes the importance of clearly writing each step and reading the directions.
If I were to describe these errors and caution students against them, it would fall on deaf ears. How beautiful and powerful for students to self-critique!!
Today I introduced the FRAPPY process to my new AP Statistics students. The term FRAPPY is an acronym for Free Response AP Problem – Yay! and was initially introduced to the AP community by Jason Molesky on his website StatMonkey.
Since this strategy has been a major part of my APS course, the writing and resilience of my students have massively improved. Check it out when you have a chance!
So yesterday’s Professional Development opportunity was AWESOME! Today, back to reality. My AP Stats kiddos have been left on their own a great deal these past 6 class days, with me gone for 4 of them. But they are troopers. They were left with the task to learn about and understand confidence interval procedures for two means. Today I wanted to summarize with them what they know, what they need to fine-tune and what eluded them. So I wrote on the white board Step 1, Step 2, Step 3, and Step 4. And they told me what needed to be filled in using the Additional Practice problem from their notes.
They actually articulated well the key and crucial parts of the process. We had a little calculator issue as you can see, which led to using the calculator more efficiently and effectively. We also revisited the distinction between the standard error and the standard deviation, along with finding the critical t-value.
We’ll see how the FRAPPYs go next week.
After yesterday’s Holiday Popper activity, I realized my chitlins needed some more practice in writing conclusions to significance tests in AP Stats. I wanted an energizing way to make writing a little more fun. With a quick Google search, I found this cool formative assessment activity on Mr. Orr is a Geek.com called Commit and Crumble. Jon Orr shares how he used the technique with his Algebra 1 class and his instructions are easy to follow.
Since students don’t put their name on the paper, this is an anonymous way to practice, to risk-take, and to get feedback about their thinking around a problem or question. After my students wrote for 10 minutes, we then proceeded to the Commit and Crumble activity. Here are the directions I gave them:
After they did one round, I had the kids re-toss and re-select another paper, then use a red pen to add additional comments. I also asked them to discuss the paper they had with their Clock Buddy (so they could see an additional paper). Using their iPads, I suggested that they take a pic of any write-ups that were especially good or helpful to them and attach it to their Opener/Exit slips. This way they have a kid-friendly write-up that can be referred to in subsequent situations.
We debriefed the question at the end, highlighting common errors while also discussing the importance of reading the prompt and doing what it says, not what you think it says. The question about significant evidence asked them to write a conclusion statement. However, many proceeded to do the 4-step process. They even calculated the t-value and the p-value when those were already in the given computer printout. We learned some valuable tips for exam taking today! Students said the writing and looking at other papers was helpful to them. We’ll see tomorrow as I will ask them to write a conclusion to submit to me.
PS. Added 3/6
Here is evidence of some students’ choice of a good answer! And they are different papers. Yay!!
I have a stack of crumpled papers and am thinking of having students find their paper (if they want) to see the peer feedback. Here is another resource for the Commit_and_Toss strategy with additional variations. What are some energizing formative assessment ideas you’ve used with success?
I found this cool toy at Christmas time at a World Market store and I just knew I had to use it in AP Stats. The elf Holiday Popper package claimed that it shoots up to 20 feet…so significance tests, here we come. I just love, Love, LOVE how this single activity wove in lots of review and deepening of new concepts simultaneously.
I showed the “statisticians” the packaging and asked how we could test this claim. We had to determine the null and alternative hypotheses…very interesting conversation, but good clarifications about the null needing the equal sign. We had to determine whether the conditions for inference could be met, which led to questions about the idea of randomness and independence; a great review of old concepts and vocabulary. For the Normal condition, we needed to decide what sample size we could/should use.
And this led to designing the data collection process. Once again, good questions about bias, controlling for lurking variables and sample size for t-methods. After we decided that we didn’t know the shape of the population distribution of shot-lengths, they immediately said it would be easy to collect 30 or more measurements, so we wouldn’t have to graph the data due to the CLT (secretly I was elated they remembered this requirement). But mean old me said we’d collect 20 measurements because I wanted to review the process (can you hear the collective groan?).
We then collected the data and analyzed it.
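Our class did the analysis by hand and on the calculator, but the same one-sample t-test can be sketched in a few lines of Python. The shot lengths below are hypothetical placeholders, not our actual class data:

```python
import math
import statistics

# HYPOTHETICAL shot lengths in feet (the real class data isn't in this post)
shots = [18.5, 19.2, 17.8, 20.1, 18.9, 19.5, 17.2, 18.0,
         19.8, 18.4, 17.9, 19.0, 18.7, 20.3, 17.5, 18.8,
         19.1, 18.2, 17.7, 19.4]

n = len(shots)                       # 20 measurements, as in class
xbar = statistics.mean(shots)
s = statistics.stdev(shots)          # sample standard deviation
se = s / math.sqrt(n)                # standard error of the mean

# H0: mu = 20 ft (the package claim)  vs  Ha: mu < 20 ft
t = (xbar - 20) / se
print(f"x-bar = {xbar:.2f}, s = {s:.2f}, t = {t:.2f}")
# Compare t to the critical value -1.729 (df = 19, alpha = 0.05, one-sided),
# or get a p-value from a t-distribution table or calculator
```

With only 20 measurements, of course, the class also had to graph the sample and check for strong skew or outliers before trusting the t-procedure, which is exactly the review I was after.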
Once we finished talking through the process and working through the problem, I had the students submit their work via a Google Doc, which gives me an opportunity to look at what area(s) are still weak. I noticed immediately that writing conclusions is an area needing additional practice. I’ll come up with something for tomorrow.
What I particularly like about this activity is that it’s not the typical “textbook” problem; the students actually had to create the problem, determine the question, collect the data and analyze it. What is an activity you used that gave so much more to the learning and retaining process than you anticipated?
We had fun with I Love Lucy and errors in AP Stats today. As an opening activity, we watched this snippet of I Love Lucy in the Chocolate Factory (thanks to a suggestion from one of our administrators who used to teach AP Stats):
Then I asked the following questions: What type of error did the supervisor make? What are the null and alternative hypotheses under which the supervisor was working? Be sure to define your parameter of interest.
Lucy is always funny and it lightened up the mood, got the kiddos talking about the errors and deciding what the null and alternative hypotheses might be in a real situation. Some students used the average speed of the belt and others used the proportion of candies that were wrapped to quantify what they saw and what the supervisor might have been using as her measure. Interestingly, in the first situation a Type I error was committed, and in the other, a Type II error.
Because of this quick video, a student asked if we can use either a mean or a proportion if we wanted to in a situation. This was a great opportunity to revisit the Organizing Data component of the AP curriculum, specifically that one item/subject can have many measurements.
I also got a chance to revisit my Inference Procedure Rubric. My students wrote up their second Significance Test today for the question:
According to an article in the San Gabriel Valley Tribune (2-13-03), “Most people are kissing the ‘right way’.” That is, according to the study, the majority of couples tilt their heads to the right when kissing. In the study, a researcher observed a random sample of 124 couples kissing in various public places and found that 83/124 (66.9%) of the couples tilted to the right. Is this convincing evidence that couples really do prefer to kiss the right way? Explain.
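Just to show the arithmetic behind the write-up, here’s a rough one-proportion z-test sketch in Python (in class we did this by hand and with the calculator, of course):

```python
import math

# Kissing study: 83 of 124 couples tilted right. H0: p = 0.5, Ha: p > 0.5
n, successes = 124, 83
p0 = 0.5
p_hat = successes / n                       # about 0.669

# Normal condition uses p0 (the claimed value), not p-hat, in a test:
# n*p0 and n*(1-p0) should both be at least 10
assert n * p0 >= 10 and n * (1 - p0) >= 10

# The standard deviation under H0 also uses p0 -- unlike a confidence
# interval, whose standard error is built from p-hat
sd = math.sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / sd

# Upper-tail Normal probability via the error function
p_value = 0.5 * (1 - math.erf(z / math.sqrt(2)))
print(f"p-hat = {p_hat:.3f}, z = {z:.2f}, p-value = {p_value:.6f}")
```

The tiny p-value gives convincing evidence against H0, which is exactly the kind of conclusion statement my students were practicing.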
After they finished, I posted the rubric on the board and then toggled between the rubric and the sample answer. I had them assess their writing at each step. It also gave me the opportunity to reiterate typical errors I had already seen them make or errors I had seen previous students make. Great discussions! I think I will add the question: What type of error might the researcher have made?
Here are some student responses. In reflecting on their successes, the typical misunderstanding seems to be about when to reject vs. fail to reject based on the p-value. I’m really happy that so few are still thinking a small p-value means the null hypothesis is supported. I also am really pleased with the number of students who fixed their errors as we went along (75% of them) and wrote notes to themselves about misconceptions.
In particular, it’s clear we need to continue to practice writing conclusion statements.
As students were walking out the door, I asked how they were feeling about their confidence in writing up Significance Tests. Most said they felt much more confident overall, and they were able to articulate clearly the areas they need to practice (mostly writing up the conclusions).
We spent some time today talking about decision errors in AP Statistics. It was clear from the opener that my stats apprentices were not so clear about what Type I and Type II errors are and how to state them. Although I knew in my subconscious that the probabilities of these errors are conditional probabilities, I’ve never used this approach in developing the conceptual understanding of the errors with my kids. I did today and I’m really pleased with the results (as far as I know!)
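If you want to see the conditional-probability idea in action, here’s a little simulation sketch in Python: sample repeatedly from a world where H0 really is true, and the rejection rate should land near alpha. The sample size, alpha, and trial count are arbitrary choices of mine:

```python
import math
import random

# P(Type I error) = P(reject H0 | H0 is true) -- a conditional probability.
# So: simulate many samples drawn from a world where H0 holds, and count
# how often the test (wrongly) rejects.
random.seed(7)
n, p0, alpha = 100, 0.5, 0.05
z_star = 1.645                      # one-sided critical value for alpha = 0.05
sd = math.sqrt(p0 * (1 - p0) / n)   # standard deviation under H0

rejections = 0
trials = 20000
for _ in range(trials):
    successes = sum(random.random() < p0 for _ in range(n))
    z = (successes / n - p0) / sd
    if z > z_star:
        rejections += 1

print(f"Simulated P(Type I) = {rejections / trials:.3f}  (should be near {alpha})")
```

The simulated rate comes out a little under 0.05 here because the count of successes is discrete, which itself makes for a nice classroom conversation.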
We also talked about the seriousness of the errors, so I had students look at two scenarios and the related error statements. Then they decided which they thought was more serious by voting with their bodies. They moved to one side of the room or the other depending on which they thought was more serious; then they debated their choice using correct vocabulary.
In addition, we wrote up our first significance test for one proportion in the 4-step way. Typical presentation, but I did spend time developing the difference in the Normal condition and the standard deviation calculation between confidence intervals and significance tests. It was a great way to reinforce the difference between confidence intervals (which give plausible values of the parameter) and significance tests (which use the claimed parameter). Through the discussion we determined which value to use in the Normal condition and the standard deviation (p0 vs. p-hat): p-hat for confidence intervals, since it is our best unbiased estimator of the true population value, and p0 for significance tests, since it is the mean of the sampling distribution of all the p-hats.
We finished the problem by determining which error we might have made. We also talked about interpreting possible errors BEFORE running the significance test vs. the error we might actually have made AFTER running it. Great conceptual discussions!
It’s hard to believe that I’ve stuck with this challenge for 100 days! Amazing and gratifying!
Today I had one of my formal observations in AP Statistics. And it is FRAPPY day. The evaluating administrator is a former LA teacher, so the fact that a math class is actually doing technical writing (as per the Common Core English Standards) was well received. Prior to today’s activity, I shared the FRAPPY process, the intent of each part, and that my students were “well-trained” in the process.
And my students were exemplary! Everything I shared would happen did happen. The students were focused during the writing phase. They collaborated about their answers and adjusted with green pen ONLY after they discussed changes. They thoughtfully discussed the two student answers and wrote suggestions or questions alongside the responses. They used the rubric to assess and score their own writing in red while asking clarifying questions about the rubric and the statistical processes and communication. They wrote thoughtful reflections about their level of understanding while also including suggestions to improve their understanding and communication. Here are some examples of their reflections:
During block today, we started writing inference analyses using the 4-step process in AP Statistics. I guided the kiddos through a problem, talking through the requirements in each step and what I would consider a complete analysis. Yes, I truly believe guided instruction is necessary at times. I then shared the Inference Procedure Response rubric (I’ve been working on this periodically for my professional evaluation), showing how the 4-step process matches up with the required AP curriculum. When students didn’t have any more questions, I asked if they were ready to tackle a problem on their own and they enthusiastically said “YES!” So I had them write up the problem:
In her first grade social studies class, Jordan learned that 70% of Earth’s surface was covered in water. She wondered if this was really true and asked her dad for help. To investigate, he tossed an inflatable globe to her 50 times, being careful to spin the globe each time. When she caught it, he recorded where her right index finger was pointing. In 50 tosses, her finger was pointing at the water 33 times. Should Jordan believe her teacher? Construct and interpret a 95% confidence interval to support your answer.
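For reference, the interval itself can be sketched in a few lines of Python (the class, of course, did this by hand and on their calculators):

```python
import math

# Globe toss: 33 "water" catches in 50 tosses; 95% CI for the true proportion
n, successes = 50, 33
p_hat = successes / n                       # 0.66
z_star = 1.96                               # critical value for 95% confidence

# Confidence intervals use p-hat in the standard error
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - z_star * se, p_hat + z_star * se
print(f"95% CI: ({lower:.3f}, {upper:.3f})")
print("0.70 is in the interval:", lower <= 0.70 <= upper)
```

Since 0.70 falls inside the interval, Jordan’s data is consistent with her teacher’s claim, which is the interpretation I was hoping my students would write.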
My students had 15 minutes to work individually. For some, this was a little uncomfortable since I encourage so much collaboration in class. Once they were finished, they exchanged papers with their 12 o’clock Clock Buddy. I handed out the rubric and had them go through their partner’s paper, assessing each part. I then collected and looked the papers over to check how student-friendly the rubric was, to see if there were any adjustments to make based on its use, and to look at each paper individually.
The process was similar to the FRAPPY process shared by Jason Molesky, a process I use all year to develop good communication and familiarity with the demands of the AP exam. Unlike the FRAPPY process, the rubric process today gave students immediate feedback from peers on their own writing. I wanted my statisticians to see other students’ write-ups to encourage organized presentation of their ideas, which I think happened. I also wanted them to assess a peer’s write-up using the rubric in hopes of reinforcing the components of a well-written response. It was a wonderful process and I know I’ll use it again.