Today we begin confidence intervals for means in earnest in AP Statistics. Yesterday after the activity, the students explored what a t-distribution is via this GeoGebra document and looked at how to determine a critical t* value from a table as well as from the Nspire. (Sidenote: although we do find values using tables occasionally, I emphasize using the calculator to find these values as well as the area probabilities…it’s all about saving time on the AP exam, so training them now will help them be comfortable using their calculator later during all of the inference procedures.)
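Off the calculator, the same critical-value lookup can be sketched in a few lines of Python (a hedged example: scipy is assumed, and the 95% level and df = 14 are chosen just for illustration):

```python
from scipy import stats

# Critical t* for a 95% confidence interval with n = 15 (so df = 14);
# this matches the Nspire's invT(0.975, 14).
t_star = stats.t.ppf(0.975, df=14)
print(round(t_star, 3))  # 2.145
```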
Today, we did our first write-up of a t-interval for means, emphasizing the conditions that need to be met. I am so glad that I “over-emphasize” these when they are learning about sampling distributions. Of course, we write one up together so they understand what needs to be shown and stated in the inference procedure.
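For anyone who wants to verify a write-up by hand, here is a rough sketch of the arithmetic behind a one-sample t-interval (the data are invented for illustration, and scipy is assumed):

```python
import math
from scipy import stats

# Hypothetical sample of 8 measurements (not real class data)
data = [98.2, 99.1, 97.8, 98.6, 99.4, 98.0, 98.9, 97.5]
n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))  # sample SD

t_star = stats.t.ppf(0.975, df=n - 1)   # 95% critical value, df = n - 1
margin = t_star * s / math.sqrt(n)      # t* times s / sqrt(n)
lo, hi = xbar - margin, xbar + margin
print(f"x-bar = {xbar:.3f}, interval = ({lo:.3f}, {hi:.3f})")
```

The code mirrors the structure of the write-up: statistic, critical value, margin of error, interval.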
As I’m writing this post, an idea popped into my head: a procedure wall! Just like the word walls used in algebra and geometry classes, but with the proper names of the procedures. I’ll need a way to cover it when students are taking an assessment. I can use sentence strips, and maybe color-code for proportions vs. means vs. others. Will have to ruminate on this a little more to flesh out the details…look for a future post on this!
So, yesterday, most of the groups got the statements sorted. Today they will create a poster of the incorrect statements and write underneath each one what the error is. I found a great explanation by Bock about the types of interpretation errors, with examples and explanations, and my students used this document to help guide them. They eventually were able not only to identify correct and incorrect statements, but also to articulate what made the incorrect interpretations wrong. YAY!
I’m a sucker for all things Valentine and hearts. So why wouldn’t I have an AP Stats activity using Sweetheart conversation hearts? This year, my students indulge me better than most, at least feigning enthusiasm for these corny activities.
This activity was all about the proportion of pink hearts. The big question was, “What is the true proportion of pink Sweetheart conversation hearts?” They had to count the number of pink hearts and then determine a sample proportion for their box…we had to discuss our willingness to accept each box as a random sample from the population of all Sweetheart conversation hearts.
From there, they began their introduction to inference methods via the confidence interval. Using their sample value, they explored the possible shape of the sampling distribution, how it related to their previous understanding of sampling distributions, what conditions might need to be in place, etc.
The next step is to work on interpreting the confidence interval as well as the confidence level…their first foray into technical writing!
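For teachers who want a quick numeric check of the hearts activity, the one-proportion interval can be sketched in Python (the counts below are invented, and z* = 1.96 assumes 95% confidence):

```python
import math

# Hypothetical box: 35 hearts, 8 of them pink (counts are illustrative)
n, pink = 35, 8
p_hat = pink / n
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of p-hat
margin = 1.96 * se                        # z* = 1.96 for 95% confidence
print(f"p-hat = {p_hat:.3f}, "
      f"interval = ({p_hat - margin:.3f}, {p_hat + margin:.3f})")
```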
We had lots of Hershey Kisses left over from Tolo, so I decided to re-write the intro activity to confidence intervals to revolve around tossing Hershey Kisses. It is based on an activity I found on the internet by Lisa Brock and Carol Sikes, who found the activity on the website for Aaron Rendahl’s STAT 4102 activities at University of Minnesota. I revised it some more…don’t we all tweak things to make it our own?
Today we begin the journey to Inferential Statistics in my AP Statistics class. And I use the development of the sampling distribution of a statistic as a means to set up good habits as well as develop the conceptual understanding of why we need to check conditions for the elements of the sampling distribution: Center, Spread, and Shape.
I found that students really struggled with all of the apparently different conditions for the various inference methods we study in second semester, so I wanted to streamline the process of checking them. After comparing the various assumptions and conditions, I realized that the sampling distributions used in the inference procedures for proportions and for means both boil down to two things: a random element in the data collection method (a simple random sample or a randomized experiment), and sample size, where one needs a sample large enough to justify the shape and small enough relative to the population to justify the standard deviation formula. In addition, I needed to help my students understand the difference between assumptions and conditions.
Here is a summary of what I found:
Independence Assumption: The sampled values must be independent of each other
The Sample Size Assumption: The sample size, n, must be large enough
Assumptions are hard—often impossible—to check. Still, we need to check whether the assumptions are reasonable by checking conditions that provide information about the assumptions. The corresponding conditions to check before using the Normal to model the distribution of sample proportions or means are the Randomization Condition, 10% Condition and the Success-Failure Condition/Large Enough Sample Size Condition.
Conditions you can check:
Randomization Condition: The data must be representative of the population. (That is, it must come from a randomized experiment, or from a simple random sample of the population; the sampling method must be unbiased.)
10% Condition: The sample size, n, must be no larger than 10% of the population.
Success/Failure Condition: The sample size must be large enough that we can expect at least 10 “successes” and 10 “failures.” That is, np ≥ 10 and nq ≥ 10. OR Large Enough Sample Condition: If the population is unimodal and symmetric, any size sample is sufficient. Otherwise, a larger sample is needed.
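The two countable checks above can be sketched in code (the function name is my own; the Randomization condition is a judgment about design, not arithmetic, so it is not checkable this way):

```python
def size_conditions_ok(n, successes, population_size):
    """Check the two countable conditions for a one-proportion procedure:
    the 10% condition (independence) and the success/failure condition."""
    ten_percent = n <= 0.10 * population_size
    success_failure = successes >= 10 and (n - successes) >= 10
    return ten_percent, success_failure

# A sample of 35 with 8 successes: the 10% condition holds, but with
# fewer than 10 successes the Normal model is shaky.
print(size_conditions_ok(35, 8, 100_000))  # (True, False)
```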
I also wanted some kind of cognitive framework to fit these ideas into. How could I combine the ideas into chunks that capture the essence of the assumptions and conditions? Well, Random made its own sense, Independence and the 10% condition were intertwined, and the approximately Normal shape was tied to having a large enough sample, whether via counts of successes and failures or simply “larger is better.”
Acronyms are great memory devices, especially when first learning about something new and complex. An acronym is a simple organizational tool that reminds the user of the complex ideas behind it. My next thought was: what kind of acronym could I come up with for these? RIN (for Random, Independence, and Normal)…but there was no real hook or interesting connection with RIN. How about SIN, where S stood for random Sample? That was somewhat suspect because not all inference is about sampling, so I stretched it a little to mean SRS or random assignment (they both have that S sound). And I use the catchy phrase, “It is a SIN to not check the conditions.” I have found that my kiddos don’t forget to check them…phewww!
Here is an example of how we talk through these ever important assumptions and conditions. As the year progresses, we fine-tune and focus in on the important distinctions, but the acronym SIN and these three words give us a simple framework to talk about the distinctions. To set the groundwork for next semester, I have students recognize the assumptions/conditions AND what the condition guarantees.
Today I saw my students really begin to collect data in a variety of ways: from the internet, from surveys, and from their own hands-on data collection. Their directions for today were:
Complete data displays
Complete data collection
Continue to work on inference analysis
Continue to work on designing Infographic
I am so thankful for the Piktochart blog support! Once again, I provided my kiddos resources but didn’t actually tell them how to do things. What is so great about this project so far is how empowered my students seem to be. The quality of their efforts won’t be visible until next week, but I am confident that they are taking the project seriously!
One of the nice things about this project is that it falls right into our district’s 21st Century vision for Facilitated Learning: Collaborative and Independent Application. In particular, I see this project doing the following:
- Collaborative Learning
- Students demonstrate analysis, evaluation and synthesis: stating a main question and devising supporting questions, then using data collection and analysis to answer the questions, and finally using an infographic to synthesize their results into an understandable whole that answers their question.
- Students work collaboratively: YES…you can see it in the photos and hear it in the classroom. So much good discussion, analysis and revision goin’ on!
- Students perform non-routine tasks such as interpreting, evaluating and creating: isn’t that exactly what creating a quality infographic does?
- Independent Application
- Self, peer and others evaluate the student’s work with public audiences and authentic assessments: although this isn’t met very well this year (due to the crunch of time to get the project organized for the first time), I hope to have the students help devise the identifiers of the components in the rubric as well as actually post their infographic and have their peers evaluate, perhaps using a google doc (see what was done by Joel Evans and Bob Lochel).
- Students take responsibility for their own learning: absolutely! I provided some links to support possible questions students might have, but they have done a wonderful job of taking initiative and answering their own questions about the technology, and finding the data.
- Students independently practice advanced skills: since part of the project is to create graphics and do inference analysis, my young statisticians have had to do a lot of graph making followed with exploratory data analysis, write survey questions that aren’t biased and conduct a variety of inference procedures until they decided which one they wanted to use on their infographic.
- Students perform extended blocks of authentic and multidisciplinary work: at least the authentic work over a long period of independent time is met. How might the multidisciplinary component be woven into this project? Should it be? What benefits and possible pitfalls are there to having students choose the connection to another discipline?
All in all, I’m happy so far with this first attempt at the infographic idea. What fresh, new idea have you tried recently and what was your inspiration?
While at the Texas Instruments International Conference in March, I attended a workshop hosted by Robert Hanchett using Fatal Vision® Impairment Goggles in an experiment testing students’ motor skills while wearing the goggles, which simulate the physical effects of alcohol intoxication. The goggles give students the experience of impairment without actually imbibing alcohol.
I so wanted to add this activity to my AP Statistics curriculum, but it was so expensive. I was considering writing a grant to our school’s foundation when I got word that the local police department had just purchased the whole Fatal Vision Goggles kit and was looking for a way to introduce the experience in the schools. Whoo-hoo!! I immediately contacted Officer Munoz and set up a time to coordinate. He was just as excited to bring these materials into the high school so quickly.
Prior to this experience, Officer Munoz talked about his experiences with drunk driving and the procedures policemen must follow when administering a field sobriety test. He shared the actual form used during the test. He also demonstrated the full process for two randomly selected students. My students were very engaged and focused on the introduction – so fun to have an activity that is so relevant for my students!
For the lab, my AP Statistics students will walk a 10-foot line both with and without the goggles, in an order randomized by a coin toss. Before the lab, each student will make a hypothesis about how many times they will deviate from the line under each condition. While each student walks the line under both conditions, their partner will silently count how many times the subject strays from the line. After the data are collected, the students will conduct t-tests to evaluate their hypotheses.
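Since each student walks the line under both conditions, the natural analysis is a matched-pairs t-test. A sketch with made-up deviation counts (scipy assumed, data purely illustrative):

```python
from scipy import stats

# Hypothetical deviation counts for 8 students, with and without goggles
with_goggles    = [7, 5, 9, 6, 8, 7, 10, 6]
without_goggles = [2, 1, 3, 2, 2, 1, 4, 2]

# Matched pairs: each student serves as their own control, so we test
# the mean of the paired differences against zero.
t_stat, p_value = stats.ttest_rel(with_goggles, without_goggles)
print(round(t_stat, 2), p_value < 0.05)
```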
After the activity, students were very enthusiastic about the experience and highly recommend it for next year. Officer Munoz and I hope to do the same activity during my AP Statistics class, but follow it up with a lunch time experience for the rest of the school. I so love the opportunity to bring the community into my mathematics classroom!!
How do you weave in community experiences into your classroom? I’m always looking for new and easily implemented ways to welcome our community into the schoolhouse, so I’d love to hear from you.
One of the main staples of an AP Statistics class is prepping for the AP Exam, and one of the most challenging things for students is choosing the right inference procedure for a situation. So Larry Green’s Categorizing Stats Problems Applet has been used extensively. This link plays well with iPads, too!
Every year, my students comment very positively about their experiences with the applet.
Here is an example of the type of problem and the choices students can make. When they choose correctly, they get a smiley face. If they choose incorrectly, they get hints to help them narrow their next choice. I advise my kiddos that initially they will be humbled, but as they continue through the problems, they will get better. And the nice thing is, they do!
I was so excited to finally incorporate Gapminder into my AP Statistics class today! In December, I had the privilege to observe one of my social studies colleagues as she used Gapminder with her seniors to explore social trends. The students then used Gapminder to explore their own hypotheses around a social trend. Afterwards, we talked about how she planned and prepped her students to use the site independently.
Fast forward to now. I noticed that one of the alternative examples for our textbook referenced variables in Gapminder, and I said to myself, “Self, it’s time to dip our toe into using Gapminder in class.” Here is the problem:
What does a country’s income per person (measured in gross domestic product per person, adjusted for purchasing power) say about the under-5 child mortality rate (per 1000 live births) in that country? Here are the data for a random sample of 14 countries in 2009 (data from www.gapminder.org).
Prior to class, I played with the site, selecting the 14 countries used in the example. Then I figured out how to get it to 2009 (obvious, to some: the timeline at the bottom). I also found out how to “hide” the other countries using the opacity toggle so we could see just the sample of fourteen. We looked at the entire population of data and noticed that the relationship was definitely NOT linear. We then looked at the selected 14 countries, but the graphic seemed cluttered with the names, and it was hard to see that the relationship was still curved.
So I created an Nspire document with the data included so we could look at the data without the labels. This also gave the opportunity to try the reciprocal function idea on the data and see if it straightened (linearized) it.
We went through the usual discussion of the power transformation of data (and some even wanted to know why it worked). And we then compared to what happens in Gapminder when the log scale is used on both axes. Fun to see the connections!
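The linearizing effect of the log-log re-expression can also be shown numerically. Here is a sketch (the income/mortality pairs are invented to mimic the Gapminder pattern, not real data; scipy and numpy assumed):

```python
import numpy as np
from scipy import stats

# Hypothetical (income, under-5 mortality) pairs with a rough
# power-law shape, mimicking the Gapminder pattern (not real data)
income    = np.array([1_000, 2_500, 5_000, 10_000, 20_000, 40_000])
mortality = np.array([120.0, 60.0, 35.0, 18.0, 9.0, 5.0])

raw = stats.linregress(income, mortality)                     # curved: weak r
logged = stats.linregress(np.log(income), np.log(mortality))  # linearized
print(round(raw.rvalue, 3), round(logged.rvalue, 3))
```

The jump in |r| after taking logs of both axes is the numeric twin of what students see when Gapminder switches both axes to a log scale.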
The last part of the discussion was around what the inputs and outputs of the regression equation are, using the question below. Didn’t quite finish, so we will need to use some time on Monday to solidify concepts and fine-tune the process. But all in all, an okay lesson.
I would like the students to actually do the creating of the graphs on Gapminder and determine how to transform, but I have to remind myself that is not the purpose of this class. And with the AP exam in just a few weeks, I have to pick the most impactful experiences. Maybe this could be a great end-of-year project for one of my students. What do you think?
Seniors in our school had their senior “skip” day on Monday. ARGH!! We had just started linear regression inference on Friday, and Monday was discussing the conditions for inference for the population slope. I knew if I simply plowed along, I would plow most of the “skippers” under. So how do I not condone the “skip” (and reteach the lesson as if no one had attended) but also not penalize the skippers academically (while also validating and rewarding those that did not skip)?
Quite the dilemma, but while my students took their Chi-Square test, I was able to put together an Nspire companion document to this problem:
Alternate Example 12.1a: Fresh flowers?
For their second-semester project, two AP Statistics students decided to investigate the effect of sugar on the life of cut flowers. They went to the local grocery store and randomly selected 12 carnations. All the carnations seemed equally healthy when they were selected. When the students got home, they prepared 12 identical vases with exactly the same amount of water in each vase. They put one tablespoon of sugar in 3 vases, two tablespoons of sugar in 3 vases, and three tablespoons of sugar in 3 vases. In the remaining 3 vases, they put no sugar. After the vases were prepared and placed in the same location, the students randomly assigned one flower to each vase and observed how many hours each flower continued to look fresh. Here are the data and computer output….
Here are some of the screens:
Because my students were already logged in to the Navigator, I could send it right away. They didn’t have to enter the data, and by using a split screen, I could re-introduce (reinforce) the conditions and visually show what needed to be done to check them. Those students who were there on Monday found the document to be “awesome,” in their own words. They said it actually helped them solidify their understanding better, and we were able to cover the topic in 10 minutes.
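For anyone replaying the analysis off the Nspire, the slope t-test can be sketched in Python on a hypothetical stand-in for the flower data (these numbers are invented for illustration, not the textbook’s data; scipy assumed):

```python
from scipy import stats

# Hypothetical stand-in for the flower data: tablespoons of sugar
# vs. hours the flower stayed fresh (3 flowers per treatment)
sugar = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
hours = [168, 180, 192, 192, 204, 210, 222, 228, 234, 236, 242, 250]

# linregress reports the slope, its standard error, and the two-sided
# p-value for H0: population slope = 0
fit = stats.linregress(sugar, hours)
t_stat = fit.slope / fit.stderr
print(round(fit.slope, 1), round(t_stat, 2), fit.pvalue < 0.01)
```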
Addendum 4/23: in their opener today, they could articulate the mnemonic, what it stood for AND how to check the 5 conditions!! Whoot-whoot!