Goals:
- Understand the “true experiment”
- Examine tools for random assignment to group
- Examine tools for determining if you have enough participants to do a valid statistical comparison
- Do a t-test comparing a Tech-using Class to a Comparison Class
- Consider how you might approximate this design in your classroom with a technology intervention
Activity Guide:
What might the “true experiment” have looked like for your Fall practicum project? Let’s try some things out.
- Consider how you might have 2 identical groups of learners that are the same in every way, except for your “treatment” (independent variable): an innovative technology-based learning experience. What would it take to have 2 groups (classes) of students with the same genetic code, the same childhood experiences, the same parents, the same instruction (up until this time), who have eaten the same food and seen all the same things from exactly the same perspective? Now that would be a true control group. Of course nutrition, genetics, and parenting are all known to affect learning, so they must be controlled or they will confound your results. Is this ever going to be possible? If not, the only alternative is to randomly select students to test and then randomly assign students to groups, and in so doing let chance “control” which learner variables end up in which experimental group. [so go to step 2]
- Use the Random assignment page from TopHap Chapter 9 to divide your students into 2 randomly assigned groups: Experimental and Comparison.
- Give each of your students a Research number (starting with 1) associated with their name.
- On the Randomizer.org page, request 2 sets (Experimental and Comparison) of random assignments.
- Create a list of the Experimental group student names and the Comparison group student names. (A short code sketch of this kind of random split follows this step.)
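If you are comfortable with a little Python, here is a minimal sketch of the same idea: let chance decide which students land in which group. This is not a substitute for the Randomizer.org steps above, and the roster names are hypothetical placeholders.

```python
# Minimal sketch of random assignment to two groups (not the Randomizer.org tool).
# The roster names are placeholders; substitute your own students' Research numbers or names.
import random

roster = ["Student 1", "Student 2", "Student 3", "Student 4",
          "Student 5", "Student 6", "Student 7", "Student 8"]

random.shuffle(roster)              # chance decides the ordering
half = len(roster) // 2
experimental = roster[:half]        # first half of the shuffled list -> Experimental group
comparison = roster[half:]          # remaining students -> Comparison group

print("Experimental:", experimental)
print("Comparison:  ", comparison)
```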
- Find out how many students you would really need to test for significant differences, using a free Power Analysis tool
- Get G*Power for Windows (e.g., from CNET) or for Mac (I am unaware of a cloud/Chromebook option).
- Open the program and set it up for:
- Test Family: t-test
- Statistical Test: Means: Difference between two independent means (2 groups)
- Type of power analysis: A priori: Compute required sample size
- Input parameters:
- Try it with the default values, including an effect size (d) of .50.
- Then try it assuming you might have a larger effect, by changing the effect size (d) to .75.
- Now that you have done a power analysis, you should see that the larger the effect you expect the technology to have on student learning, skills, or attitudes, the fewer students you need in each group. You might also add this to your understanding: the less control a researcher has (for example, over the number of students), the more complete the design and statistics must become to “make up for” the lack of experimental control. (A code sketch that reproduces this power analysis follows this step.)
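If you want to double-check G*Power, or cannot install it, the Python package statsmodels can run the same kind of a priori power calculation for an independent-samples t-test. Note that statsmodels is not G*Power, and the α = .05 and power = .80 values below are common conventions I have assumed, not necessarily the values in your G*Power window; adjust them to match whatever you entered.

```python
# Sketch of an a priori power analysis for the difference between two independent means,
# using statsmodels. Assumptions: alpha = .05, power = .80, two-tailed, equal group sizes;
# change these to match the parameters you set in G*Power.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for d in (0.50, 0.75):                      # medium vs. larger expected effect size
    n_per_group = analysis.solve_power(effect_size=d,
                                       alpha=0.05,
                                       power=0.80,
                                       ratio=1.0,
                                       alternative='two-sided')
    print(f"Effect size d = {d}: about {math.ceil(n_per_group)} students per group")
```

Running it with both effect sizes should show the same pattern as G*Power: the bigger the effect you expect, the fewer students you need per group.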
- Now you are ready to answer some questions about your Fall Practicum research.
- Did you have enough students in each group to be able to detect a difference, or might the effect of the technology have been too small to reliably “see” with just the students you had available?
- If you did not randomly assign students to the experimental and comparison groups last Fall, what confounding variables might have influenced the study?
- What might a “true experiment” of your Fall technology practicum actually look like? See if you could briefly outline what you would need to do to make it approach a true experiment.
- Now let’s do a t-test comparing 2 classes, a Class using new Tech vs a Comparison class, on a unit test.
- Here are Excel (xls) gradebooks from 2 classes, a Wise Tech Class and a Comparison Class. In these gradebooks you will see fake student names, homework scores, and their scores on the first Unit test (100 points possible).
- If you don’t have or don’t want to use Excel, here are the Unit test scores for each class:
- Wise Tech Class: 92 98 97 87 59 72 91 99 88 94 94 87 94 89 95 92 94 93 98 97 92 88 97 88 99
- Comparison Class: 83 91 85 88 75 81 87 95 80 90 91 77 94 78 93 92 94 90 90 92 92 85 92 75 95
- Some things to notice about these Unit test scores.
- Notice that the average grade (mean) for the Tech class, 91.0 (A-), is higher than the average grade for the Comparison class, 87.4 (B+). (You can verify these means with the short code sketch after this list.)
- Notice that in the Tech class, 1 student, little Jon, bombed the test, scoring a 59 out of 100. In the list above it’s the 5th score in the Wise Tech class. Boo, little Jonnie! Did he screw up our experiment?
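If you’d like to check those class means yourself, here is a minimal Python sketch using only the standard library; the scores are copied from the lists above.

```python
# Quick check of the class means reported above, using Python's standard library.
from statistics import mean

wise_tech = [92, 98, 97, 87, 59, 72, 91, 99, 88, 94, 94, 87, 94,
             89, 95, 92, 94, 93, 98, 97, 92, 88, 97, 88, 99]
comparison = [83, 91, 85, 88, 75, 81, 87, 95, 80, 90, 91, 77, 94,
              78, 93, 92, 94, 90, 90, 92, 92, 85, 92, 75, 95]

print(f"Wise Tech mean:  {mean(wise_tech):.1f}")    # about 91.0
print(f"Comparison mean: {mean(comparison):.1f}")   # about 87.4
```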
- Let’s put these scores from the 2 classes into a free online t-test calculator and see if the 2 classes performed statistically differently on the Unit test.
- If you have the Excel spreadsheet open, you can just copy/paste the Unit test numbers for the 2 classes from Excel into the t-test calculator. (NOTE: it also worked for me to simply copy/paste the scores listed above right out of this HuskyCT page and into the t-test calculator.) When you do either of these copy/pastes, you will only see the last value in the window, but all 25 scores DO paste there; continue and you’ll see.
- Paste the Unit test scores from 1 class into the window labeled “Enter Data for Group 1” and the other into the window labeled “Enter Data for Group 2”. (While it does not really matter which class you put where, we typically make the group we expect to score higher Group 1, so that when we subtract Group 2 from Group 1 the value is positive; so I’d suggest pasting the Wise Tech class into Group 1.)
- Under “1. group description” click the radio button for “groups have equal variance” — this is the usual default option.
- Under “2. number of tails” choose a two-tailed test, so either class could be better than the other (this is a more conservative test of significance).
- Under “3. Significance level” click the radio button for 0.05 (the value used by most psychological and educational research).
- Under “4. choose a test” click the radio button next to unpaired t test
- Click the blue “Find t and p values” button at the bottom.
- Read the results to determine whether there was a statistically significant difference between the 2 classes. (A Python sketch that reproduces this test follows this step.)
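If you want to double-check the online calculator’s output, or the site is unavailable, here is a minimal sketch of the same unpaired, equal-variance, two-tailed t-test using SciPy. SciPy is not the calculator described above; it is simply one way to reproduce the same test on the same scores.

```python
# Sketch of the same unpaired, equal-variance, two-tailed t-test, using SciPy
# and the Unit test scores listed above (significance level alpha = .05).
from scipy import stats

wise_tech = [92, 98, 97, 87, 59, 72, 91, 99, 88, 94, 94, 87, 94,
             89, 95, 92, 94, 93, 98, 97, 92, 88, 97, 88, 99]
comparison = [83, 91, 85, 88, 75, 81, 87, 95, 80, 90, 91, 77, 94,
              78, 93, 92, 94, 90, 90, 92, 92, 85, 92, 75, 95]

t_stat, p_value = stats.ttest_ind(wise_tech, comparison, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant difference at the .05 level.")
else:
    print("No statistically significant difference at the .05 level.")
```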
- If you are feeling interested, go back and change little Jon’s score (from the Wise Tech class) from the frightful 59 to a reasonable 88 and rerun the t-test. You can do this simply by scrolling to the bottom of the results you got, editing that score, and clicking the blue “Find t and p values” button again. Did it change the results? Can the performance of 1 student change the entire result of an experiment? (A code sketch of this rerun follows below.)
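Here is the same SciPy sketch rerun with little Jon’s 59 swapped for a hypothetical 88 (the 5th score in the Wise Tech list), so you can compare the t and p values with the run above.

```python
# Rerun of the t-test with little Jon's 59 replaced by a hypothetical 88
# (5th score in the Wise Tech list); all other scores are unchanged.
from scipy import stats

wise_tech_jon_at_88 = [92, 98, 97, 87, 88, 72, 91, 99, 88, 94, 94, 87, 94,
                       89, 95, 92, 94, 93, 98, 97, 92, 88, 97, 88, 99]
comparison = [83, 91, 85, 88, 75, 81, 87, 95, 80, 90, 91, 77, 94,
              78, 93, 92, 94, 90, 90, 92, 92, 85, 92, 75, 95]

t_stat, p_value = stats.ttest_ind(wise_tech_jon_at_88, comparison, equal_var=True)
print(f"With Jon at 88: t = {t_stat:.3f}, p = {p_value:.4f}")
```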
- Discuss in the HuskyCT forum for this Module.