Survey Research: Tools and How-to
UConn provides us with Qualtrics to design, collect, and analyze survey-style data in a variety of formats. EPSY also offers a 3-credit course, EPSY 5621: Construction of Evaluation Instruments. It takes both the tools and the knowledge of how to use them to create and analyze surveys effectively.
With your NetID and password you can log in to UConn's Qualtrics to see what options are available and how it works (this is a 37-minute YouTube tutorial intro for beginners). In K-12 it may be more common to use survey tools like Socrative, Kahoot!, and Poll Everywhere to produce and use quick surveys for students. For parent surveys, SurveyMonkey (requires a paid account) and Google Forms (free, but analytics are collected) may be more commonly used. These tools vary in the degree to which they let you create "skip patterns": links among items within the survey, so that which items appear depends on answers to other items. The whole business can get pretty complicated.
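Under the hood, a "skip pattern" is just a branching rule: the answer to one item decides which item comes next. Here is a minimal sketch in Python; the item names and branching rules are invented for illustration and not tied to any particular survey tool:

```python
# Minimal sketch of survey "skip logic": the next item shown depends
# on the answer to the current item. Item names and rules are made up.

def next_item(current_item, answer):
    """Return the next survey item, given the current item and its answer."""
    skip_rules = {
        # (item, answer) -> next item; None is the item's default branch
        ("uses_google_classroom", "no"): "end_of_survey",   # skip follow-ups
        ("uses_google_classroom", "yes"): "hours_per_week",
        ("hours_per_week", None): "satisfaction",
    }
    # Try an answer-specific rule first, then the item's default branch
    return skip_rules.get((current_item, answer),
                          skip_rules.get((current_item, None), "end_of_survey"))

print(next_item("uses_google_classroom", "no"))   # -> end_of_survey
print(next_item("uses_google_classroom", "yes"))  # -> hours_per_week
```

Real tools hide this logic behind a point-and-click interface, but keeping the mental model of "a table of (item, answer) → next item rules" helps when a survey's branching gets complicated.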
Hey, let's just try one! Here's some "research" on the impact of Google Classroom in education.
Please read Heggart and Yoo (2018). This is self-described research on a framework for getting the most out of Google Classroom. Being able to read and analyze published "research" like this is a big reason you are in this course. So let's try it together:
- Is the information provided scholarly, reliable, valid and convincing?
- Does the research show that Google Classroom causes better learning?
- Do you feel the authors, who were also the instructors and whose notes provide the data, are unbiased?
- Do you feel students, under these circumstances, are fully qualified to report the educational value of a tool/app like Google Classroom?
- What might you add to this study now that you have read a bit more about educational research methods?
Survey Research: Should we trust it?
Research on memory tells us that our recall of events is sketchy at best and reconstructed at worst. Consider a very old study by Maier (1931). Subjects were brought into a room in which various objects were lying around and two ropes hung from the ceiling. Their task was to tie the ropes together. Because of the length of the ropes and the distance between them, it was not possible to just take one, walk to the other, and tie them together. After explaining the task, the experimenter gave two hints: he 'accidentally' touched one rope, which made it swing back and forth a bit, and, if a subject still failed to solve it after some time, the experimenter gave him a pair of scissors and told him: 'With the aid of this and no other object you can solve the problem.' Subjects found the solution (tie the scissors to the rope, swing it, walk to the other rope, pull it over, and grab the swinging rope) much quicker after the hint than when the experimenter had given no hint.
Afterwards, subjects were asked if they had solved the problem 'as a whole,' without intermediate steps leading to the solution, or if they had gradually solved it in steps. They were also asked if they had noticed that the experimenter touched the rope and whether that had any influence on their reasoning. Maier found that subjects who discovered the solution 'as a whole' reported that they had not noticed the cue, or that they had not used it. The explanation is that, from the subjects' perspective, it must have seemed highly unlikely that the touch had any effect, considering the time they spent solving the problem (between several minutes and half an hour) and the sudden appearance of the solution. Yet, from the difference in solution time, one can conclude that the cue did have an effect.
Drawing directly from West (2014), one obvious limitation of questionnaires is that they are subject to faking, and therefore, to social desirability bias. When considering whether an item such as “I am a hard worker” should be marked “very much like me,” a child (or her teacher or parent) may be inclined to choose a higher rating in order to appear more attractive to herself or to others. To the extent that social desirability bias is uniform within a group under study, it will inflate individual responses but not alter their rank order. If some individuals respond more to social pressure than others, however, their placement within the overall distribution of responses could change.
Another obvious flaw is that survey forms presume we all have exactly the same idea of the context given in the prompt statements in mind when we respond. But usually we don't, because it would be very rare for us to have exactly the same experiences with the ideas. This "reference bias," which occurs when survey responses are influenced by differing standards of comparison, tends to make survey results uninterpretable. The child deciding whether she is a "hard worker" must conjure up a mental image of hard work to which she can compare her own habits. A child with high standards might consider a hard worker to be someone who does all of her homework well before bedtime and, in addition, organizes and reviews all of her notes from the day's classes. Another child might consider a hard worker to be someone who brings home her assignments and attempts to complete them, even if most of them remain unfinished the next morning.
Surveys: Not Perfect, but Easy to Do for Classroom Teachers
Student and parent surveys are often the sole data source for information on critical school issues, including climate and the effectiveness of teaching and learning (our version of customer service). Chapter 8 in your TopHat text gives some important details and cautions about such survey research. I would draw your attention to the section (8.7.1) about "Response Rate" and how nonresponse bias is always a concern. Chapter 8 also offers a few bits of advice about constructing a good survey, such as avoiding compound (double-barreled) statements and negatively worded (potentially confusing) statements. But there is quite a bit more to it.
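The response rate itself is simple arithmetic, but it is worth computing explicitly, because a low rate is the warning sign for nonresponse bias. A minimal sketch (the numbers below are invented for illustration):

```python
# Response rate = completed responses / surveys distributed.
# The counts here are hypothetical, for illustration only.
distributed = 400   # surveys sent home to parents
completed = 92      # surveys returned

response_rate = completed / distributed
print(f"Response rate: {response_rate:.1%}")  # prints "Response rate: 23.0%"

# A rate this low means the 92 respondents may differ systematically
# from the 308 nonrespondents (nonresponse bias): for example, the
# parents unhappy enough to bother responding may be overrepresented.
```

The arithmetic is trivial; the judgment call is whether the people who answered resemble the people who did not.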
First, don't ask unnecessary things (like age or grade) unless you have a hypothesis that involves those data, because such items can activate respondents' defenses (am I too old? is this going to affect my grade?). And keep any such personal demographic items for last, so that if respondents quit early, they are more likely to have already seen the critical items. It is also nicer and more polite not to ask someone's age as the very first thing. Would you walk up to someone's door and, when they answer, make your first question "How old are you?" You get the point. Our friends at SurveyMonkey have some decent words of advice. Note that the order in which you list options can influence responses as well, so try to balance or randomize that as a factor. Pilot testing is particularly good at identifying unclear wording and confirming that respondents have in mind what you expect when they answer. I think we have discussed the "social desirability effect," where people want to look good, so they may give the most socially acceptable answer in lieu of the truth. And going back to the Ethics module: be sure to tell respondents WHY you need this information, get their consent to use it for those purposes, and thank them for providing it.
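Randomizing option order, as suggested above, is something most survey tools can do for you, but the idea fits in a few lines. A sketch with hypothetical answer options:

```python
import random

# Hypothetical (nominal) answer options for one survey item. Shuffling
# the order per respondent spreads any order effect across the sample.
options = ["Homework load", "School safety", "Communication", "Facilities"]

def presented_order(options, seed=None):
    """Return a shuffled copy of the options for one respondent."""
    rng = random.Random(seed)      # per-respondent random generator
    shuffled = options[:]          # copy, so the master list stays fixed
    rng.shuffle(shuffled)
    return shuffled

# Each respondent (each seed) gets their own ordering
print(presented_order(options, seed=1))
print(presented_order(options, seed=2))
```

Note that this applies to nominal options like the list above; an ordered response scale (strongly agree through strongly disagree) is normally kept in its natural order rather than shuffled.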
Other Readings:
National Institutes of Health Guide to Best Practices and Considerations for Survey Research
Essential Understanding:
This is not a full course on instrument design, nor a working guide to conducting survey research. Instead, your take-away should be just how much attention to detail matters in survey research. Who responds (and who doesn't), the order of questions and answers, and many other factors can influence the data you get from surveys. It matters what people actually bring to mind when they read your questions, not what you THINK they should have in mind. The nature of human memory matters too: it is not like computer memory, and it often builds or reconstructs events in response to the question of the moment, incorporating information from the way you ask the question into the memory of the prior experience.
References
West, M. R. (2014). The limitations of self-report measures of non-cognitive skills. Washington, DC: Brookings Institution.
Maier, N. R. F. (1931). Reasoning in humans: II. The solution of a problem and its appearance in consciousness. Journal of Comparative Psychology, 12, 181–194.
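Heggart, K. R., & Yoo, J. (2018). Getting the most from Google Classroom: A pedagogical framework for tertiary educators. Australian Journal of Teacher Education, 43(3), 140–153.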