There are plenty of reasons to believe that multiple-choice and true-false tests are among the worst ways to measure whether learning is successful; in the best of circumstances, they tend to measure only the lowest levels of learning achieved, and in the worst of situations they leave respondents without any acceptable responses from which to choose. Yet we seem to remain tone deaf to concerns and criticisms voiced by educators for decades and continue to rely on them.

[Image: from Skley’s Flickr photostream at http://tinyurl.com/machqej]
The multiple-choice/true-false approach is pervasive in much of what we encounter day to day, within and outside the world of learning. Surveys often incorporate these methods to make tabulating the results easy; after all, it can easily be argued, we can’t afford more personalized, labor-intensive methods of collecting data (e.g., reading short- or essay-length responses), although there are signs that mechanizing the testing and grading process in ever more sophisticated ways is increasingly possible.
But relying on mechanical methods so obviously produces mechanical results that I believe we need to question our own assumptions, be honest about what those assumptions are producing, and seek better solutions than we have so far managed to find.
A couple of recent experiences helped me understand the frustrations and weaknesses of multiple-choice/true-false testing at a visceral level: taking two very good and very different massive open online courses (MOOCs), and responding to a survey that relied solely on multiple-choice questions which, because they were poorly written, offered no appropriate responses for some of us who would otherwise have been very willing to participate in the project the survey supported.
Let’s look at the MOOCs first: #etmooc (the Educational Technology & Media MOOC developed and facilitated by Alec Couros and his wonderful gang of “conspirators” earlier this year) and R. David Lankes’s “New Librarianship Master Class” (a MOOC developed and delivered under the auspices of the Syracuse University School of Information Studies). #etmooc was a connectivist MOOC: learning, the production of learning objects (including blog posts and videos demonstrating the skills learners were acquiring), and online connections between learners were central elements, and testing was nonexistent. It nurtured a community of learning that continues to exist among some of the participants months after the course formally ended. The New Librarianship Master Class, grounded in the more traditional academic model of documenting specific learning goals, relied more heavily on standardized testing to document learning results and offered less evidence that learners were using what they were learning.
The frustrations some of us voiced after struggling with standardized questions that didn’t really reflect what we had learned, or what we would be capable of doing with that newly gained knowledge, resurfaced for me last night as I was trying to complete an online research survey. Facing a series of multiple-choice questions, I quickly realized that whoever prepared the response options had underestimated the range of experiences in the survey audience and was, in fact, tone deaf to the nuances of the situation. Opting for results that could be scored mechanically rather than requiring any sort of human engagement in the tabulation process, the writer(s) forgot to include an “other” option to catch those of us whose experiences didn’t fit any of the choices offered. That, of course, meant that at least a few of us who might otherwise have been willing to provide useful information abandoned the opportunity and turned to more rewarding endeavors. (And no, there wasn’t an opportunity to simply skip those questions-without-possible-answers so we could stay involved.)
Abandoning a potentially intriguing survey had few repercussions beyond the momentary annoyance of being excluded from an interesting project. Being forced to respond to standardized tests that don’t accurately document a learner’s level of mastery of a subject is obviously more significant: it can affect academic or workplace advancement, and it is something that will have to be addressed in those MOOCs that don’t take the connectivist approach. All of which raises a broader question that need not be answered within the confines of pre-specified options on a multiple-choice test: why are we not advocating for more effective ways and resources to encourage and document learning successes in both onsite and online settings? Expediency need not be an excuse for producing second-rate results.