
Schools of Thought

Translating education research into usable knowledge

Examination

Well, I'm back. My responsibilities at The Cavalier Daily are officially complete, and now I can focus my full attention on education and this blog. Since I'm marking an ending, it was only fitting that I attended a launch event on Tuesday: the release of the first official report from Education Sector. Both the excellent report and the panel discussion that followed were fascinating, and I want to spend some time talking about each.

The report, authored by Thomas Toch, focuses on the testing industry and its role in implementing NCLB. The whole thing is worth a read, especially the important sections on the dearth of psychometricians and the shrinking profit margins of testing companies, but I was most interested in the section entitled "simple questions." Anyone who has read this blog knows that critical thinking is my paramount concern; I have long held that simple, primarily multiple-choice tests reflect our low learning expectations and allow critical thinking skills to fall by the wayside. Well, it turns out I'm not the only one:

Perhaps the most troubling classroom consequence of the tumult in the testing industry is the strong incentive the problems have created for states and their testing contractors to build tests that measure primarily low-level skills ... NCLB has sought to lift the level of teaching in the nation’s classrooms by requiring states to set challenging standards for what students should know and be able to do. But testing experts say that many of the tests that states are introducing under NCLB contain many questions that require students to merely recall and restate facts rather than do more demanding tasks like applying or evaluating information, largely because it’s easier and cheaper to test the simpler tasks.

...Such tests also give a skewed sense of student achievement. Scores on reading tests that measure mainly literal comprehension are going to be higher than those on tests with a lot of questions that require students to evaluate what they’ve read by, say, reading two passages and identifying themes common to both. The same is true in math. In a study by Lorrie Shepard, a testing expert and the dean of the school of education at the University of Colorado–Boulder, 85 percent of third-graders who had been drilled in computation for a standardized test picked the right answer to 3 x 4, but only 55 percent answered correctly when presented with three rows of four Xs. (emphasis mine)

Despite these flaws in MC items, reliance on them remains widespread: the report cites Education Week's finding that 15 states are administering NCLB reading and math exams this year that contain not a single open-ended question.

Both the report and the panelists -- Toch, Gary Cook (research scientist with the Center for Educational Policy Research at the University of Wisconsin), Sharron Hunt (director of testing for the Georgia Department of Education), and Kelley Carpenter (director of communication for McGraw-Hill) -- suggested that the primary solution is to create multiple-choice items that assess higher cognitive functions.

Certainly there are better and worse MC questions, and certainly a modest use of MC items is a necessary trade-off given the tremendous cost and time of creating and grading open-ended questions (so long as students are also asked, if only for diagnostic purposes, why they arrived at their answers). But I question the underlying logic. A multiple-choice question reduces the universe of possible answers to four, and the options themselves provide an enormous bank of guiding information: a student can work backward from the choices, and even a blind guess succeeds a quarter of the time. Thus, even the best MC questions will fall short when trying to assess true critical thinking skills. It seems to me that the psychometric trend should instead veer toward assessments built around a smaller number of intensive constructed-response items, much like the now-defunct Maryland School Performance Assessment Program.

More and more these days, assessment seems to be driving instruction. If that's the case, then money cannot be allowed to curtail the development of high-quality exams focused on identifying critical thinking skills. Yet, as the report notes, we spend a relative pittance on exams, with the federal government chipping in only $400 million among the 50 states. This is especially grave since tests with many open-ended questions can cost the states upwards of $5 billion over the course of five years to contract, administer, and score, versus only $2 billion over five years for tests that are all MC. But if it costs $3 billion more over five years to avoid the iceberg, then we need to start having a conversation about where to come up with the money.

Tomorrow: The panel discussion on common exams and common standards.