
Schools of Thought

Translating education research into usable knowledge

Assessing rigorously

Following up on the recent EdSector report confirming that many exams are indeed focused solely on low-level skills, I dug deeper into its source material and came across the paper version of a fascinating talk given by Lorrie Shepard, dean of the education school at the University of Colorado-Boulder. The talk, given as part of ETS' William H. Angoff Memorial Lecture Series, focused on what it means to have rigorous assessment that truly tells you what a student knows.

What's perhaps most striking is how much simple multiple-choice items on standardized tests -- items almost all of us would say assess a skill we want our kids to have -- can inflate the appearance of knowledge. Take addition and subtraction, for example.

In one case, the same set of kids were presented with the standard test problem of
764
+67
----

and the alternative item of "Add 764 and 67." The multiple-choice options were identical for both items.

73% got the first problem right; only 67% got the second. Put another way, 6 percentage points' worth of kids taking the standard exam looked like they knew how to add when they actually had no flexible grasp of the concept.

It gets worse when you move to subtraction:

87
-24
----

Versus "Subtract 24 from 87." 83% answered the first correctly, but only 66% the second. This time, the inflation was 17 percentage points!

More broadly speaking, Shepard notes that, "In large school districts selected because of accountability pressure focused on raising test scores, random subsamples of students were administered unfamiliar standardized tests and alternative tests constructed item-by-item to match the district-administered test but using a slightly more open-ended format. Student performance dropped as much as a half standard deviation on the unfamiliar tests..."

We often hear proponents of explicit instruction talk about going "back to the basics." But teaching to the test via drill of low-level skills -- the dominant form of pedagogy in most American public schools -- leaves students without mobility of mind and leaves us with false impressions of how much they have actually learned. It is useless to be able to pick out the correct answer for 2 + 3 if you don't actually understand what you're doing. And if you can neither use nor understand the addition skill in a variety of circumstances and in a variety of ways, how can you ever develop critical thinking skills, which are grounded in extrapolation and connection? Yet all we care about as a system -- all we judge and peg consequences to, positive and negative -- is the ability to pick out the correct answer for 2 + 3.

Assessments need a fundamental evolution if they are to begin doing their jobs and actually start assessing the learning we want to see.
« Home | Next »