
Schools of Thought

Translating education research into usable knowledge

Fun with numbers

Wednesday, August 31, 2005
The media has been abuzz today with the news that the Class of 2005's SAT scores were the highest ever -- The New York Times, Christian Science Monitor, Arizona Republic, San Diego Union-Tribune and Charlotte Observer are among the 411 hits on Google News. Here's what the articles aren't telling you:

Number of students in the Class of 2005 taking the SAT: 1,576,000
Number of students in the Class of 2005 taking the ACT: 1,186,251
Approximate graduation rate (proxy for SAT/ACT test-takers) for the Class of 2005: 70%

Approximate number of students in the Class of 2005 not taking the SAT or ACT (i.e., not going to college): 1,183,821
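
For the curious, here's the back-of-the-envelope arithmetic behind that last number -- a minimal sketch in Python, assuming no student takes both tests and using test-takers as a proxy for graduates, as above:

```python
# Back-of-the-envelope sketch of the estimate above. Assumes no overlap
# between SAT and ACT takers and uses test-takers as a proxy for graduates.
sat_takers = 1_576_000
act_takers = 1_186_251
grad_rate = 0.70                       # approximate graduation rate

test_takers = sat_takers + act_takers  # 2,762,251 "graduates"
cohort = test_takers / grad_rate       # implied size of the Class of 2005
non_takers = cohort - test_takers      # the remaining 30% of the cohort

print(f"Implied cohort size: {cohort:,.0f}")        # ~3,946,073
print(f"Not taking SAT or ACT: {non_takers:,.0f}")  # ~1,183,822, matching the figure above up to rounding
```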

The non-collegiate number's undoubtedly higher, since not everyone who graduates takes the SAT or ACT. And, of course, there's a huge gap when you disaggregate the data (the approximate graduation rate for black and Latino students is 50%).

So celebrate, America! The Class of 2005 is scoring a couple points higher on the math section of the SAT -- well, the Class of 2005 minus the million kids being left behind.

"I respectfully suggest that we may be looking at a crisis here"

Monday, August 29, 2005
New York Times columnist Bob Herbert has an excellent op-ed in Monday's paper. He writes, in part:

An education task force established by the [Center for American Progress and Institute for America's Future] noted the following:

"Young low-income and minority children are more likely to start school without having gained important school readiness skills, such as recognizing letters and counting. ... By the fourth grade, low-income students read about three grade levels behind nonpoor students. Across the nation, only 15 percent of low-income fourth graders achieved proficiency in reading in 2003, compared to 41 percent of nonpoor students."

How's that for a disturbing passage? Not only is the picture horribly bleak for low-income and minority kids, but we find that only 41 percent of nonpoor fourth graders can read proficiently.

I respectfully suggest that we may be looking at a crisis here.

But is there a nationwide crisis? I've pointed out the slow growth among the states in elementary school achievement (even with all the trickery -- and the picture is far bleaker in middle and high school) and I've demonstrated the abysmal absolute numbers. You can also talk about America's relatively poor performance on international assessments. But how do you explain rosy headlines like "Young students post solid gains in federal tests," or "School achievement gap is narrowing" that followed the July release of the long-term National Assessment of Educational Progress results? After all, I hold up the NAEP as perhaps the most reliable assessment out there.

The answer lies behind the headlines. The long-term NAEP, which has given equivalent, comparable tests to students over the past thirty years, showed that in reading 9-year-olds gained 11 scale score points (on a scale of 0-500) over their 1971 counterparts, 13-year-olds gained 4 points and 17-year-olds stayed exactly even.

Keep in mind that 1971 was 34 years ago. Nixon was president. The USSR was still going strong. Neil Armstrong had landed on the moon a mere 2 years earlier. 17-year-olds -- 11th graders -- are reading at the same level as they were 34 years ago. The biggest gainers have inched up 11 points.

I respectfully suggest that we may be looking at a crisis here.

But comparisons with 1971 aren't very useful except to induce some gasps. It's better to look at 1994; since then we've seen 9-year-olds gain 8 points, 13-year-olds lose 1 point and 17-year-olds lose 3 points. But what does that mean? It's not exactly clear what the scale scores translate to in terms of number of questions answered correctly, partly because the NAEP tests involve both multiple-choice and constructed-response questions. I'm going to call the NCES tomorrow to request data on the conversion scale. All we can say at this point is that on a scale of 0-500, 10 points isn't enormous. It's certainly less than you'd expect or like to see over the course of many, many years.

A more interesting comparison can be made with regard to performance levels, which the test creators have linked to certain degrees of knowledge. 9-year-olds fall into the following three categories:

LEVEL 250: Interrelate Ideas and Make Generalizations
LEVEL 200: Demonstrate Partially Developed Skills and Understanding
LEVEL 150: Carry Out Simple, Discrete Reading Tasks

In 1994, 92% of 9-year-olds were at level 150 or above, 63% at 200 or above and 17% at 250 or above. In 2004, the percentages were 96, 70 and 20.

So now we come to it, through the absolute analysis: Nearly a third of 9-year-olds, 4th graders, can't demonstrate partially developed reading skills, and the overwhelming majority can't interrelate ideas and make generalizations. Only 61% of current 13-year-olds can do the latter!

By any measure, student achievement in America has crept forward over the past decade. The problem is, the gulf that remains -- after 34 years! -- could dwarf the Grand Canyon. The media can spin it however it likes, but drilling down through the national data leads to the same conclusion as cracking open the state data:

I respectfully suggest that we may be looking at a crisis here.

Can NCLB be saved?

Sunday, August 28, 2005
While it may seem that I spent the last week assaulting the No Child Left Behind Act, I should make it clear that I'm not a vociferous opponent. I think that the goal NCLB represents -- a quality education for every child, with schools held accountable for reaching that standard -- is an admirable one. Really, it's the only one. I can quibble about what defines quality, but that's for another day. The problems I have outlined below in some detail are unquestionably issues of execution, not concept. The natural response that follows, then, is that NCLB needs to be implemented better. But is that possible?

The importance of Connecticut becoming the first state to sue the federal government over NCLB is that it's a clarion demonstration of how resentful states are of federal intervention in education. For the moment, let's set aside the debate over whether Georgia should have the right to teach its kids to do arithmetic differently than Montana does, and focus on the practical aspects of this resistance.

It's not a profound statement to say that states don't like the federal government meddling in their affairs. But when it makes headlines that 45 state governors came together to agree on the monumental and magnanimous principle that there should be a common measure of how many students are graduating from high school, one starts to wonder how much more leeway the federal government has.

Indeed, the salves needed to fix problems of inconsistent standards, crippling sleight-of-hand on test structure and results, funding inequities and the like may both need to come from the feds and be beyond the feds' power to deliver. Just think how the states will howl the second Secretary Spellings hints at a national assessment. Highway funds are all well and good, but the federal government only has so many rounds in its clip. Even the somewhat radical and extremely intriguing proposals of the liberal task force led by Arizona Gov. Napolitano only went so far as to suggest voluntary national standards -- and that was received as a stretch. When you've got Florida plunging ahead with its Sunshine "You Might Be Able to Read if You're Lucky" Standards and Texas having to switch tests from the TAAS to the TAKS because the TAAS was so ridiculously easy everyone was passing it (while only 27% of TX 4th graders were proficient on the NAEP reading in 2003), voluntary national standards seem like the very, very least that should have been in place since about yesterday.

The answer to the person who says we need better execution of NCLB to ensure that its purposes are met is: Yes, but how? These aren't minor problems, and they are going to require major solutions. Considering the political landscape just four years after the passage of NCLB, it can't be reasonably assumed that the federal government has the wherewithal to push through the needed implementation. If anything, the 2007 reauthorization of NCLB will potentially see a reduction of federal influence.

The feds can't do it. The states have made it abundantly clear they won't do it. Who precisely is going to save the already-decaying infrastructure of a noble law which is only inching the nation forward on its best days?

Updates

Weekends are always going to be a bit slow with regard to updates (you know, schoolwork and such). Expect a post by the end of the day.

Question by question

Friday, August 26, 2005
A final thought on the way test scores are reported and evaluated: Not only do states engage in masking the truth, and not only does the media commit a fallacy by reporting only comparative data, but passage on a test is reported as a single static number instead of a look at how many kids are passing each genre of question. In other words, a child can pass a math test on the strength of his division skills while lacking multiplication.
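
A toy illustration of the point, with items and a cut score invented purely for demonstration:

```python
# Toy illustration: a student clears the overall cut score while failing
# an entire strand. Items and the 60% cut are invented for demonstration.
items = {
    "division":       [1, 1, 1, 1, 1],  # 1 = answered correctly
    "multiplication": [0, 0, 1, 0, 0],
}

correct = sum(sum(answers) for answers in items.values())
total = sum(len(answers) for answers in items.values())
verdict = "PASS" if correct / total >= 0.60 else "FAIL"
print(f"Overall: {correct}/{total} -> {verdict}")  # 6/10 -> PASS

for strand, answers in items.items():
    print(f"  {strand}: {sum(answers)}/{len(answers)} correct")
```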

The problem is, the vast majority of states don't release old tests with the accompanying percentage of students who got each question right. Only two that I've found, Washington and Wyoming, do this. The Boston Globe also managed to get its hands on some data for an article today.

So what do Washington and Wyoming demonstrate? Primarily that there is a tremendous amount of variation in the number of students who get different questions correct. We'll look at 4th grade math tests for this exercise, since multiple-choice items are the simplest unit of analysis.

In Washington, for example, 80% of 4th graders correctly answered a question about how they would go about collecting data on their classmates' pets, while only 47% could answer a problem involving whole numbers and fractions.

Wyoming is an even better case study because it has far more released MC items and its test is closely aligned to its state NAEP results. On the 2003 test, the one we're looking at, 37% passed proficient overall.

In Wyoming we see, on the low end, 24% correct on a question involving measurement and arithmetic, 31% on a question with area and arithmetic, 42% on a question about time, 47% on a question about digits and numbers, and, my personal favorite, 30% correctly answering "which of the figures below is not a rectangle?"

On the high end, 62% correctly responded to a question on estimation, 63% on patterns, 66% on graphing, 61% on simple multiplication and 68% on probability. (We'll set aside for a moment that having a third of kids miss questions on the high end is atrocious.)

A 40-point spread does not an accurate average make. Once again, it means that most data points are likely not clustered around the middle (of the 15 released items in Wyoming, precisely one was within 5 points of the state average of 37%). Especially when you take into account the automatic advantage of multiple-choice questions with regard to guessing and having the universe of answers reduced to four, the results suddenly seem a bit more complex. The media needs to start challenging state departments of education to release tests with the percent getting each question correct, and then some serious analysis needs to happen to see what's really there.
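
To make that concrete, here's a quick sketch using the ten Wyoming item percentages quoted above (the other five released items aren't listed in this post):

```python
# Quick sketch using the ten Wyoming item percentages quoted above.
pct_correct = [24, 31, 42, 47, 30,   # low-end items
               62, 63, 66, 61, 68]   # high-end items
state_avg = 37                       # percent passing proficient overall, 2003

spread = max(pct_correct) - min(pct_correct)
near_avg = [p for p in pct_correct if abs(p - state_avg) <= 5]

print(f"Spread across items: {spread} points")                # 44 points on these ten
print(f"Items within 5 points of {state_avg}%: {near_avg}")   # only the 42% item
```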

Here's my question: If states can screw around with the structure of a test to inflate the results, if no one questions the absolute numbers and instead focuses solely on trends, and if results are reported/accountability is determined by averages and scale scores which don't reflect the wild variation inside a test, in what way are they legitimate measures with which to force states to improve their education systems? And, are there avenues available to rectify these problems, or are they crippling execution flaws to an otherwise admirable policy goal? Finally, what are the alternatives? Stay tuned: These latter two questions will occupy many of my subsequent posts.

What test results really mean

Thursday, August 25, 2005
I think it's particularly appropriate to stay on the theme of assessments during the Month of the Test Results. Yesterday, I wrote about the chicanery states can and do use to inflate their test scores; today, I want to talk about the fallacy of how scores are reported in the media.

If you read most any newspaper article following a state's release of its test scores, some combination of the words "more," "less," "better," or "worse" will dominate the headlines. To be fair, making comparisons across time is a useful exercise; it tells people whether schools and districts are heading in the right direction. But when the media -- and certainly the states -- don't dwell on the absolute numbers, the picture becomes distorted. It's nice to say that more people in a town exercise regularly this year compared to last, but if the increase is from 1% to 2%, you might want to reevaluate your methods. Trend data must be melded with absolute numbers.

It's hard to know the authentic absolute numbers for all the reasons I outlined yesterday. But if we look at absolute scores in 2003-2004 or 2004-2005 (depending on whether the state has released '05 yet) on top of comparisons with the baseline 2001-2002 school year, the picture begins to get definition around its edges.

First, consider the states whose test results are most closely aligned to their respective NAEP scores. Of the four states that receive an "A" on their 4th grade tests from Peterson and Hess -- South Carolina, Maine, Wyoming and Massachusetts -- the highest passing-proficient rate was 56% for reading and 42% for math, both coming from Massachusetts (Source: state DoEds). All have made mere single-digit gains over the past several years. So, for the most legitimate cross-section we can find, nearly 6 in 10 4th graders aren't proficient in math, and somewhere between 4 and 5 in 10 aren't proficient in reading. Whether scores inch up or down, that's abysmal. That's scary.

Let's dig deeper into Massachusetts, though. In 2000, it was the 13th most populous state according to the census, and it's not exactly anyone's idea of a backwards boondock. In 2003, Massachusetts had about 75,000 kids take the 4th grade English Language Arts test. That means that a 1% increase in test scores in any given grade represents 750 kids becoming proficient.

In 2001, 51% of Massachusetts 4th graders were proficient -- or, put another way, 36,750 weren't. Last year, it was 56%. That means, NCLB and all, after three full school years an underwhelming 3,750 more kids are learning to read and write now. Nearly 33,000 still can't.
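
The arithmetic, spelled out using the round figures above:

```python
# The Massachusetts arithmetic spelled out, using the round figures above.
test_takers = 75_000        # ~4th graders taking the ELA test each year
pct_2001, pct_2004 = 0.51, 0.56

newly_proficient = (pct_2004 - pct_2001) * test_takers  # 3,750 kids
still_not = (1 - pct_2004) * test_takers                # 33,000 kids

print(f"Newly proficient after three years: {newly_proficient:,.0f}")
print(f"Still not proficient:               {still_not:,.0f}")
```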

These are the stories that are not being told, the stories that must be told. NCLB is undeniably a shot in the arm; 4th/5th grade reading scores are rising across the nation. But those three- or four-year rises are incremental -- 2 points in Colorado, 7 in Delaware, 8 in Georgia, 2 in Illinois, 3 in Indiana, 7 in Louisiana, 1 in Minnesota, 4 in Mississippi, 6 in Pennsylvania, 3 in Wyoming -- and most state NAEP pass rates hover in the 20-50% range. This is progress, but not success. At this rate, every child will be actually proficient sometime around the turn of the next century. If standards and assessment are to do their job, citizens, the media and policymakers alike must understand what the numbers mean and just how deep a crisis they represent.

Five simple rules for fudging test numbers

Wednesday, August 24, 2005
With newspaper headlines blaring such missives as "Nearly all Michigan school districts meet improvement standards," or "Scores slump on most Md. tests," August is the unquestionable Month of the Test Result. But as the majority of states begin to release their state assessment and AYP results, one issue which gets lost in the shuffle is just how easy it is for states to manipulate the numbers.

Under NCLB, states have an enormous amount of discretion when it comes to designing their tests. There are five major avenues for chicanery: content standards, test format, achievement standards, cut scores, and n-sizes. I apologize in advance to those for whom this will be Psychometrics 101, but it's not talked about nearly enough.

Content standards
Simply put, these say “what every kid in our state should know in a given grade.” They are the basis of test questions, and any change to the content standards will fundamentally alter what knowledge the test is assessing.

Test format
This is self-explanatory. Is the test entirely multiple-choice? Half multiple-choice and half constructed response? How many questions are there? Is the test timed? Changing the format changes how kids will perform on it. There is also an important distinction between norm-referenced tests, which tell you how students do relative to one another, and criterion-referenced tests, which tell you how students do against a universal benchmark. These are fundamentally different measures.

Achievement standards
This is the first of two facets that deal with scoring. Achievement standards are the levels of proficiency – e.g. below basic, basic, proficient, advanced. States decide which achievement standard is the target; usually, it is proficient or above. Adding new achievement standards can change the results semantically. Suddenly, kids who were only 10% below proficient are “almost proficient,” and the “basic” group just got 10% smaller.

Cut scores
What score do you need to pass? Most states now employ criterion-referenced tests which assign each student a scale score depending on how many questions they got right. Altering the scale score necessary to be considered proficient dramatically changes the number of students in each category.
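
To see how much leverage a cut score gives a state, consider this toy sketch -- the score distribution here is invented, not real state data:

```python
# Toy sketch of cut-score leverage: one invented score distribution,
# two different proficiency cuts. No real state data here.
import random

random.seed(0)
# 10,000 hypothetical scale scores, roughly bell-shaped on a 0-500 scale
scores = [min(500, max(0, round(random.gauss(240, 40)))) for _ in range(10_000)]

def pct_proficient(scores, cut):
    return 100 * sum(s >= cut for s in scores) / len(scores)

print(f"Cut score 250: {pct_proficient(scores, 250):.0f}% proficient")  # ~40%
print(f"Cut score 230: {pct_proficient(scores, 230):.0f}% proficient")  # ~60%
# Dropping the cut 20 points "raises" proficiency ~20 points overnight,
# with no change whatsoever in what students know.
```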

N-sizes
The n-size is the minimum number of students belonging to a certain group that must be present in a school for that group to be counted. So, if the n-size is 20, a school with 20 black students has to report and be held accountable for the performance of that subgroup -- if the same school has only 17 Hispanic students, it doesn't have to disaggregate for them. This applies not only to racial groups, but also to groupings by socioeconomic status, English language capability and disability. States have nearly complete autonomy to set their n-sizes, and increasingly the n-sizes have been creeping upwards, excluding more and more subgroups that traditionally struggle.
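
Here's a sketch of how the n-size rule plays out in practice; the school's enrollment numbers are invented:

```python
# Sketch of the n-size rule. A subgroup is counted for accountability
# only if at least n_size students belong to it; enrollments invented.
def accountable_subgroups(enrollment, n_size):
    return [group for group, count in enrollment.items() if count >= n_size]

school = {"black": 20, "hispanic": 17, "low-income": 45, "ELL": 12}

print(accountable_subgroups(school, n_size=20))  # ['black', 'low-income']
print(accountable_subgroups(school, n_size=30))  # ['low-income']
# Creeping the n-size up from 20 to 30 quietly drops the black subgroup
# from the school's accountability picture.
```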

The incredible number of permutations wouldn't be so bad if states designed their tests in good faith. But they don't. Paul Peterson and Frederick Hess compared the pass rates on every state test with the state's equivalent pass rate on the NAEP, and found that on average, 36% more kids were passing their state assessments than the rigorous NAEP.

The problem that stems from this isn't so much that some states like Texas continue to report trend data even after changing their cut scores (TX changed cut scores between '02-'03 and '03-'04), but that results are reported with no context. Arizona is an instructive example: this year, Arizona completely reformatted its AIMS test -- new, lower cut scores, sample test items released for the first time, an untimed format, and so on. As a consequence, test scores in Arizona jumped 20 to 30 percent in most subjects.

What's interesting about this is how transparent the charade was. The Arizona department of education said in a press release that the scores weren't comparable historically, and the Arizona Republic wrote an article on how the changes were causing test scores to rise artificially. But the point is this -- from here on out, scores on the AIMS will not reflect any semblance of authentic knowledge!

52% of Arizona 5th graders passed the reading AIMS in 2004, while 71% did in 2005 under the reformatted test. Come next year, when, say, 73% pass, that's the number that will be reported. 73% will be compared to 71%, completely ignoring the fact that both numbers are, in an absolute sense, ludicrously inflated. Thousands more parents will be told that their children are proficient when in reality they are no more proficient than when they weren't proficient last year. In an exceedingly public manner, Arizona has successfully masked the number of its students who can't read or do math well.

Standards and test scores are supposed to highlight problem areas so they can be fixed and hold schools accountable for fixing them. How can that laudable goal possibly function when states have five powerful tools at their disposal with which to whitewash the truth?

Who needs good teachers, anyway?

Tuesday, August 23, 2005
Let me pose a hypothetical: Two schools, one of which serves mainly students from upper-middle-class families who live in safe, nurturing environments, the other of which serves mainly students from lower-class families who live in dangerous, impoverished environments. With limited resources, do you devote more money to the first school or the second?

The advantaged school, of course.

Profs. Marguerite Roza and Paul Hill of the University of Washington and Center on Reinventing Public Education had an op-ed in Monday's Washington Post in which they explain their new report finding that schools serving disadvantaged populations are getting less funding than their upscale counterparts.

Roza and Hill write:
Here is how it works. While the law is clear that districts should spread their state and local funds evenly among all schools before applying the federal dollars, the truth is that they don't. In four of the five large urban districts we studied, noncategorical funds -- those intended for all students -- disproportionately go to schools that have students from wealthier families. In Denver, the school district spends $365 more per student in the more upscale schools than on those who attend the schools serving families with the highest poverty levels. That adds up to a difference of nearly $200,000 for a school enrolling 500 students.

This is primarily due, they continue, to district accounting practices that use the average teacher salary in divvying up each school's budget. Because advantaged schools are hiring better teachers with higher salaries, the disadvantaged schools have funds tied up for salaries they aren't actually paying.

This insanity is hardly isolated to five large urban districts. Nationally in 2002, districts with the least poverty received on average $1,348 more per student than districts with the most poverty. Per student! (Source: Education Trust, The Funding Gap 2004). That's about $40,000 in a classroom of 30 kids, and nearly $540,000 in a school of 400.

Money isn't everything, to be sure -- Washington, D.C., is proof enough of that -- but if not sufficient, it is certainly necessary. Money pays for facilities, books, field trips, technology, and, most importantly, teachers. Unsurprisingly, study after study tells us that disadvantaged schools aren't getting teachers who are qualified or effective.

How much difference does having a good teacher really make? More than you can imagine. To quote a policy brief from the Center for Comprehensive School Reform and Improvement:

[W]e now know that good teaching matters tremendously. One influential study in Tennessee found that two groups of students who start out with the same level of achievement can end up 50 points apart on a 100-point scale if one group is assigned three ineffective teachers in a row and the other is assigned three effective teachers in a row. A more recent study in Texas found that the impact of classroom teaching is so great that “having five years of good teachers in a row could overcome the average seventh-grade mathematics achievement gap between lower-income kids and those from higher-income families.”

Looking at it from another angle, multiple researchers have quantified student achievement in terms of future dollars earned. Any way you cut it, an increase of one standard deviation in math performance, for example, can be worth between $100,000 and $200,000 over the course of a student's career. For a person whose annual salary is a respectable $50,000, that's an enormous bonus; consider the impact on someone trying to break a cycle of generational poverty.

The average teacher salary nationwide is about $46,700, with the average starting salary ranging in the low- to mid-30k range. That means that if the extra $540,000 going to the advantaged school annually were instead going to the disadvantaged school, the disadvantaged school could hire a dozen teachers or, if it wanted to pay above average, a smaller number of extremely effective teachers.
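
The hiring math, roughly -- national averages as quoted, with the $33,000 starting salary an assumed point in that "low- to mid-30k" range:

```python
# Rough hiring math using the figures above; the $33,000 starting salary
# is an assumed point in the "low- to mid-30k" range quoted.
funding_gap = 400 * 1_348      # ~$539,200/yr shortfall for a school of 400
avg_salary = 46_700
starting_salary = 33_000

print(f"Teachers at the average salary: {funding_gap / avg_salary:.1f}")       # ~11.5, i.e. a dozen
print(f"Teachers at a starting salary:  {funding_gap / starting_salary:.1f}")  # ~16.3
```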

Schools and students that need effective teachers the most aren't getting the resources to acquire them. The funding gap is inexcusable, but the result of the funding gap is catastrophic.

Fantastic Florida

Sunday, August 21, 2005
"10th grade reading scores are declining," the Miami Herald warned in June of 2004. "Broward [County] kids slip in FCAT reading," the paper reiterated a year later. Yet while things look pretty dismal in Florida -- a state where nearly a third of 4th graders can't pass the reading assessment and less than half of 8th graders can -- the actual story is far more frightening. Come, if you dare, down a path where tens of thousands of Florida schoolchildren are nearing the horizon of their public education barely able to read.

The Florida Comprehensive Assessment Test, or FCAT, is not the most rigorous test in the country. 28% more Florida 4th graders passed proficient on the FCAT reading test in 2003 than on the reading test of the National Assessment of Educational Progress (NAEP), a nationally recognized benchmark of rigor. In 8th grade math, 33% more kids passed the FCAT than the NAEP (Source: Education Trust education watch state summary).

Worse yet is high school, where the FCAT is a required graduation exit exam which can be taken up to six times between 10th grade and 12th grade. Here, the test is even weaker. According to a study done by the Florida Department of Education, a student who passed both the high school FCAT math and reading with the minimum passing scale scores of 300 would receive a 410 verbal and 370 math on the SAT, or a 780 composite.

Chew on that one for a moment. You can graduate from high school in Florida while performing more than 200 points below the national average on the SAT. A 780 (or the equivalent ACT composite of 15) doesn't get you into a single decent state college; not the University of Florida, Florida State University, University of Miami, University of North Florida, Florida A&M or Florida Gulf Coast University. But Florida's average SAT score is only about 50 points lower than the national average (FL is at nearly 1000), so what's the problem?

The problem is, a mind-blowing number of kids aren't passing the FCAT. A mere 32% of students passed the grade 10 reading test on the first try in 2005, with African-Americans checking in at a distressing 13% and Latinos at 22%. We're not talking just missing by a hair, either; back in 2003, when a third of all 10th graders were passing proficient (level 3 or above), an equal third were wallowing in level 1.

167,000 students took the 10th grade test in 2003 (Source: Florida DOEd). So, put another way, 105,210 10th graders couldn't even get a 410 SAT verbal!

105,210. That's the population of New Haven, CT, or Ann Arbor, MI. Surely though, the story isn't actually THAT bad. Well, Florida doesn't release cumulative pass rates (the percent of students passing when all six tries are thrown together), but it does report the pass rate of 12th graders taking the FCAT; essentially, the last-chancers. Here, we have nearly 19,000 students taking the reading test, and 20% of them passing. That leaves approximately 15,200 Florida 12th graders failing, after six tries and countless support measures, to pass a rudimentary reading test. 15,200 -- the size of the undergraduate population of my own University of Virginia. So take U.Va., except replace it with kids who can hardly read.

So how can Florida's average SAT score be so solid while its average FCAT scale score hovers at the 410 SAT verbal level? Simple: The situation is actually worse than it looks; the high-performing kids (the ones who are actually taking the SAT and going to college, hence the decent scores) are yanking the average FCAT score up.

According to the College Board, 87,000 Florida high schoolers took the SAT in 2004. Their average score on the verbal section was ~500, which equates to a 331 on the FCAT. Since the entire test-taking population in 10th grade was 167,000, that means that the non-SAT takers had to be averaging at best a 261 to come up with the actual state average of 296.

Oh, but you say, the SAT takers were mostly juniors and seniors, the FCAT takers sophomores. True enough, but even if you pretend the SAT takers were getting a 450 on their verbal (which is a 311 on the FCAT), you're still looking at a 281 for the non-SAT takers.
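
Here's that back-calculation as a sketch. The quoted inputs are approximate, so the implied averages land within a few points of the 261 and 281 cited above:

```python
# Back-calculating the non-SAT takers' average FCAT score. Inputs are
# the approximate figures quoted above, so results land within a few
# points of the 261 and 281 cited in the text.
total_takers = 167_000   # 10th grade FCAT takers, 2003
sat_takers = 87_000      # Florida SAT takers, 2004
state_avg = 296          # average FCAT reading scale score

for sat_verbal, fcat_equiv in [(500, 331), (450, 311)]:
    non_sat = total_takers - sat_takers
    implied = (total_takers * state_avg - sat_takers * fcat_equiv) / non_sat
    print(f"If SAT takers average {sat_verbal} verbal ({fcat_equiv} FCAT): "
          f"non-takers average ~{implied:.0f}")
```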

How bad is a 281, or a 261? We're talking 350-300 verbal SAT. We're talking, can barely read. We're talking, these are the 45% of Florida kids who don't graduate.

To say that Florida is failing its kids is a gross understatement. Florida is flat-out neglecting its kids. But while this post focuses on Florida, the crisis isn't isolated to the Sunshine State. To say that scores are up or down, or to say that a certain percentage of students are failing, means nothing unless you can put it in terms of severity. The media must help us understand that average scores don't necessarily mean most kids are huddled around the middle and furthermore that the scores are meaningless in an absolute sense unless linked to specific degrees of knowledge. If you can be "proficient" with a 780 composite SAT, "proficiency" cannot be assumed to be a legitimate term.

Step back -- this is one state, one of fifty. It has 80,000 kids who can't read at a reasonable level and more than 15,000 who can't pass a rudimentary test on the sixth go-around. Tens of thousands of actual people whose prospects and potential are being stunted by an education system that is damaged to the core.

And that's just one state.

Do we need high standards? Hell yes. They just need to be a whole lot higher and a whole lot more effective than they are right now.

Edtroductions

Saturday, August 20, 2005
On test results released a few days ago, black and Hispanic 7th graders in California were performing at the same level as white 3rd graders.

In Florida, passing the high school exit exam in reading is equal to getting a 410 verbal on the SAT; 46% of Florida 10th graders failed the exam on their first try.

18 states currently lack unique student identification numbers with which to track individual students as they move through the grades.

Nationally, the high school graduation rate hovers somewhere around 70%, 50% for minorities.

On average, 36% more students pass their state assessments than pass the National Assessment of Educational Progress.

If you are born into a low-income family, you have less than a 10% chance of ending up with a bachelor's degree.

Hi, I'm Elliot H. You may remember me from such blog posts as "How to not build a better high school," highlighted on the excellent blog Eduwonk. I'm a fourth year at the University of Virginia (hence the moniker; for those who don't know, we Cavaliers are nicknamed Wahoos -- it's a fish that can drink its weight in water. I will neither confirm nor deny the stereotype), and I just finished an illuminating ten-week internship with the quality folks over at the Education Trust. At U.Va., I serve as the executive editor of the university newspaper, the Cavalier Daily.

I think it's worthwhile to justify the existence of this blog; after all, I can't blame you for being skeptical about the musings of a 21-year-old college kid who has never set foot inside a classroom as an educator. I've noticed a gap in the edublogosphere: most blogs tend to fall into one of two categories, compiling the news or analyzing it. The missing feature is blogs critiquing the news.

Education journalists are understandably hard-pressed to delve deeply into key issues. When you have to produce three articles a week on topics ranging from school board meetings to test scores to school violence, it's not hard to see why. But as a result, the picture of American education that emerges is only a shadow of the truth.

That's one of the purposes of this blog: To draw connections among the tremendously diverse media coverage of education, and to burrow down another level on the important articles. You aren't going to find the data I reeled off above in any one newspaper or report; it's drawn from scholarly research, think tanks, advocacy groups, state and federal departments of education and, yes, the media. There's so much out there, one Wahoo can't hope to synthesize it all. But I can at least help put it in context.

Which segues into the other purpose of this blog, which is more ideological: To ask the question, "Are our kids truly learning?" By this I mean going past the marquee of test scores and statistics to get at the pedagogy behind the numbers. What does passing proficient on a state assessment mean a child can actually do? How deep and broad is his or her knowledge? Is there critical thinking occurring? Are we giving our kids an equality of opportunity, the tools with which to achieve their fullest potential -- as defined by them?

I make a pledge to the reader, here and now. Whenever reasonable, I will back up what I say with facts and figures and I will provide their sources. I am not looking to hoodwink or mislead or manipulate. I have my beliefs and my biases, but years of writing opinion columns and lead editorials have taught me that convincing someone is immeasurably more rewarding than persuading them. Moreover, I am forever in a process of learning in this dazzling, boundless arena. I welcome and encourage comments, free debate, and healthy criticism. If you want to contact me directly, shoot me an email at edwahoo@gmail.com.

A final disclaimer: My work and words are my own, and in no way represent the institutional opinion of The Cavalier Daily, its managing board, or any other organization I am currently or have been previously affiliated with.

Wahoowa.

About the author

Wednesday, August 17, 2005
The author of this blog -- formerly known as EdWahoo -- is Elliot Haspel, currently a graduate student in Education Policy & Management at the Harvard Graduate School of Education. I am a graduate of the University of Virginia and was a member of the 2006 Teach for America Phoenix corps, where I taught 4th grade for two years. I have also served as a summer intern at The Education Trust. I welcome comments, and please don't hesitate to contact me personally. I hope you enjoy the blog!