In the previous post, we discussed the Collegiate Learning Assessment (CLA+), which is said to show that 40% of college graduates lack the basic reasoning skills needed to hold down a white-collar job. Not surprising, although, for style’s sake, I sort of wish they’d gone with 40.27% or 39.94% – nothing says Science! like a couple of extra decimal places, especially when arbitrary gross-level assumptions – what qualifies as a white-collar job, what counts as an acceptable level of reasoning ability (whatever that may be) – are made yet never discussed. But hey, they didn’t fall for it, and used a nice round 40%, which is encouraging, in a way.
But they did a couple other questionable things that aren’t so benign. We are invited to judge the value of a college education based on the obvious improvement in reasoning abilities between uneducated freshmen and presumably well-educated seniors. Check this out:
The percentage of freshmen who are intellectual cripples is much higher than the percentage of seniors in the same boat. So, see! College works – input a high percentage of drooling idiots, process them at great expense over 4, 5 or 6 years, output a smaller percentage of drooling idiots!* For the 23% of students** who went from being intellectually incompetent as freshmen to being fully intellectually qualified to be an office clerk as seniors, that 5- or 6-figure debt*** is worth it! Colleges get to keep on keepin’ on! Everything is cool! SHUT UP!
Now, one would assume (and we know where that leads) from this pretty graphic that the freshmen and the seniors are the same group. Right? Otherwise, it wouldn’t be cricket to compare them directly like this.
One would be wrong:
The test, which was administered at 169 colleges and universities in 2013 and 2014 and released Thursday, reveals broad variation in the intellectual development of the nation’s students depending on the type and even location of the school they attend.
Since it is unlikely that very many students could go from being freshmen in 2013 to being seniors in 2014, it seems we are not comparing apples to apples. This is noted in the article by a critic:
Mr. Arum was skeptical of the advantages accrued. Because the test was administered over one academic year, it was taken by two groups of people. A total of 18,178 freshmen took the test and 13,474 seniors. That mismatch suggested a selection bias to Mr. Arum.
(Aside: selection bias? No, I don’t think that quite captures the issue.)
“Who knows how many dropped out? They were probably the weaker students,” he said.
Ms. James said first-year students expect to sit for a battery of tests when they arrive at college, but seniors had little incentive to take the exam. The CAE attempted to statistically correct for the selection bias, but because the test wasn’t administered to a single group over four years, there were inherent limitations.
“It’s accurate to the extent possible,” she said.
The CLA+ is graded on a scale of 400 to 1600. In the fall of 2013, freshmen averaged a score of 1039, and graduating seniors averaged 1128, a gain of 89 points.
“Statistically correct” for problems in the base data? OooKaaay. So, we’re *not* following a group of freshmen to see if, 4, 5 or 6 years later, having taken all the required courses, they have learned how to think at an assistant retail manager level as seniors? We are in fact just comparing the results of group A with the results of an unrelated group B? That doesn’t seem right. But we are told that this comparison is “accurate to the extent possible,” which is less than totally satisfying.
Time to roll out some math! Let’s blow out the numbers and recalculate:
Here we just back into the numbers of students in each classification – it’s rough, but illustrative. Following Mr. Arum’s insight above, what if the difference represents academically poorer students dropping out? (Of course, this isn’t true, because these are not the same students in the two groups. Again, just using this to illustrate the issue.)
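For anyone following along at home, the backing-in is just percentages times head counts. Here’s a quick Python sketch – the 18,178 and 13,474 head counts come from the quoted article, while the 63% and 40% “below the bar” rates are rough readings of the chart, so treat the results as ballpark figures, not gospel:

```python
# Rough back-of-the-envelope: turn the reported percentages into head counts.
# Head counts are from the quoted article; 63%/40% are approximate
# "below the bar" rates for freshmen and seniors, read off the chart.
freshmen_tested, seniors_tested = 18_178, 13_474

frosh_below = round(freshmen_tested * 0.63)   # freshmen below the bar
senior_below = round(seniors_tested * 0.40)   # seniors below the bar

print(f"freshmen below the bar: {frosh_below:,} of {freshmen_tested:,}")
print(f"seniors below the bar:  {senior_below:,} of {seniors_tested:,}")
print(f"missing between tests:  {freshmen_tested - seniors_tested:,}")
```

Note that the two percentages are taken over two different-sized groups – roughly 11,500 incompetent freshmen versus roughly 5,400 incompetent seniors – which matters for everything that follows.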
Now, this is an extreme case, and, as mentioned above, completely unreal, since we’re not talking about the same students in the two groups. But it does illustrate what would happen if students on the left-hand side of the graphic were more likely to drop out than students toward the right: it would make it seem that college improves a student’s reasoning abilities, when what is actually happening is that college is largely just weeding out those with particularly bad reasoning abilities, so that the mix at the senior end of things is simply richer in students who brought at least minimal reasoning ability to college with them. In other words, in this extreme, totally-invalid-except-as-an-illustration-of-the-concept example, 4, 5, or 6 years of college improves the reasoning ability of only about 11% of the students enough to move them from “can’t get a job in an office” to “could get a job in an office.”
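The weeding-out arithmetic just described can be sketched in a few lines of Python. Same loud caveat as above: these are NOT the same students, and the 63%/40% inputs are rough chart readings, so this is an illustration of the mechanism, not a real estimate. With these inputs the “genuinely improved” share lands around 10% of seniors – same ballpark as the figure in the text, with the exact value depending on how the chart percentages are rounded:

```python
# Illustration only: these are NOT the same students, as noted above.
# Inputs: head counts from the quoted article; 63%/40% "below the bar"
# rates for freshmen and seniors are rough readings of the chart.
freshmen, seniors = 18_178, 13_474
frosh_below = round(freshmen * 0.63)    # freshmen below the bar
senior_below = round(seniors * 0.40)    # seniors below the bar
dropouts = freshmen - seniors           # test-takers who vanished in between

# Extreme assumption: every dropout came from the "below the bar" group.
below_if_no_one_learned = frosh_below - dropouts
# Even if nobody learned a thing, the senior "below the bar" rate falls:
apparent_rate = below_if_no_one_learned / seniors        # ~50%, down from 63%
# Residual improvement that could actually be credited to college:
truly_improved = below_if_no_one_learned - senior_below
improved_share = truly_improved / seniors                # ~10% of seniors

print(f"apparent rate with zero learning: {apparent_rate:.0%}")
print(f"students who genuinely improved:  {truly_improved:,}")
print(f"as a share of seniors:            {improved_share:.0%}")
```

The point of the sketch: dropout alone, with zero learning, moves the headline number from 63% down to about 50%, so most of the advertised “gain” can be manufactured by attrition rather than education.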
How could that not be worth $100k+?
Conclusion: total junk science. Its flaws are so obvious that only 93.24% of college graduates would fall for this. But more damning: is crap like this supposed to *encourage* people to go to college? Yowsa!
* Note: there’s no indication that simply being a drooling idiot is enough to deprive a student of getting a degree – that would be mean!
** Merely subtracting the 40% of incompetent seniors from the 63% of incompetent freshmen to get the 23%. In addition to the many other assumptions discussed, this also assumes that nobody starts out as a minimally competent freshman and is rendered an idiot by college – an assumption one might be loath to make, given the anecdotal evidence ready to hand.
*** For any recent college graduates who may be reading this, this refers to the $10,000 to over $100,000 that *you* borrowed and agreed to pay back with the revenues generated from the lucrative career in office admin your ‘studies’ degree will get you. If you’re lucky.