What Happens When We Stop Measuring Readiness?

Today, Inside Higher Ed published my op-ed on UC’s test-free admissions experiment and what the system’s own data is now revealing.

I didn’t write it to relitigate the SAT debate.

I wrote it because something uncomfortable is happening downstream—and students are paying the price.

Over the past several years, the University of California removed standardized tests as part of a broader shift in admissions policy. The intent, at least rhetorically, was to expand access and reduce barriers. But intent and outcomes are not the same thing.

According to a UC San Diego working group report, the number of students arriving without even middle-school-level math skills has increased nearly thirty-fold. That’s not a rounding error. It’s a signal.

When academic readiness is no longer measured consistently across schools, unpreparedness doesn't disappear. It simply shows up later: in remedial coursework, in delayed progress, in students quietly stopping out, and in high attrition from demanding majors like engineering and other STEM fields.

Universities can eliminate tests.
They can’t eliminate calculus.

One of the arguments I make in the piece is that removing standardized measures doesn’t make systems more humane by default. In many cases, it does the opposite. It shifts risk away from admissions offices and onto students—especially first-generation students and those from under-resourced high schools—who are admitted without clear signals about whether they’re prepared for the academic demands they’ll face.

I also write from experience.

I finished high school with a roughly 3.5 GPA, weighed down by mediocre ninth- and tenth-grade years before I figured out how to work. A strong SAT score helped balance my application and signal what my transcript alone could not: that I had caught up, matured, and was ready.

In a GPA-only system, students like me are quietly penalized for early missteps—even when they demonstrate real growth. We tell teenagers that improvement matters, then design admissions systems that permanently punish them for a few B’s at age fourteen.

That contradiction matters.

This isn’t an argument for test absolutism. It’s an argument for feedback. For measurement. For honesty about readiness—so support can be targeted early, expectations can be aligned, and opportunity doesn’t turn into attrition.

If you care about access, completion, and long-term outcomes—not just admissions optics—I hope you’ll read the piece.

👉 Read the full op-ed at Inside Higher Ed:
https://www.insidehighered.com/opinion/views/2026/01/05/ucs-test-free-experiment-isnt-going-well-opinion

I welcome disagreement. But let’s at least argue from data—and from what happens to students after they’re admitted.