Accessible assessment in K-12: a data quality argument

Every district collects formative assessment data. Very few have stopped to ask whether the data they’re collecting actually measures what they intend it to measure.
When a student with dyslexia takes a timed, text-heavy reading activity without text-to-speech, the score reflects their decoding barrier, not their comprehension. When a multilingual learner encounters unfamiliar academic vocabulary in English during a math assessment, the score reflects their language acquisition stage, not their mathematical reasoning. When a student with anxiety is placed in a leaderboard-driven environment, the score may reflect their stress response, not their content knowledge.
These are not edge cases. They are systematic patterns affecting a substantial portion of every district’s student population. In assessment science, they have a precise name: construct-irrelevant variance.
What is construct-irrelevant variance and why does it matter?
Construct-irrelevant variance is variance in student scores caused by factors unrelated to the knowledge or skill being measured. In plain terms: the assessment is measuring the wrong thing, and instructional decisions built on that data are imprecise at best and counterproductive at worst.
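The idea can be sketched in a few lines of Python. This is a toy model with hypothetical numbers, not a psychometric analysis: an observed score is modeled as the true construct plus barrier-induced noise, and removing the barrier shrinks the irrelevant variance without touching the construct.

```python
import random
from statistics import pvariance

random.seed(0)

# Toy model (illustrative numbers only): observed score =
# true ability (the construct) + barrier-induced noise
# (construct-irrelevant variance).
true_ability = [random.gauss(70, 10) for _ in range(5000)]

def observed(ability, barrier_sd):
    return ability + random.gauss(0, barrier_sd)

with_barrier = [observed(a, barrier_sd=15) for a in true_ability]  # no accommodation
accommodated = [observed(a, barrier_sd=2) for a in true_ability]   # barrier removed

# Extra variance beyond the construct itself:
print(pvariance(with_barrier) - pvariance(true_ability))   # large: irrelevant variance dominates
print(pvariance(accommodated) - pvariance(true_ability))   # small: mostly the construct remains
```

In this sketch the accommodation changes nothing about the construct (true ability is identical in both conditions); it only strips out variance the construct never contained.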
Universal Design for Learning (UDL Guidelines 3.0, CAST, 2024) establishes the foundational principle: accessible assessment is not a retrofit applied after the fact. It is a design requirement from the start. The “average learner” for whom most tools were historically designed does not exist as a statistical reality.
CAST (2024), UDL Guidelines 3.0. NCEO Report #451 (2023).
NCEO Report #451 found that technology-based accommodations (text-to-speech, extended time, electronic delivery) were among the most consistently effective. They reduce construct-irrelevant variance without altering the underlying construct. Extended time for a student with a processing disorder does not make an assessment less rigorous. It removes a barrier that was never part of the construct.
Why is this a data argument, not just an equity argument?
Districts typically position accessibility as an equity concern, which it is. But the data validity argument is harder to dismiss.
NCES data shows that 15% of U.S. public school students receive services under IDEA, while English learners represent another 10.6%. Students who benefit from accommodations routinely represent 20% to 25% or more of a typical district’s enrollment.
NCES (2022-23): IDEA services. NCES (Fall 2021): English learners.
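As a back-of-envelope check, the two shares combine by inclusion-exclusion. The overlap figure below (students identified in both groups) is an assumption for illustration, not an NCES statistic:

```python
# Combining the two NCES shares; overlap is a hypothetical assumption.
idea_share = 0.15    # students served under IDEA (NCES 2022-23)
el_share = 0.106     # English learners (NCES Fall 2021)
overlap = 0.025      # assumed: students counted in both groups
combined = idea_share + el_share - overlap   # inclusion-exclusion
print(f"{combined:.1%}")  # prints 23.1%
```

Under any plausible overlap, the combined share lands squarely in the 20% to 25% range the article cites.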
If that share of your student population is taking formative assessments that systematically measure the wrong thing, then that share of your data is wrong. Not imprecise. Wrong. A district making instructional decisions on systematically biased formative data is not data-driven. It is amplifying the gaps it is working to close.
Data-driven instruction only works if the data is valid. Accessible assessment is the foundation for valid measurement.
What does genuine assessment accessibility require?
Genuine accessibility requires information available in multiple formats so that no single modality becomes the barrier. It requires multiple ways to demonstrate knowledge. And it requires assessment conditions that allow students with different processing patterns and anxiety profiles to perform without those profiles corrupting the data.
None of these can function as exceptions educators must request. By the time a teacher has filed a request and waited for approval, the formative assessment moment has passed. These conditions need to be present, by default, for every student who needs them.
How does Wayground approach assessment accommodations?
Wayground offers more than 25 accommodations permanently free for all U.S. educators. Not as an add-on tier. Not as a premium feature. As a baseline.
The reasoning is logical, not charitable. If accommodations are required for formative data to be valid, gating them behind a paywall means selling invalid data to the districts that cannot afford the upgrade.
Text-to-speech removes the decoding barrier for students with dyslexia and early language learners. Extended time removes the processing confound. Focus Mode removes animation and visual clutter, not as a concession to distraction, but as the removal of construct-irrelevant noise. Mastery-based practice produces cleaner data by removing the stress variable for students whose anxiety corrupts their performance under timed conditions.
What should districts look for?
CAST’s framework captures the design intention precisely: design for variability, not the average. The average learner does not exist. Assessment that cannot accommodate learner variability is not measuring student knowledge broadly. It is measuring the overlap between student profile and assessment format.
Two independent ESSA Level III studies, spanning nearly 13,000 schools across Texas and California, found consistent positive correlations between Wayground usage and outcomes. Accessible assessment is how both data quality and student engagement become possible at once.
See how Wayground’s built-in accommodations work, and why they’re permanently free for every educator.