In a recent survey, three-quarters of English language arts and math teachers reported that their students had taken an interim assessment in the 2021–22 school year, a tool commonly used to gauge student progress. This data point illustrates how states and districts continue to prioritize strategies and tools such as assessments to address pandemic recovery and chronic inequities in student outcomes.
But how much do we actually know about the efficacy of commercial interim assessments? Over the past several years, EdReports and the Center for Assessment (CfA) have doggedly pursued the potential of Consumer Reports-style reviews of these products as one way to answer this question. Teachers need to know whether assessment results truly represent what students are, or are not, learning. Demand in the $1+ billion assessment market is growing, but the supply of credible data about these products' effectiveness is not.
As of May 2023, EdReports and CfA have had to pause their review efforts indefinitely due to a lack of publisher participation. However, we hope that by sharing what we’ve learned on our extensive journey, and the tools we have created, educators can exercise their purchasing power to press the assessment market for greater transparency and quality. Students are counting on us—they deserve evidence-based support to help them learn and grow.
What is an Interim Assessment?
The term “interim (or benchmark) assessments” covers a range of designs and purposes. Broadly, they’re tests administered at different points in the school year to gauge student progress. Usage is widespread, with survey data showing the heaviest reliance in urban districts—those that serve disproportionately high numbers of historically marginalized students.
These assessments have real-world impacts. Many educators make instructional changes based on the results—decisions that can have profound and lasting effects on the trajectories of countless learners. It is critically important to know if students are being assessed on what they have learned so teachers can make informed decisions.
The Long Pursuit to Conduct Rigorous Reviews of Commercial Interim Assessments
Starting in 2016, EdReports and the Center for Assessment worked extensively with expert educators and scientists to build a process and design tools to review commercially available interim assessment products. These assessments have long been a black box to educators and there has never been a third-party review of publishers’ marketing claims.
But the drive to review commercial interim assessments came with many obstacles. For one, there are no blueprints for conducting this type of review. While EdReports was able to leverage its experience conducting reviews of instructional materials, we had to incorporate a psychometric component into the process to align with test design and delivery.
Additionally, unlike EdReports’ reviews of K–12 instructional materials, where products can be purchased independently, reviewing interim assessments requires publisher consent because the products’ test questions, reports, and other tools are proprietary. Taking part in a review represented a significant publisher commitment, and many declined our invitation.
These obstacles created delays in our review process but, by the summer of 2022, EdReports and CfA were able to secure commitments from two publishers. Nearly 70 educator reviewers were trained on how to use the review tools and conduct a rigorous consensus process. The organizations formally announced their intentions to begin the review process, and the response from the education field was optimistic about this important step toward transparency and increasing the demand for excellence.
Lack of Publisher Participation
Not long after announcing the start of the reviews, one of the participating publishers opted not to share the information required to execute the review process. This development forced EdReports and CfA to reconsider how and whether to proceed. Without the context of other reviews to compare against or firm commitments from other vendors, we collectively agreed that releasing a single review would not have been sufficiently meaningful or actionable for the field.
We believe that this is a real loss to the field. During this uncertain time for school funding and student learning acceleration efforts, publicly-available, independent reviews of interim assessments could have been a powerful resource to help districts make hugely consequential instructional and budgetary decisions in service of student outcomes.
But we also believe that educators hold enormous power to push publishers for more information and ultimately shape the interim assessment field for the better. EdReports’ review tools and resources can support districts to better understand how a particular assessment fits within their local priorities. When educators can articulate what they need from an assessment product, they can leverage their role as current or potential customers to get the data and evidence necessary to make an informed purchase.
Resources to Bolster Your Publisher Conversations
Even without independent reviews, there’s a lot that districts can do before purchasing commercial interim assessments to ensure what they choose will truly serve teachers and students. Questions publishers should be able to answer include:
- What’s your vision for how the product should be used, and what’s the research base supporting this vision?
- How should assessment scores be interpreted, and what decisions can those scores inform? What evidence supports the idea that using the data in this way helps improve student outcomes?
- How were the product’s test questions evaluated, and were educators involved?
- Are all the test questions standards-aligned? If so, what evidence supports that claim?
States and districts can leverage the review tools created by EdReports and CfA. While we acknowledge that the tools are extensive, and not all school systems have the personnel or capacity to parse through evidence in a systematic and calibrated manner, we believe educators can still benefit from them. In addition to supporting the questions listed above, the tools can:
- Inspire adaptation.
- Guide educators to delve deeply into the purpose, alignment, and utility of assessments purchased and/or under consideration.
- Help bolster assessment literacy for educators through examination and discussion of criteria and evidence guides.
- Encourage reflection on intentions for the product, whether the product is designed to support those needs, and whether publishers can supply the research and documentation to ensure there is evidence to support marketing claims.
- Provide indicators that schools and districts can apply directly. For example, educators could use test events from their own students (with identifying information removed) on products already in use to examine some of the alignment criteria highlighted in gateway 1, criterion 2, regardless of whether vendors can or will provide information such as test blueprints.
Interim Assessment Review Tools and Implementation Guide
The review tools support a sequential review process that reflects the importance of alignment to college and career-ready standards, evidence for publisher claims about design elements, and the utility of results to support teachers in appropriate ways. The tools also consider other attributes of high-quality assessments as recommended by educators and assessment experts.
Coming Soon: District Assessment Procurement Protocol (DAPP)
This tool walks educators through three activities designed to inform the assessment procurement process: 1) getting clear on use, 2) identifying desired assessment features, and 3) evaluating the technical quality of different assessment options.
July 12, 2022 – EdReports and Center for Assessment Announce Collaboration to Review Interim Assessments
EdWeek Market Brief – Are Interim Assessments Living Up to Their Billing? New Review Aims to Find Out
District Administration – Here comes fresh guidance in choosing the right testing platforms