I am a firm believer that learning never ends, and as a future educator, I aim to support the continuous growth of my students’ knowledge and skills by providing multiple opportunities for formative assessment and revision before any summative demonstration of learning. Unfortunately, not all mandated assessments for students in United States schools offer the “test, reflect, revise, and repeat” cycle that nurtures continuous learning the way I strive to in my unit planning. In my own experience, Massachusetts’s MCAS testing was a high-stakes exam for my teachers, but students also felt the pressure to meet statewide curriculum and performance goals. The value of standardized testing is a hot debate topic among educators, and while the progressivist teacher in me tends to stand against standardized testing (after all, standardized tests focus on state statistics and curriculum in a way that reduces the centrality of the student), I was highly impressed with how Mr. J employed Vermont’s STAR Reading exam in his classroom.
Last Thursday, Team Nova’s eighth graders completed the STAR Reading exam, an online reading test that gathers various literacy statistics for participating students (approximate Lexile levels, reading grade levels, how students compare to their peers and school community, etc.). Students take the exam between three and five times a year as a means of measuring their progress as independent readers. While many of our students scored in the eighth-grade target areas for the STAR, some results were pleasantly surprising, and others came as a bit of a shock. Expecting the score to be a “one and done” deal like most standardized tests I have taken in the past, I was elated to learn that students who scored lower than Mr. J had predicted would be offered a second opportunity to take the STAR. On Tuesday, students whose scores fell below what Mr. J believed their capabilities to be completed a second round of STAR testing. The results were far better than Thursday’s: most second-round students scored at or above their target areas, showing clear growth from their September scores.
There are about a million thoughts that come to mind when I ponder the STAR exam and standardized testing. For starters, and parallel to my last post about “the Fridays,” how much does the time of day or week impact student performance on standardized tests? I know I will have little control over when certain tests are given, but will I have opportunities to advocate for students who may benefit from testing on a particular day or at a particular time? Next, does repeating the STAR exam skew the literacy data for our classroom? Student data uses the best score from the STAR exam(s) taken within a certain timeframe, so while each student is only counted once, comparing one student’s best of two scores against another student’s single score seems like a place where equitable testing could get lost in translation. Where do you draw the line for who qualifies to retest? What about students whose scores are thought to reflect a “bad” day? Students will have days when some material is more difficult to digest than on others, no matter the subject area or task assigned in class. Should student data really be based on a “better day” set of test scores? What, then, defines true student capabilities?
While I walk the line between agreeing and disagreeing with my students retaking the STAR, I am positive that I support giving students the opportunity to learn from their past experiences in order to revise for the future. With new “What I Need” (WIN) intervention groups forming based on our new STAR data this week, I look forward to seeing how much students grow in their humanities, science, or math skills over the course of their third-term WIN placements.