Happy Wednesday! As Team Nova wades knee-deep into our new “We The People” First Amendment debate unit, students are still thinking about westward expansion in the United States, likely because the closure that a graded summative assessment brings to a unit did not arrive until today. After nearly six weeks of readings, discussions, and a screening of James Cameron’s Avatar, Team Nova spent last Thursday taking their first (and probably only) major test of the year. Now, here’s a disclaimer: Mr. J. is not a fan of assessing student learning by an individual’s performance on traditionally styled tests (those with multiple-choice questions, short essay responses, matching sections, true-or-false items, and so on). I myself prefer to assess student progress and learning through writing prompts, performance assessments, personal communications, and other forms of assessment that do not require highly accurate recall on top of reading, writing, and comprehension skills. However, with EMS eighth graders moving on to Essex High School in a few short months, Mr. J. deemed it essential to our students’ academic and emotional preparation that they experience at least one class-long exam this year.
Unfortunately, final scores did not land as high as we had hoped on our proficiency scale, with Nova’s average proficiency score hovering around a 2.9 out of 4. While students are expected to perform at proficiency (meeting the standards for an overall score of 3 out of 4) on all assignments, a student’s final class grade is determined by the average of his or her assignment proficiency scores. A 2.9 is by no means a bad class proficiency average, but a few test-taking factors have Mr. J., my students, and me wondering just what prevented further test success for Team Nova: students were allowed to bring printed notes as long as the notes followed a given format and contained information on westward expansion written in their own words, and our humanities classes invested three whole days before the test in topic review. Were student notes disorganized, overfilled, inefficient, or missing key information? Did the review days not cater to student needs, or did we as teachers misunderstand where misconceptions existed? Did students not ask the right questions, or were they unaware of what they didn’t know? Was our formative data (which, for the most part, showed that students understood our classroom discussions and most key vocabulary terms) inaccurate or flawed? Did the low scores from the previous mapping quizzes discourage students when mapping appeared again on the test? I have so many questions about our westward expansion test, and from my experiences in the classroom thus far, I am positive that most students could point to more than one of these questions as their main obstacle on exam day. I am also confident that each student would name a different obstacle than the students sitting next to them. So, how does an educator process test data like this in a way that benefits his or her teaching while also supporting the students coping with the aftermath of low test scores right now?
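For readers curious about the arithmetic behind proficiency-based grading, here is a minimal sketch of how a final grade shakes out when it is the simple average of assignment proficiency scores on a 4-point scale. The scores, the student, and the proficiency_average helper below are hypothetical illustrations for this post, not Mr. J.’s actual gradebook.

```python
# Hypothetical illustration of proficiency-based averaging on a 4-point scale.
# The scores and helper name below are made up for this example.

def proficiency_average(scores):
    """Final class grade = simple average of assignment proficiency scores."""
    return sum(scores) / len(scores)

# One imaginary student: mostly proficient work (3s), one strong 4,
# and one weaker summative test score.
assignment_scores = [3, 3, 4, 3, 2.5]

average = proficiency_average(assignment_scores)
print(f"Class proficiency average: {average:.1f} / 4")  # prints 3.1 / 4
```

Under this kind of averaging, a single low test score nudges the overall grade down but does not sink it, which is part of why a 2.9 team average is disappointing without being disastrous.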
After some careful consideration, Mr. J. decided to scale the test by 8 points, raising our highest score from a 92 to a 100 and lifting other students’ scores almost 10 points above their originals. Students will not have the option to retake the exam, but extra credit is available for students who are concerned about their overall humanities grade. While I cannot say for certain what I would do in response to a respectable but still low team average on an exam I gave, I do know that I would reflect on the test scores as an indicator that something in my lesson implementation was weaker than it could have been. For now, Mr. J. and I can at least rest assured that our students have been exposed to traditional test-taking, and that we are all learning from the westward expansion test results as a classroom community.