Assessment to Measure Student Learning
Determining the impact of professional development on student learning outcomes proved difficult. Initially, I planned to use Scholastic Reading and Math Inventory data to compare student scores, since I was familiar with both the assessment and its scoring. I did not realize that the district had decided to retire that platform as of the 2020-2021 school year. As I was designing my action research, I struggled to find reliable assessment data that would give me an accurate picture of student learning. In addition, the benchmarks for scoring the available assessment data had been established pre-pandemic. Comparing pre-pandemic scores to pandemic scores did not seem fair: we would be comparing a six-hour school day to a two-and-a-half-hour school day. Students moved from teacher to teacher and from virtual to in-person instruction as we progressed through the Health Department's regulations and parents decided what was best for their families. The school district decided to roll out the Star assessments from Renaissance.
Napa Valley Unified School District adopted Renaissance Star testing during the 2020-2021 school year. Students took the first test before teachers were trained in how to use the new platform, and I had difficulty getting usable data because I was unfamiliar with the reports the program provided. Professional development training for teachers was made available in the winter of 2021. Within Renaissance, there are five different scales to choose from: the Lexile Scale, PARCC, the Star Enterprise Scale, Smarter Balanced, and the Star Unified Scale, which facilitates comparison across varying Star test types. The Star Unified Scale allows assessments that are part of Star Early Literacy to be included in the Star Reading Scale. In addition, the Star Unified Scale uses the measurement properties of a Rasch score scale, providing increased precision and one consistent scale across the Star computer-adaptive assessments.
Because I asked the three teachers to send me their Star data while I was still unclear about these options, each teacher initially sent me data reported on a different scale. Only once I had all their data on the same scale could I complete my analysis of reading and math for the students taught by the PLC. But to what, exactly, could I compare their data? Comparing it to Star benchmarks created pre-pandemic would not yield useful results. So I decided to compare the PLC data to the district Star data for the fourth grade, allowing comparisons of an identical time period with students of the same age.
I could not compare the data among the three teachers because the students had been re-allocated when we entered phase 2, which fell after the first Star tests and before the winter Star tests. So I compiled all the scores for the fourth-grade students involved in my PLC analysis and did likewise for all the fourth-grade students in the district.
In this way, I removed as many variables as I could in order to analyze clean data. The trendline for the district data in both math and reading was flat, with a slope of 0. The PLC trendline slope was 0.32 for reading and 0.47 for math. These differences in the PLC reading and math scores were notable and will require further study: the PLC had accomplished meaningful learning gains in both reading and math during a pandemic. I recommend showcasing the three teachers in this study as models of how to teach during a pandemic. One teacher in the PLC was a presenter at the e-Learning Lab workshop, all three teachers participated in the e-Learning Lab workshops, and one teacher sought out professional development from U.C. Davis. Two of the teachers hold Master's degrees, and one holds a Bachelor's degree. I chose this group to study because of their reputation for a strong culture of excellence within the PLC model. Watching them in action was a master class for me, and I learned a great deal from these excellent teachers.
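For readers interested in reproducing the trendline comparison, the analysis above amounts to fitting a least-squares line through mean scores across test windows and comparing slopes. The following is a minimal sketch using NumPy; the score values and the three test windows shown here are hypothetical placeholders, not the actual district or PLC data.

```python
import numpy as np


def trendline_slope(test_windows, mean_scores):
    """Fit a least-squares line through mean scores across test
    windows and return its slope (the 'trendline score')."""
    slope, _intercept = np.polyfit(test_windows, mean_scores, deg=1)
    return slope


# Hypothetical mean Star Unified Scale scores at three test windows
# (fall, winter, spring), coded as 0, 1, 2. These numbers are
# illustrative only.
windows = [0, 1, 2]
district_reading = [900.0, 900.0, 900.0]  # flat trend: slope near 0
plc_reading = [900.0, 903.0, 906.0]       # upward trend: positive slope

district_slope = trendline_slope(windows, district_reading)
plc_slope = trendline_slope(windows, plc_reading)
```

A flat series yields a slope near zero, as with the district data, while a rising series yields a positive slope, as with the PLC data.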