Tennessee Takes the Temperature
This article was written by Jonathan Attridge, research analyst at the Tennessee Department of Education.
In spring of the 2014-15 school year, 68 percent of Tennessee teachers reported that evaluation improves teaching in their school, and 63 percent said it improves student learning. That marks a dramatic shift from 2011-12, when Tennessee became the first state to implement a statewide, multiple-measure teacher evaluation system that included a major student growth component. At the close of the first year of implementation, only around one-third of teachers believed the evaluation process was likely to improve either their teaching or student learning.
We have been able to watch these numbers steadily improve and monitor a host of other outcomes through an annual survey of all educators in the state. We ask teachers to weigh in directly on state, district and school-level policy initiatives. In 2014-15, over 36,000 teachers and almost 2,000 administrators (around 60 percent of the state's educators) provided feedback via the Tennessee Educator Survey. This survey has become an invaluable tool for state policymakers to collect feedback from the educators we serve as we chart a path forward on policies designed to improve teaching and learning.
The survey is conducted in partnership with the Tennessee Consortium on Research, Evaluation, and Development at Vanderbilt University, and offers an efficient mechanism for checking the temperature of the evaluation system and soliciting feedback from those who are in our classrooms instructing our students. We can compare this year's survey results to those of previous years to measure the progress our state has made and identify areas we need to focus on improving in the next year.
For example, although we have seen steady improvement in teachers' perceptions of evaluation overall, the percentage of teachers who say they believe the evaluation process is fair (68 percent) has changed little since 2012-13. In the next year, the Tennessee Department of Education will deploy evaluation coaches to districts and schools where perceptions of fairness are particularly negative, in the hopes of moving the needle on this indicator. The department has also launched programs aimed at improving instructional coaching (in partnership with the University of Pittsburgh) and programs to create collaborative relationships between teachers who struggle in particular areas of practice and those who have demonstrated success in those areas according to our evaluation rubrics (the Instructional Partnership Initiative).
The education landscape in Tennessee will continue to change in 2015-16, and the department seeks to ensure the teacher evaluation system can adapt. This year, Tennessee transitions to a new set of assessments, and new legislation adjusts the formula for calculating teacher evaluation scores in light of the new assessments. Over the next year, the department will focus on allowing greater district-level autonomy to determine the model of evaluation that works best at the local level. The department has created explicit goals around helping districts take advantage of the flexibility that already exists and increasing options for district autonomy in areas such as evaluation model selection, use of alternative growth measures and observation practices.
As we launch these initiatives, we will continue to track progress through our annual educator survey. Equally important, schools and districts will be able to monitor their own data in the same way. Last week, the department launched a website allowing schools and districts with survey response rates of 50 percent or higher to view their own results. Almost 1,100 schools (66 percent of eligible schools) and 125 districts (84 percent of eligible districts) received teacher results, while almost 75 districts (77 percent of eligible districts) received administrator results. In coming weeks, staff members in the department's regional offices will work with districts to launch strategic planning conversations based in part upon local survey results. Over time, we hope to find ways to improve the survey content and web tool to make the results increasingly useful at both the local and state level.