Educator Effectiveness is About Learning and Growth Opportunities
And that should never end

Blog post by Megan Boren, SREB Program Specialist

Lessons learned from three years of work in eight states

Providing real-time feedback and growth opportunities to teachers — the real purpose of educator effectiveness systems — requires school leaders who understand and value instructional leadership and who are properly trained in those skills. In any career path, professionals need to know how they’re doing and need help honing their craft. Teaching is no different: teachers need a coach, and they need feedback. Even the best need professional learning opportunities in school with their colleagues as well as at conferences and workshops.

When state evaluation systems were initially rolled out, sometimes hastily, they struck many educators as all about test scores and accountability, not growth and improvement. Many school leaders weren’t given the proper training and tools to become instructional leaders, nor did their district supervisors provide them with coaching and feedback to grow in that role.

Principals, assistant principals and teacher-leaders need to know how to observe teachers, use formal and informal data, coach them when needed and steer them toward truly helpful professional learning that will grow their practice. All of that is hard enough, but school leaders also need to understand the value this leadership brings and how to make time to be instructional leaders for their teachers.

States need coaching too

Just as educators need support and coaching, so do state agency teams. It’s about feedback and growth for everyone. That’s how we all get better at the mission of educating and advancing our students to their fullest potential.

From 2015 to 2018, a small team of SREB educator effectiveness staff led by former vice president Andy Baxter worked on a grant that provided financial and technical assistance to half of the SREB states. Specifically, the team assisted with the educator evaluation and growth systems adopted by each state. SREB coordinated grants from the Bill & Melinda Gates Foundation of up to $1 million each in eight states, coupling them with additional support from our team, including a robust needs assessment at the outset, detailed progress monitoring reviews over three years and large-scale focus group research on the ground in the states.

Most states had rolled out their evaluation systems years earlier as part of a wave of reform. Some systems were hurriedly designed, with little testing or time for school leaders and teachers to adjust to a complex process of observation, data gathering and feedback loops. Compliance with the minimum requirements of evaluation became a lifeline for most school leaders. Where schools were using student growth measures, educators often perceived that the system was about data and accountability, when its intent was instructional growth.

We knew the states needed help, and we had our work cut out for us.

This project became a labor of love for our small educator effectiveness team. We set out to help states tackle large-scale redesign of their evaluation and growth systems, to better implement those systems and to better communicate their true purpose. Along the way, the state agency teams and our staff learned a great deal about these goals — and much more.

More organizations and funders should prioritize helping state agencies with large-scale initiatives using a truly collaborative approach

Every state agency we worked with deeply appreciated the support from SREB, saying they wished SREB would offer more of this kind of assistance in the future. The dollars were appreciated, but even more so were our efforts to share research and best practices, conduct focus groups, connect state teams to each other and to other leading organizations as a community of practice, and regularly reflect and plan with states on new strategies to try.

I think more organizations and funders should prioritize helping state agencies with large-scale initiatives using a truly collaborative approach. We were able to assist in many ways: capacity-building, strategic planning, communications planning, research, convenings, procurement issues, technology planning and data analysis. We instituted a six-month iterative funding review so states could regularly alter their funding strategies to respond to real-time needs and results.

Ultimately, evaluations should not be viewed as a compliance task, but rather as one tool in a cohesive system of structures related to effectiveness: preparation, development, compensation, recruitment and retention of high-quality educators. Through the grant, many states made gains in shifting their evaluation systems from compliance to an opportunity to coach teachers. But we learned that this work must persist for years to come — school leaders need access to quality observation and feedback training annually. Teachers need to know where to find quality professional learning resources, and everyone needs to know how student data properly ties into the conversation. Scaling this kind of support to all districts and schools in a state continues to be a huge challenge, especially for those states with small teams and budgets dedicated to this important work.

To learn more about this project, read our report, Improving Educator Feedback and Support: Lessons from Eight SREB States.

More about educator effectiveness at SREB.