Performance Reviews – think again (part 2)

Two weeks in and school is starting to find a rhythm. One of the few things new staff can be grateful for is that, for the most part, they will already have agreed their contractual terms, so there is no need for a 2017-18 performance review closing conversation. However, there will need to be a 2018-19 opening conversation, and two new policies to get to grips with (why school leaders up and down the country are writing individual performance review and PRP policies is yet another workload agenda point). The topic deserved closer inspection.

Conversations with the experts

Understanding requires mastery of four ways of looking at things – as they were, as they are, as they might become, and as they ought to be. – Dee Hock (Birth of the Chaordic Age)

Part 1 focused on how we got here, how things are, and the criticism of common performance management practice outlined in the “Could do better” CIPD report. Part 2 needs to focus on “as they might become,” for I am not convinced there is even a place for PRP in education. What “ought to be” is perhaps something very different indeed.

A conversation with Education Support Partnership Chief Executive Julian Stanley @edsupportukCEO about the work of the charity is quite sobering. Here is a charity working at the coalface, supporting teachers concerned enough, brave enough, worried enough to call for help. Come October, the proportion of calls relating to pending or recently concluded performance review and PRP conversations steadily increases. This is a very real and personal experience for teachers.

With these thoughts in mind, I have moved from criticism to remedy, from “as they are” to “as they might become.” I first identified experts in the fields of organisational management, lesson observation, education data and education policy design whom I felt might be in a position to advise me. I then wrote emails outlining my motivation and asking for their help. The response has been humbling and concerning.

Speaking with Prof Rob Briner was a privilege. Clear, insightful and pragmatic. I could barely keep pace with him, making copious notes and having to ask Rob to repeat himself quite a few times. If only I had recorded the conversation (now there is an idea, Julian). Though I spoke to Rob second, his insights provided a four-point framework from which to address our thinking about performance review. On reflection, these points are commonly raised in reports by David Marsden (Professor of Industrial Relations).

Prof Rob Briner

(Professor of Organizational Psychology in the School of Business and Management)

I doubt you want a re-run of our conversation. The nub of it is these four questions. I am not sure the order matters.

1. What is it that you are trying to achieve?

Can you measure, and are you measuring, what matters? What matters in schools? Student outcomes, staff commitment to their own professional development, teacher effectiveness (hold that thought), stakeholder satisfaction.

For the sake of this summary, let’s go with just two of the more common objectives (an objective measure being something of a rarity) – student outcomes and lesson observations.

2. Can the reviewee impact upon the outcome you intend to measure?

If employees do not feel they are capable of improving performance through their own efforts, PRP may in fact serve as a disincentive, with employees choosing to expend minimal effort rather than compete.

As a teacher, I would wholeheartedly want the answer to be a definite yes for both student outcomes and graded lesson observations. However, it is nowhere near that simple.

Student outcomes

Let’s start by highlighting just a few very obvious variables.

What student outcome?

Right, let’s rattle off a few for starters: prior attainment, group size, selected / core, curriculum allocation, curriculum placement. I’ve been told to timetable maths and English in the first double of every day for Year 11, and David Rogers has quipped that “Geography is always taught last double period on Friday afternoon.” If you teach core, three groups; if you teach an elective, six groups – how can this be comparable?

You get the picture.

Who taught them last year, or the year before that (if it is now a three-year GCSE programme)? When teaching at Tauntons Sixth Form College, I even heard challenges over poor coursing, or coursing up: putting students onto Level 3 programmes with resits.

What if I teach a group without a terminal exam measure? How reliable are the previous teacher assessments?
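To make the comparability problem concrete, here is a toy sketch of my own (not a model from Rob or anyone cited here; every number is invented): two hypothetical teachers add identical progress, yet raw outcomes flatter the one with the small, selected, high-prior-attainment group, while a crude value-added adjustment closes the gap.

```python
# A toy illustration with invented numbers: two teachers add the SAME
# progress, but raw outcomes make one look far stronger.
import numpy as np

rng = np.random.default_rng(42)

# Teacher A: small, selected elective group with high prior attainment.
# Teacher B: large core group with mixed prior attainment.
prior_a = rng.normal(6.5, 0.5, size=12)
prior_b = rng.normal(5.0, 1.2, size=30)

# Both teachers add one grade of progress on average, plus noise.
outcome_a = prior_a + 1 + rng.normal(0, 0.7, size=prior_a.size)
outcome_b = prior_b + 1 + rng.normal(0, 0.7, size=prior_b.size)

# Raw averages: Teacher A looks far 'better'.
print(f"Raw mean outcome: A={outcome_a.mean():.2f}, B={outcome_b.mean():.2f}")

# A crude value-added view: regress outcome on prior attainment across both
# groups, then compare each teacher's average residual. The gap disappears.
slope, intercept = np.polyfit(np.concatenate([prior_a, prior_b]),
                              np.concatenate([outcome_a, outcome_b]), 1)
va_a = (outcome_a - (slope * prior_a + intercept)).mean()
va_b = (outcome_b - (slope * prior_b + intercept)).mean()
print(f"Mean value-added residual: A={va_a:+.2f}, B={va_b:+.2f}")
```

Even this adjustment only scratches the surface: it says nothing about curriculum placement, timetabling or who taught the group last year.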

Graded lesson observations

No.

3. Is the measurement valid?

Student outcomes – no

The measure is relatively valid, a little less reliable. Assessment itself is fallible. However, I think we would have to accept that terminal exams are as good a measure as we can expect.

Setting individual student targets – absolute nonsense, but it goes on.

Setting group targets – regrettably, targets are troublesome and a full explanation is planned (another expert contributor, and part 3).

Setting year group targets, whole-school targets – you can appreciate why this needs a fuller explanation.
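Ahead of that fuller explanation, here is a minimal sketch (the prediction error is an assumed figure, not one from any cited study) of why a target that is statistical nonsense for one student can still be tolerable for a cohort: the uncertainty around a group mean shrinks with the square root of group size.

```python
# A minimal sketch with an assumed error figure: if the error in predicting
# a single student's grade has a standard deviation of about one grade, the
# standard error of a group's MEAN grade shrinks as 1/sqrt(n), so group-level
# targets are far less noisy than individual ones.
import math

sigma = 1.0  # assumed sd of prediction error for one student, in grades

for n, label in [(1, "one student"), (30, "a class"),
                 (200, "a year group"), (1500, "a school")]:
    se = sigma / math.sqrt(n)
    print(f"{label:>12} (n={n:>4}): 95% interval on the mean ~ ±{1.96 * se:.2f} grades")
```

On these assumed numbers, an individual target carries roughly ±2 grades of noise, while a year-group mean carries a fraction of a grade. Which is why individual targets are nonsense and aggregate targets merely troublesome.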

Graded lesson observations – no

Absolute nonsense. Matt O’Leary (Professor of Education at Birmingham City University) very kindly contributed to my research and understanding.

As before, let’s rattle off a few obvious concerns. Learning is invisible; observers observe performance, not learning. Engagement is not learning. Observers change the dynamics of the lesson. Are lessons even the most informative unit of learning to observe? Teaching is much more than delivery; arguably, planning is more important. Instruction is not the most influential component of schooling on student outcomes. And observers are typically senior teachers who, as observers, find it difficult to “turn off” their preconceptions of effective teaching.

Even an effective teacher may not understand fully which bits of their practice really make a difference. – Rob Coe

Quite simply, lesson observations are flawed. Matt makes the point very clearly and critically:

Heightened levels of anxiety and stress surrounding performance management tools such as assessed lesson observations continue to occur across colleges and schools, despite the findings from the largest study on the use and impact of lesson observations in the UK being widely known and publicised.

Prof Matt O’Leary also recommended:

O’Leary, M. and Wood, P. (2017) ‘Performance over professional learning and the complexity puzzle: lesson observation in England’s further education sector’, Professional Development in Education, 43(4), pp. 573-591. http://www.tandfonline.com/eprint/qBwnjN224uChRShvgK3n/full

I want to take it for granted that you are on board; that you know lesson observations are flawed. To be absolutely clear, not even a $50 million study gets you observer reliability. Despite all the effort and funding, the Bill and Melinda Gates Foundation (2012) ‘Ensuring Fair and Reliable Measures of Effective Teaching’ study could not attain inter-rater reliability.

Using Ofsted’s categories, if a lesson is judged ‘Outstanding’ by one observer, the probability that a second observer would give a different judgement is between 51% and 78%. – summarising Rob Coe

Even when you know which teachers are ‘effective’ and which ‘ineffective’ (from value-added measures), observers are not able to identify effectiveness. Fewer than 1% of those judged to be ‘Inadequate’ are genuinely inadequate; of those rated ‘Outstanding’, only 4% actually produce outstanding learning gains; overall, 63% of judgements will be wrong.

Having no information is, apparently, better than having prior knowledge of teacher effectiveness.
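Part of what drives figures like these is a base-rate problem. As a hedged illustration (the base rate, hit rate and false-positive rate below are my assumptions, not Coe’s figures), Bayes’ rule shows why most ‘Inadequate’ judgements must be wrong when genuinely inadequate teaching is rare.

```python
# A base-rate sketch with assumed numbers (not Coe's): when genuinely
# inadequate teaching is rare, even a reasonably accurate observer is
# usually wrong when they judge a lesson 'Inadequate'.
base_rate = 0.01       # assume 1% of lessons are genuinely inadequate
sensitivity = 0.60     # assumed P(judged Inadequate | genuinely inadequate)
false_positive = 0.10  # assumed P(judged Inadequate | actually adequate)

# Bayes' rule: P(genuinely inadequate | judged Inadequate)
p_judged = sensitivity * base_rate + false_positive * (1 - base_rate)
ppv = sensitivity * base_rate / p_judged
print(f"Of lessons judged 'Inadequate', only {ppv:.1%} genuinely are.")
# ~5.7% on these assumptions; with a lower base rate or a noisier
# observer the figure falls below 1%, as in the findings quoted above.
```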

4. What are the unintended consequences of your actions?

All decisions have consequences, including no decision. Rob talked at length about the unintended consequences of performance review and the “perverse disincentives” of performance-related pay, citing the example of call centre employees cutting customers off to meet call targets, whilst at the same time harming business reputation and customer loyalty. He also asked: when is a good teacher good enough? I do not have the answer; however, these are important questions to consider.

Here is Rob’s final critical reflection, a viewpoint that summarises my investigations so far.

At a time when teachers, and teaching, are encouraged to be more research and evidence based, the management of education is ignoring the organisational management research.

If you think this is unjust criticism then, unscientific as it is, this crass Twitter poll would suggest that far too many teachers are about to be let down in the coming weeks’ performance review close.

I do not yet have a handle on “what ought to be.” That said, “what ought to be” is not prescribed or mandated. Dee Hock suggested that it will be best achieved by acting as if “it were already true.”

References

Bill and Melinda Gates Foundation (2012) ‘Ensuring Fair and Reliable Measures of Effective Teaching’, Measures of Effective Teaching (MET) Project research paper, January 2012.

‘Do We Know a Successful Teacher When We See One? Experiments in the Identification of Effective Teachers’.
