One of the biggest standouts from my analysis of nearly 400 restructures between 2018 and 2021 is the fact that not a single document explains how the decision-makers will check whether their chosen method (= restructure) has the desired effects (= objectives met) in practice. Not a single one!
Before I cite this fact in academic papers, I wanted to make sure that I'm not interpreting unfairly – restructure documents don't tell the whole story, after all. So I sent OIAs to all public service organisations in my scope (61 of them… phew) and asked them to release any reports and assessments they had made following a restructure, and/or to explain what informal means they use to gauge the after-effects. This week I had time to analyse their responses, and what I found was remarkable (to this nerd, at least).
First off, I need to mention the most audacious response I received, which came from the Natural Hazards Commission (the artist formerly known as EQC). They wrote:
“Each restructure or realignment process was in and of itself, an indication that the objectives of the previous process had been achieved. This informal assessment of the success of each restructure provided confirmation that the objectives needed to reach the next realignment stage, had been met.”
I think my reaction when reading this is best expressed like this:
Bullet-proof logic, hard to argue with.
Everyone else was a lot more reasonable in their responses. Most organisations do not measure the effects of restructuring specifically, though some mention “informal assessments”: their leaders “monitor organisational performance on an ongoing basis”, teams are trusted to have “informal discussions”, and some say that teams run “lessons learned” workshops.
Like so much around restructuring, whether this is good or bad is contextual. From a restructure document alone, the story behind it is often completely unclear, and I myself have seen restructures that really weren't a big deal and where monitoring afterwards would have felt like overkill.
I’m thinking more and more that restructuring is about thresholds.
But in the end, every restructure spends public money, and we usually expect the public service to be accountable for cost-benefit arguments when it does. The responses make it very clear that restructuring is simply not seen as an activity that requires this lens: the cost, and the benefit. And I think that's a problem.
Lo and behold, four organisations (so far – I have a few stragglers left, full disclosure) have released reports of structured assessments made following a structural change: the DIA, DOC, IRD and ACC.
That means that for the 484 restructures I counted between 2018 and 2021 in New Zealand's public service, nine reports on their after-effects were prepared.
I have my fingers crossed for a few more, but the proportion is already clear: nine reports for 484 restructures is not even two per cent.
So let's look at what I learned from them:
No one will be surprised to hear that most of these reports were prepared by external consultancies. The reports from a larger consultancy appear to have drawn only on conversations with managers, and they come to much more favourable – and less detailed – conclusions as to the effects of the restructures: they generally find that the new organisation design works better than the previous structure, and note concerns about capacity and
“challenges in defining clear accountability and managing finite resources.”
I read that as saying that the idea works, but the practice doesn't, because there are not enough people to do the new jobs. Plus, restructuring is all about defining roles and responsibilities and how they hang together in an org chart – yet accountabilities still remain unclear when the rubber hits the road. Mmh.
A smaller agency produced several reviews whose quality I found quite impressive – they used multi-stage research processes and talked to staff, managers, and stakeholders. Ka pai, imho. In one report, they found:
“Enhanced clarity in branch purpose and work strategy, improved communication practices including hui and stand-ups, and positive developments in collaboration within leadership teams.”
They also found, though, that there was a split between what leaders valued and what worked for them, and what arrived at the coalface with the practitioners, with one employee quoted as saying:
“The purpose feels abstract; therefore, it is hard to connect with. (…) There was a sense that the purpose works strategically as it ‘tells a great story,’ but practically it is hard to connect with and work towards.”
And:
“The structure was implemented but the ways of working to make it effective have not been fully developed or communicated. The siloed operation of teams undermines the operating model and what it can deliver as a whole.”
A report from another organisation finds a similar gap between the strategic idea of a restructure and what turned out to be possible in practice:
“Disappointing being told that we would be working in an environment where [a certain function] could work across different roles... but there was a lack of work done pre-alignment to identify this being possible.”
And yet another – the most impressive report of them all, as it featured REAL metrics that showed how far the automation of a process had come – diagnosed that:
“Initial FTE modelling indicated the project may not deliver sufficient benefits, but time pressures led to optimism bias around automation levels.”
Optimism bias - such a good point, and a frequent bedfellow of restructuring, I believe.
I'm truly grateful for these reports – they are not a common feature of restructuring practice in Aotearoa, so some people must have gone the extra mile to commission them. They tell the same story that I see across all of my research: staff do their best to see the positives, e.g. in the new operating model, but
the top-down, manager-exclusive, waterfall-staged, sunk-cost, tunnel-visioned way in which we handle restructures leaves a massive disconnect between strategy and practice.
And let's be clear: the expectation cannot be that a post-restructure report should say: yip, all targets were met, every problem is solved, everything works perfectly now. Of course that's not the goal; public service organisations are complex beasts, and unintended consequences and miscalculations are completely understandable – in principle. We cannot expect change to be a one-off event – and honestly, I don't think anyone seriously does.
But we have every reason to ask: how much of this is avoidable?
What miscalculations could have been avoided had practitioners been part of the solution and of the design of the change in the first place? The thing that restructuring should be best suited for is clarifying roles and responsibilities within a team – that's what most of the formal documentation is focused on. Yet if you assess the after-effects, you can still get feedback like:
“Lines are blurred between team functions... Everyone I talked to was very confused about what their role entailed.”
The four organisations that have commissioned such reports have had the chance to use that feedback and do better: take practical actions and keep checking – hotter or colder?
Good on them, I say. THAT is what good structural change should be.
There is no genius solution to modernise, develop or even just improve what doesn’t work in one big swing. But I’ve said it before and I’ll say it many more times: if you don’t check, you’ll never know.