Humans cannot plan. Some people have no sense of the passage of time, or of the basic discipline of working backwards to determine when they should start getting ready to leave the house in order to be somewhere on time.
Then there are the professionals who know much better than that: people who make a living out of activities which chiefly consist of working out how to get things done, and how to get them done on time and on budget. And yet still they do not plan effectively. Why is that?
Do We Plan Badly?
As I indicated, this is my basic assumption. I believe it to be self-evident, but for the sake of completeness I will review the evidence. Obviously I am particularly interested in the planning of IT projects in law firms; but I will start with generic planning, move to IT planning in general, and then consider legal IT projects.
The world’s largest projects have the world’s best experts in their field involved in the planning thereof, and yet these megaprojects have some of the worst planning records.
We will never know whether the great pyramids were behind time and over budget, but we certainly know a great deal about more recent megaprojects that have gone woefully over budget and over their planned timescales: the Channel Tunnel, the Sydney Opera House, Concorde, the Scottish Parliament, and so on. Spectacular examples of cost overrun are the Sydney Opera House at 1,400 per cent and the Concorde supersonic aeroplane at 1,100 per cent. Many other such projects have been delayed by a factor of two or three, and have suffered cost overruns of between five and ten times.
If we turn to IT projects in particular, things are even worse, especially in the public sector: TAURUS, the Stock Exchange share settlement system; the NHS 'Connecting for Health' system; the Libra magistrates' courts system; and so on. According to the Standish Group's Chaos survey, only 28 per cent of all IT projects in the US, in government and industry alike, hit their targets for budget, functionality and timeliness.
Finally, we need to turn the microscope on ourselves and consider the state of law firm IT projects.
I think we can all use our own experience to admit that very few IT projects in law firms are completed on time and on budget. I have certainly seen many, and been involved with several; however, this is not an exercise in naming and shaming, so I will not go into any more detail. Suffice it to say that inaccurate estimation of timescales and costs is as endemic in the legal IT sector as it is in all the others.
In relation to these issues, when planning a project, best practice has it that we do the following: first, break down the project into its constituent elements, and break down each element into tasks. Then, analyse each task by reference to the amount of effort required to complete it, and the cost of such effort, and the amount of elapsed time it will take to complete. In the course of doing this we need to ensure that the resulting analysis takes into account any dependencies between the various tasks and elements. Finally, estimate the costs of any necessary hardware, software and other external spend, such as vendor support and consultancy.
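To make that breakdown concrete, here is a minimal sketch in Python; the task names, day rates and external spend figures are entirely invented, and it simply rolls per-task effort and resource cost, plus external spend, up into an overall estimate.

```python
# Minimal sketch of the breakdown described above: elements/tasks with effort,
# resource cost and dependencies, plus external spend. All figures are invented.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    effort_days: float                  # estimated effort to complete the task
    day_rate: float                     # cost per day of the resource doing it
    depends_on: list = field(default_factory=list)

tasks = [
    Task("commission hardware", 10, 600),
    Task("system configuration", 25, 850, depends_on=["commission hardware"]),
    Task("data migration", 20, 850, depends_on=["system configuration"]),
    Task("develop training", 15, 500, depends_on=["system configuration"]),
]

external_spend = {"hardware": 40_000, "software licences": 25_000, "vendor consultancy": 30_000}

labour_cost = sum(t.effort_days * t.day_rate for t in tasks)
total_cost = labour_cost + sum(external_spend.values())
print(f"labour: {labour_cost:,.0f}  external: {sum(external_spend.values()):,.0f}  total: {total_cost:,.0f}")
```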
There is scope for under-estimation at each stage of this process, such as forgetting constituent elements or tasks, or misjudging dependencies; but here we are primarily concerned with under-estimating the length of time it will take to get something done, as well as the external spend.
These things are closely linked. There is a cost to every resource, whether it is internal or external. The cost of internal resource is naturally less obvious, as it does not involve external spend. The key elements of external spend will be the cost of external resource, such as the daily rates of vendor staff and other contractors, and the hardware and software elements.
Things largely go wrong simply because we take an over-optimistic view of how long something should take to do; in turn, the vendors involved with the project do the same thing, thus compounding the problem.
My view is that there is a word used in the previous paragraph which is an important factor in the constant under-estimation of project effort: it is the word 'should'. We tend to take a view as to how long something should, or ought to, take, rather than how long it is actually likely to take. Such elementary misjudgements obviously do not only occur in large projects; anyone who has had the builders in for any significant home alteration will have learned that almost every such project exceeds the builder's initial estimate of cost and timescale.
Why Do We Plan Badly?
In a 1994 study, 37 psychology students were asked to estimate how long it would take to finish their senior theses. The average estimate was 33.9 days. They also estimated how long it would take "if everything went as well as it possibly could" (averaging 27.4 days) and "if everything went as poorly as it possibly could" (averaging 48.6 days). The average actual completion time was 55.5 days, with only about 30 per cent of the students completing their theses in the time they had predicted. So under-estimation of the effort required to complete tasks is simply human nature. Many explanations, some more scientific than others, have been offered as to why these under-estimations occur, and (more remarkably) recur with the same practitioners; broadly, they fall into three groups: technical, political-economic and psychological.
Technical
This heading covers plain omissions, negligence and simple carelessness. I am sure this explains a lot of planning errors in the amateur and high-street arenas, such as graduate work planning and self-employed builders; however, apart from the occasional catastrophe, I am willing to believe that plain incompetence does not usually explain planning shortfalls in major legal IT projects.
Political-Economic
The key factor under this heading is known as ‘strategic misrepresentation’ and it is my personal favourite, being (simply put) a euphemism for lying. Essentially it covers those planning estimates of time and money that are deliberately under-estimated to allow the project to ‘sell’.
If the project is subject to tender from third parties then this is a common occurrence: the vendor will gauge what the budget is for the project, make assumptions about what the competition will be bidding, and price accordingly, often seriously under-estimating the capacity of the main server and/or the necessary vendor support effort. Some legal IT vendors are notorious for doing this. Otherwise, it may be a purely internal project, but the sponsor or the IT team are aware that if the budget and/or timescale goes beyond certain limits then it will not receive approval. In such circumstances they may decide to 'under-egg' the estimates in order to gain approval "for the good of the firm".
Psychological
This is the nub of the issue; having ruled out incompetence and malevolence, we come to what is surely the most common explanation for project under-estimation: the inbuilt human bias towards optimism.
Various terms have been employed by analysts to explain the common under-estimation of cost and time (and over-estimation of benefits) in most projects. They include straightforward wishful thinking, the 'planning fallacy', cognitive bias and optimism bias.
Let us go back to the 1994 study of the psychology students and the related experiments: empirically, the planning fallacy is a recognised cognitive bias; in other words, a natural, species-wide distortion of human perception.
Optimism bias is a similar and related issue, possibly more introspective, or possibly just another way of looking at the same problem. Studies have found optimism bias at play in many different kinds of judgement: second-year MBA students overestimated the number of job offers they would receive and their starting salaries; students overestimated the scores they would achieve in exams; professional financial analysts consistently overestimated corporate earnings; and so on. All of these are attributable to what is known as positive illusion: an unrealistic assessment of one's own relative abilities.
When one analyses projects retrospectively to see where the optimistic inaccuracies due to all these psychological factors have arisen, the following tends to emerge. First, there is a mental discounting of 'off-project' risks: those formulating the plan may eliminate factors they perceive to lie outside the specifics of the project. Second, planners tend to focus on the project itself and under-estimate the time lost to sickness, holidays, meetings, delays waiting for approvals and other "overhead" tasks. Finally, planners tend to gauge how long a task should take if all the circumstances and related activities go swimmingly, rather than take a more realistic view of what is most likely to happen, namely that 'real life' will intervene.
This results in a plain lack of adequate contingency planning. Consider the meaning and use of the word 'should' in ordinary parlance: it implicitly carries within it the one thing we do not want when trying to plan effectively, namely optimism.
In other words, we should be able to do it in six weeks, but in practice we should also know that we will probably not be able to. Something will happen (illness, absences, third-party delay), or more likely several things will, and it will probably take eight or nine weeks, or whatever. Thus, even at this stage, human nature builds inaccuracies into the plan at the very inception of low-level planning.
Then it gets worse. In a large plan these under-estimated timescales feed into a myriad of other tasks, all of which depend on the erroneous forecast, and all of which are themselves optimistically forecast, causing a snowballing effect of compounding under-estimation which, taken together, makes for a woefully over-optimistic plan overall.
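The effect is easy to demonstrate with a toy simulation. In the sketch below the four-task plan, the dependency structure and the overrun distribution are all assumptions made up for illustration; the point is that, because the final task cannot start until both preceding branches have finished, the simulated finish date slips well beyond the planned critical path even though each individual overrun is modest.

```python
import random

# Hypothetical plan: optimistic estimates in days. configure_app and migrate_data
# both depend on build_server; train_users depends on BOTH of them.
plan = {"build_server": 10, "configure_app": 15, "migrate_data": 12, "train_users": 8}

def actual(estimate):
    # Overruns are skewed: tasks rarely finish early and often finish late.
    return estimate * random.triangular(0.95, 2.0, 1.25)

def simulate_finish():
    build = actual(plan["build_server"])
    configure = build + actual(plan["configure_app"])
    migrate = build + actual(plan["migrate_data"])
    # The merge point: the later of the two branches drives the start of training.
    return max(configure, migrate) + actual(plan["train_users"])

planned = plan["build_server"] + max(plan["configure_app"], plan["migrate_data"]) + plan["train_users"]
runs = sorted(simulate_finish() for _ in range(10_000))
print(f"planned critical path: {planned} days")
print(f"median simulated finish: {runs[len(runs) // 2]:.1f} days")
```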
What Can We Do To Plan Better?
These, then, are the challenges: what can we do to improve our skills at forecasting projects? We should be able to do better, or at least to improve over time.
Here are some possible solutions that have been put forward for improving planning: firstly, getting all those involved in project planning to think in terms of 'actually likely to' instead of 'should'; secondly, employing enforced contingency and/or reference class forecasting; and finally, remembering Hofstadter's Law.
The first approach is to demand that your planners avoid forecasting as if we lived in the best of all possible worlds, and instead make a realistic estimate for each constituent stage of the project plan, taking into account that some dependent tasks will not occur on time. Then, when they come back with their plans, analyse the language they use when presenting their forecasts and, if the word 'should' appears, ask them to reconsider their estimates by specific reference to the worst-case scenario.
A related stratagem, 'enforced contingency', is to get the planners to add generic contingency to the project (in relation to both effort and spend), either to each task/spend item as appropriate or to the project as a whole. This is surprisingly rarely done. How often do you see a contingency task in each activity of a large project, or a contingency activity added to the bottom line?
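As a minimal illustration of what enforced contingency could look like (the task names and percentages below are assumptions, not recommendations), a per-task uplift can be applied first and a further project-level contingency added on top:

```python
# Sketch of enforced contingency: every task estimate gets a generic uplift,
# and the project total gets a further contingency line. Figures are invented.
TASK_CONTINGENCY = 0.15     # assumed 15% added to every task estimate
PROJECT_CONTINGENCY = 0.10  # assumed further 10% on the project total

estimates_days = {"commission hardware": 10, "system configuration": 25,
                  "data migration": 20, "develop training": 15}

with_task_contingency = {task: days * (1 + TASK_CONTINGENCY)
                         for task, days in estimates_days.items()}
total = sum(with_task_contingency.values()) * (1 + PROJECT_CONTINGENCY)

print(f"raw estimate: {sum(estimates_days.values()):.0f} days")
print(f"with contingency: {total:.1f} days")
```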
Slightly more scientific is the concept of 'reference class forecasting'. Based on the work of Nobel Laureate Daniel Kahneman, this is the practice of basing current plans on a 'reference class' of similar previous actions. Reference class forecasting for a specific project involves the following three steps:
Identify a reference class of past, similar projects – i.e. find similar projects and project activities;
Establish a probability distribution for the selected reference class for the parameter that is being forecast – work out how wrong you, or others, have been in the past;
Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project – and change your forecasts accordingly.
As a simple example, if we were going to apply reference class forecasting to the future projection of when a psychology student says they will finish their thesis, then (assuming that the 1994 students are representative) we could get a fairly accurate result by either taking the optimistic estimate and multiplying by two, or taking the more pessimistic estimate and adding 20%.
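Using the figures quoted from the 1994 study, a toy version of that correction might look like the following; here the 'reference class' is simply the group of thesis students, and the uplift is the ratio of their average actual completion time to their average estimate.

```python
# Reference class correction derived from the 1994 thesis study figures above.
reference_estimate = 33.9   # average predicted completion time (days)
reference_actual = 55.5     # average actual completion time (days)
uplift = reference_actual / reference_estimate   # roughly 1.64

def corrected_forecast(new_estimate_days):
    """Scale a fresh, optimistic estimate by the historically observed bias."""
    return new_estimate_days * uplift

print(f"uplift factor: {uplift:.2f}")
print(f"a '30 day' thesis is more realistically {corrected_forecast(30):.0f} days")
```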
A more complex, and more relevant, example would be for the IT department to undertake a post-mortem analysis of each completed project and compare the final timescales and project spend with the original forecast. Ideally, this would be done at the level of replicable tasks or phases, such as commissioning hardware, developing training, system configuration, change management, data migration and so on. The resulting discrepancy can be turned into a bias that can be applied to future similar forecasts in order to uplift them to more realistic levels.
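A rough sketch of that post-mortem exercise follows; the historical figures and phase names are invented, and the uplift per phase is simply the ratio of actual to forecast days, applied to the next project's estimates for the same phases.

```python
from collections import defaultdict

# (phase, forecast_days, actual_days) taken from completed-project post-mortems;
# these figures are purely illustrative.
history = [
    ("data migration", 20, 31), ("data migration", 15, 22),
    ("system configuration", 25, 30), ("system configuration", 30, 41),
    ("develop training", 10, 12),
]

totals = defaultdict(lambda: [0.0, 0.0])
for phase, forecast, actual in history:
    totals[phase][0] += forecast
    totals[phase][1] += actual

uplift = {phase: actual / forecast for phase, (forecast, actual) in totals.items()}

new_forecast = {"data migration": 18, "system configuration": 28, "develop training": 9}
adjusted = {phase: days * uplift.get(phase, 1.0) for phase, days in new_forecast.items()}
for phase, days in adjusted.items():
    print(f"{phase}: forecast {new_forecast[phase]}d -> adjusted {days:.0f}d")
```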
Even more relevant for law firms is the proposed use of reference class forecasting in the matter budgeting process: taking matter complexity criteria into account to gradually improve lawyers' ability to estimate likely fees. This topic is dealt with in more detail in my second article, on Matter Budgeting.
And what is Hofstadter's Law? Well, it is a self-referencing time-related adage, coined by Douglas Hofstadter in 1979 and named after himself:
It always takes longer than you expect, even when you take into account Hofstadter’s Law.
In other words, the chances are that we will never get this forecasting stuff absolutely right – the innate limitations of our species and the real world will interfere too much; but we should be able to do much better than we do at present.