99 ways to think more clearly?
by Anders Persson
Another “international bestseller” that, just like Daniel Kahneman’s book on fast and slow thinking, for once deserves its name is Rolf Dobelli’s “The Art of Thinking Clearly” from 2012, now in paperback. It is also available in French (“L’Art de penser clairement”) and in German. The German edition, “Die Kunst des klaren Denkens”, is not a translation but the original, since Dobelli is Swiss!
The book consists of 99 chapters of 2-3 pages each, every one describing a typical way of drawing the wrong conclusions. As with Kahneman’s book, its contents relate strongly, in my view, to the ways forecasters in meteorology (and perhaps hydrology) can go wrong. Already in chapter 1, “Survivorship bias”, he points out that we see more success than failure in our business, simply because the latter disappears from our minds. In chapter 2, “The Swimmer’s Body Illusion”, he warns against biased selections that will affect the results we are on the lookout for.
Wisdom afterwards
Chapter 14 is about the “Hindsight bias”: how, in retrospect, everything seems so clear and inevitable:
“The Hindsight bias makes us believe we are better predictors than we actually are, causing us to be arrogant about knowledge and consequently to take too much risk.”
Overcoming the “Hindsight bias” is not easy and, according to Dobelli, even those who are aware of it fall for it just as much as everyone else. But there is a glimmer of hope, and the main advice is already in the chapter’s heading: “Why You Should Keep a Diary!”
“Write down your predictions and compare them with the actual outcome!”
This was the advice I used to give forecasters at the ECMWF training courses. They had a tendency to see systematic errors or, the reverse, to develop rules of thumb that simply didn’t stand the test. As routine forecasters doing shifts they had few chances to make sophisticated statistical verifications. But just keeping a journal of their main observations of possible systematic (mis)behaviours of the ECMWF forecasts would tell them whether there was anything in their observations.
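As a purely illustrative sketch (the journal entries and the 10 mm threshold below are invented by me, not any ECMWF practice), a few lines of Python show how such a diary can be turned into a rough verification of one’s own forecasts:

    # Hypothetical forecast diary: (forecast mm, observed mm) for each case
    diary = [(12, 8), (0, 0), (25, 31), (5, 2), (18, 20)]

    n = len(diary)
    mean_error = sum(f - o for f, o in diary) / n               # positive = over-forecasting
    hits = sum(1 for f, o in diary if (f >= 10) == (o >= 10))   # same side of a 10 mm threshold

    print(f"mean error: {mean_error:+.1f} mm, hit rate: {hits/n:.0%}")

Even such a crude tally, kept over a season, is enough to tell whether a suspected systematic error is real or just a few memorable cases.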
Just being lucky – or unlucky
Yesterday evening I had come to chapter 20, “Never Judge a Decision by its Outcome”, which deals with the “Outcome bias”. What Dobelli says adds to my recent comment about the dangers of drawing conclusions from just one case. In essence: you might have done the “right thing” but just been unlucky. The same applies to the opposite, “success”. You might NOT have done the “right things” but still been LUCKY. That is why autobiographical memoirs by famous and successful people should be read with great care.
As a mathematical example Dobelli brings up three heart surgeons A, B and C who each have to operate on five patients. You could easily replace them with three hydrological forecasters facing five difficult situations. Anyhow, with surgeon A all patients survive, with surgeon B one dies and with surgeon C two patients die. Common sense would deem A the “best” and C the “worst” surgeon. However, as Dobelli shows, the sample is far too small for ANY conclusions to be drawn.
If we assume the statistical survival rate is 80%, then it can easily (?) be found that the probability that, by PURE CHANCE, all patients survive is 33%, that exactly one dies is 41% and that two die is 20%. This leaves 6% for three or more patients dying just by chance. Surgeon A might be the “best” and C the “worst”, but with outcomes whose chance probabilities differ by only some 13 percentage points, this is far from watertight proof that it is the case.
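For anyone who wants to check the arithmetic: these figures are just the binomial distribution with five patients and an 80% survival probability. A short Python sketch (my own illustration, not from the book) reproduces them:

    from math import comb

    n, p = 5, 0.8          # five patients, 80% survival rate

    def prob_deaths(k):
        """Probability that exactly k of the n patients die by pure chance."""
        return comb(n, k) * (1 - p) ** k * p ** (n - k)

    print(f"0 deaths:  {prob_deaths(0):.0%}")                                # ~33%
    print(f"1 death :  {prob_deaths(1):.0%}")                                # ~41%
    print(f"2 deaths:  {prob_deaths(2):.0%}")                                # ~20%
    print(f"3+ deaths: {1 - sum(prob_deaths(k) for k in range(3)):.0%}")     # ~6%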
Too much to choose from?
Now I must go back to the book and read chapter 21, “Less is more”, about the “Paradox of choice”. It quotes an experiment in which customers one day were offered twenty-four varieties of jelly to choose from (with free samples). The next day only six varieties were offered, and sales went up TEN TIMES!
PS: It always takes some days from writing a blog post to having it published. In the meantime I read the book to the end and found many other illuminating examples of questionable thinking with bearing on our sciences. For example, at least five chapters, numbers 11, 13, 41, 87 and 97, relate to the mystery (at least to me) of why forecasters, at least meteorological forecasters, prefer the deterministic computer forecasts to the ensemble system.
Chapters 11 and 13 deal with the availability and story biases: people “prefer information that is easy to obtain” and “shape everything into meaningful stories”. Deterministic forecasts are more easily available, hanging on the walls or found everywhere on the internet, than ensemble products. The consequences of a cold front in the deterministic forecast passing in 6 days, a wave forming on this front the next day and 23 mm of rain falling the day after that appear more easily comprehended and meaningful than a map of irregularly scattered rain probabilities, although the latter has much higher predictive skill than the former.
Chapter 41, on the conjunction fallacy, warns against believing that a more detailed statement (such as the deterministic forecast) is more likely than a less detailed one (such as the smoother information from an ensemble system). A group of experts fell for this error in a test where they ranked the prediction that “oil consumption will decrease by 30%” as less likely than “a dramatic rise in oil prices will lead to a decrease in consumption by 30%”.
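The logic behind the fallacy is simply that a conjunction can never be more probable than either of its parts: the probability of “a price rise AND a 30% drop in consumption” is at most the probability of “a 30% drop in consumption”. A toy calculation with made-up probabilities (mine, not Dobelli’s) illustrates this:

    # Entirely hypothetical probabilities, for illustration only
    p_rise            = 0.30   # P(dramatic rise in oil prices)
    p_drop_given_rise = 0.25   # P(consumption falls 30% | price rise)
    p_drop_no_rise    = 0.02   # P(consumption falls 30% | no price rise)

    p_conjunction = p_rise * p_drop_given_rise                      # 0.075
    p_drop        = p_conjunction + (1 - p_rise) * p_drop_no_rise   # 0.089

    assert p_conjunction <= p_drop   # the detailed story is never the more likely one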
Chapter 87, on personification, reminds us that we tend to be less affected by hearing statistics about the number of starving people in a famine than by seeing a photo of one of the victims. A forecast of a cyclonic depression, beautifully presented in colour graphics, is more striking to our minds than a black-and-white map with a grey-shaded area of “40% probability of > 30 mm”.
Finally, chapter 97 on the fallacy of the single cause. The ensemble tells us there is, in five days’ time, a 70% probability of heavy rain in an area. Just because the deterministic forecast also has heavy rain there does not necessarily mean that its meteorological scenario is the most likely one; there are many ways rain-producing atmospheric systems can develop.
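A toy ensemble count, with all numbers invented for illustration, shows how a 70% probability of heavy rain can be built from several competing scenarios, none of which is individually the most likely outcome:

    # Hypothetical 50-member ensemble: members with heavy rain, grouped by synoptic scenario
    heavy_rain_members = {"frontal wave": 14, "cut-off low": 12, "convective cluster": 9}
    members_total = 50

    p_heavy_rain = sum(heavy_rain_members.values()) / members_total   # 0.70
    for scenario, count in heavy_rain_members.items():
        print(f"{scenario:<20s} {count/members_total:.0%}")           # 28%, 24%, 18%

    print(f"heavy rain overall:  {p_heavy_rain:.0%}")

The deterministic run shows just one of these scenarios, and there is no guarantee it is even the most common one among the members that produce the rain.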