POSTED BY: Jim Ittenbach | September 18, 2009
If Mark Twain were alive today he might rephrase his frequently cited observation about everyone talking about the weather but not doing anything about it to say, “Everyone reads or watches weather forecasts, but many people don’t understand them.”
He might, because new research from the University of Washington indicates that only about half the population knows what a forecast means when it predicts a 20 percent chance of rain.
Writing in the Bulletin of the American Meteorological Society, the researchers said the confusion comes because people don’t understand what the 20 percent chance of rain actually refers to. Many people think it means that it will rain over 20 percent of the area covered by the forecast or for 20 percent of the time period covered by the forecast, said Susan Joslyn, a UW cognitive psychologist and senior lecturer.
“When a forecast says there is a 20 percent chance of rain tomorrow, it actually means it will rain on 20 percent of the days with exactly the same atmospheric conditions,” she said. “With the exception of the probability of precipitation, most weather forecasts report a single value, such as a high temperature of 53 degrees. This is deterministic because it implies that forecasters are sure the high temperature will be 53 degrees. But forecasting is probabilistic, and 53 degrees is in the middle of a range of possible temperatures, say 49 to 56 degrees.”
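To make that frequentist reading concrete, here is a minimal, illustrative Python sketch (not part of the study, and the 20 percent figure is just the example from the forecast above): it simulates many days with identical atmospheric conditions and counts how often rain occurs at all.

```python
import random

# Illustrative assumption: "20 percent chance of rain" means that, out of many
# days with identical atmospheric conditions, roughly 20 percent see rain
# somewhere in the forecast area at some point during the day.
PROB_OF_PRECIP = 0.20
N_DAYS = 10_000

rainy_days = sum(random.random() < PROB_OF_PRECIP for _ in range(N_DAYS))
print(f"Rained on {rainy_days / N_DAYS:.1%} of {N_DAYS:,} identical days")

# Note: the forecast says nothing about what fraction of the area or of the
# day gets rain on those rainy days -- only how often rain occurs at all.
```

The common misreadings the researchers describe (rain over 20 percent of the area, or for 20 percent of the day) are precisely what this kind of forecast does not specify.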
To probe people’s understanding of the more familiar probability of precipitation, a technique used in public forecasts since the late 1960s, Joslyn and her colleagues tested more than 450 Pacific Northwest college students in three experiments.
The first experiment evaluated forecasts of either a low or a high percentage chance of precipitation accompanied by a series of icons, or “precipicons,” that were visual representations of the chance of rain. In addition to using a simplified cloud icon that is shown in many TV and newspaper forecasts, the researchers also used pie charts and bar graphs to indicate the chances of rain. Each student only saw one icon and forecast, and filled out a questionnaire.
Two of the questions asked how much of the time it would rain and over approximately what area of the region it would likely rain that day. The correct answer to both questions was “can’t tell from this forecast,” and only 43 percent of the students responded correctly to both. Those who responded incorrectly were more likely to indicate they would wear a hooded jacket or carry an umbrella, actions suggesting they thought they were getting a deterministic forecast for rain.
The procedure in the second experiment was similar, except that the students were asked open-ended, rather than multiple-choice, questions about similar weather forecasts. When asked to explain the probability of precipitation, few of the students could do so, and the percentage who answered correctly was close to that in the first experiment.
The third experiment was similar to the first. This time the participants received one of three forecasts. On one, the conventional chance-of-rain forecast appeared. On another, the chance of rain as well as the chance of no rain was given. On the third, the pie chart icon appeared beneath the chance of rain.
This time 52 percent of the students knew that the forecast did not predict how much of the time and over what area it would rain. In addition, the odds of making an error were significantly lower if the phrase about a chance of no rain was included in the forecast.
Joslyn said the third experiment provides the first evidence that percent of time and area misconceptions can be weakened by an explicit statement that there is a chance it won’t rain.
She added that if the misunderstandings uncovered in this research exist among a college-educated group of students from the Pacific Northwest, where it frequently rains, then similar errors probably occur in similar, or larger, numbers elsewhere among the general public.
The researchers believe that the errors are caused by the difficulty in making decisions when uncertainty is involved.
“In dealing with a forecast about rain, people must simultaneously consider several hypothetical outcomes, their corresponding levels of uncertainty and their consequences. For some people it may be easier to commit to a single outcome, reducing cognitive load, and proceed as though the uncertainty has been resolved. In some cases they may not be aware of this simplification,” Joslyn said.
The research also has financial implications: misinterpreting forecast uncertainty can affect weather-related decisions such as school closures, agricultural crop protection, and highway and road clearing during storms. A person who thinks that a probabilistic forecast means the weather event will occur (over some percent of the area or for some percent of the time) may be more likely to take expensive precautionary action than someone who realizes there is only a chance of that event occurring.
The National Science Foundation funded the research. Co-authors of the study were Limor Nadav-Greenberg, a UW psychology doctoral graduate who is now in Israel, and Rebecca Nichols, a former UW undergraduate who is now a graduate student at the University of California, Irvine.