Environment Magazine

January-February 2016

Our Hazardous Environment: A Retrospective

The article and commentaries below are the first in a new Environment Magazine series that will take a look back at seminal articles in the magazine's history that helped define a field of study or offered alternative explorations of new ideas. For each article, we will ask leaders in the field to comment on the topic. This first installment focuses on a trio of articles published in 1978 under the heading Our Hazardous Environment with Christoph Hohenemser and Robert W. Kates as series editors. The first of the trio is reprinted on the following pages and the remaining two can be accessed via our website at www.environmentmagazine.org. Three leaders in the field of hazards management, Baruch Fischhoff, Paul Slovic, and Howard Kunreuther, provide new perspectives on the topic in their commentaries that follow.

Damage in New Orleans from Hurricane Katrina in 2005.


The following commentary is in response to "Our Hazardous Environment", printed in the September 1978 issue of Environment.

Conditions for Sustainability Science

As precursors of sustainability science, the three articles that comprise the “Our Hazardous Environment” series had several features that characterize the field.1 One is that each article had contributors from multiple disciplines: geography, physics, psychology, geochemistry, and law. A second is that each article integrated knowledge of diverse types: theories, observations, principles, and preferences. A third is that each article bridged science and practice, seeking to satisfy the needs of both worlds. A fourth is that each article addressed the concerns of stakeholders with diverse, even conflicting, interests. A fifth is that each article was published in a peer-reviewed scientific journal, Environment. The conditions that allowed the work to have these unusual features are ones needed to nurture sustainability science:2

Collaboration. Multidisciplinary research requires scholars who recognize the limits to their own profession and the possibility of learning from others. Interdisciplinary research requires scholars who realize that those limits and possibilities are not properly understood until the knowledge of their own discipline has been integrated with that of others. That realization does not come easily to sciences dominated by a hegemonic theory (as with much economics) or focused on the experimental study of isolated factors in controlled settings (as with much psychology).

Significantly, each of the three articles is anchored by scholars from two inherently collaborative disciplines: geography and decision science. Operating across scales of time, space, social organization, and ecologies, geography has long sought to accommodate knowledge from diverse sources. Decision science characterizes problems in analytical terms that do not inherently privilege any concern, stakeholder, or kind of information. Absent such collaborative dispositions and methodologies, disciplinary pressures may overwhelm good intentions.

Tornado damage.

Integration. Each article adopts an analytical approach that seeks a balance between ambiguity and hyperprecision. On the one hand, each article defines terms clearly enough for others to verify any calculations and to identify the values embedded in its measures of cost, risk, or benefit, as well as in its selection of models or estimates—when those choices are known to favor particular interests. On the other hand, each article resists the temptation to reduce all valued outcomes to a common unit (e.g., net present economic value, quality-adjusted life years), so as not to obscure the trade-offs among those outcomes or neglect outcomes that are not readily quantified.

Thus, Harriss et al.1 estimate the costs of natural and technological hazards separately, recognizing that different values and policies may pertain to each. They also estimate social costs and mortality separately, rather than trying to monetize human life. They use a broad measure of social cost, which includes losses to property and productivity, as well as investments in prevention and mitigation. They normalize each risk to national conditions (percent of GNP, percent of mortality), in order to treat developing and developed countries equitably. They note neglected items. Thus, the analyses emphasize computability over computation, in the sense that the elements of each analysis are specified clearly enough that one could “run the numbers” were the data requirements satisfied.3 However, summary calculations are not required, lest they be dominated by factors that are readily quantified.

Science and practice. Practical concerns guide the science in each article, whether evaluating and improving the Consumer Products Safety Commission, identifying opportunities to interrupt accident sequences, or setting priorities for risk management. Commitment to that higher cause encouraged an engineering science approach, wherein participants recognize that they need one another in order to make their science useful, just as structural engineers' expertise has little value in designing a bridge without material scientists' expertise to keep it from corroding away.

That faith is strengthened by the belief that science needs practice as much as the other way around. Alan Baddeley, when head of the Medical Research Council Applied Psychology Unit, argued that psychology's intellectual capital came, in large part, from two bridging enterprises.4 Generalized to all disciplinary science, these enterprises are applied basic science and basic applied science. The former examines the ability of theories from basic science to predict events, or even to make predictions, in applied settings. The latter pursues basic science topics identified in those settings—rather than just topics that emerge endogenously from the science itself (e.g., when researchers pursue seeming anomalies in their data).

Diverse stakeholders. Any analysis embodies political and ethical judgments in its choice of outcomes to estimate, as well as in the units used to measure them.5 As a result, an analysis can deliberately or unwittingly favor some stakeholders by choosing outcomes or measures that favor their cause. Each of the three articles seeks to reduce such bias by examining a wide range of outcomes, including those valued by diverse stakeholders. By leaving estimates in their natural units (e.g., deaths, lost species), the analyses do not prejudge the trade-offs among them.

Each article also seeks to increase the transparency of its analyses by translating them into terms meaningful to Environment's broad readership. That translation includes using diagrams designed (implicitly) to fulfill the conditions proposed by Larkin and Simon for being worth ten thousand words: facilitating the three functions of search, recognition, and inference.6 In the context of hazard management, search is locating relevant factors in the complex processes creating and controlling hazards; recognition is identifying options for manipulating those factors; inference is predicting the effects of those manipulations on valued outcomes.

Peer review. One vital difference between science and consultancy (or punditry) is science's insistence on peer review. Scientists need the support of their disciplinary peers to ensure that their claims are consistent with the evidentiary record and properly qualified in terms of their strength and generalizability. However, that quality control can come at the price of a kind of mind control, if satisfying their colleagues makes authors more loyal to the discipline than to the problem. Such loyalty might encourage authors to elide the limits of their science, fearing that the admission would undermine its influence and break faith with their colleagues. Thus, experimentalists may hesitate to acknowledge that they create conditions that exaggerate the effects of the factors that interest them, so as to see better how those factors work. Computational modelers may hesitate to acknowledge that they omit factors that are not readily quantified.

Environment has long provided a home for authors willing to reach across disciplines and connect science and practice. In fulfilling its duty to them, the journal has recruited and trusted reviewers and editors able to exercise the professional judgment needed to help authors accomplish those tasks. As such, it is the rare journal seeking wisdom, as well as accuracy and innovation.

Creating These Conditions

Scientists who share these predispositions still need places to work. Although the authors of these articles had disciplinary training, none wrote from a disciplinary department. Rather, they were in innovative programs at Clark University (including its renowned Graduate School of Geography), federal government offices (Commerce, NASA), and an independent research institute (Decision Research). Their funding came from a National Science Foundation program that supported scientists whose commitment to applied problems drew them outside their disciplinary folds, both to work together and to maintain the sustained relationships needed to understand stakeholders' problems and perspectives.

Such boundary organizations are unstable.7 They need to overcome the centripetal forces that keep institutions circling around their traditional pursuits. They need funders and researchers willing to take these gambles. Those conditions are perhaps most likely to be met when impelled by the urgency of a problem. The Medical Research Council Applied Psychology Unit arose from its invaluable service during World War II. Yet, after a half century of working on a broad suite of problems, it was replaced by a unit with a more focused, conventional mission.8 The American Soldier Project, also prompted by the war effort, transformed American social science, and then was closed.9

However frustrating, these struggles for survival may be productive, if they attract people committed to cause over career (while recognizing that they do still need to make a living). Although sustainability issues are with us for the duration, their identity will change—and, with it, the skills needed to address them. As a result, the health of sustainability science may depend on its ability to recruit new people from traditional disciplines for applied basic research projects—and then return them to their home disciplines with exciting basic applied research ideas, thereby convincing their colleagues that such excursions are good for science, as well as for the world.

ORCID

Baruch Fischhoff http://orcid.org/0000-0002-3030-6874

 

NOTES

 

1. The three articles referred to are: T. Bick and R. E. Kasperson, “Pitfalls of Hazard Management,” Environment 20, no. 8 (1978): 30–42; B. Fischhoff, C. Hohenemser, R. E. Kasperson, and R. W. Kates, “Handling Hazards,” Environment 20, no. 7 (1978): 16–20, 32–37; and R. C. Harriss, C. Hohenemser, and R. W. Kates, “Our Hazardous Environment,” Environment 20, no. 7 (1978): 6–15, 38–41.

2. These issues also arise in two later articles in this series: B. Fischhoff, P. Slovic, and S. Lichtenstein, “Weighing the Risks,” Environment 21, no. 4 (1979): 17–20, 32–38; and P. Slovic, B. Fischhoff, and S. Lichtenstein, “Rating the Risks,” Environment 21, no. 3 (1979): 14–20, 36–39.

3. B. Fischhoff, W. Bruine de Bruin, U. Guvenc, D. Caruso, and L. Brilliant, “Analyzing Disaster Risks and Plans: An Avian Flu Example,” Journal of Risk and Uncertainty 33, no. 1 (2006): 133–51.

4. A. D. Baddeley, “Applied Cognitive and Cognitive Applied Research,” in L.-G. Nilsson, ed., Perspectives on Memory Research (Hillsdale, NJ: Lawrence Erlbaum Associates, 1979), 367–88.

5. B. Fischhoff, “The Realities of Risk-Cost-Benefit Analysis,” Science 350, no. 6260 (2015): 527, http://dx.doi.org/10.1126/science.aaa6516; and B. Fischhoff and J. Kadvany, Risk: A Very Short Introduction (Oxford, UK: Oxford University Press, 2011).

6. J. H. Larkin and H. A. Simon, “Why a Diagram Is (Sometimes) Worth Ten Thousand Words,” Cognitive Science 11, no. 1 (1987): 65–99.

7. J. Parker and B. Crona, “On Being All Things to All People: Boundary Organizations and the Contemporary University,” Social Studies of Science 42, no. 2 (2012): 262–89.

8. http://www.mrc-cbu.cam.ac.uk/history (accessed October 16, 2015).

9. P. F. Lazarsfeld, “The American Soldier—An Expository Essay,” Public Opinion Quarterly 13, no. 3 (1949): 377–404.

Baruch Fischhoff is Howard Heinz University Professor in the Department of Engineering and Public Policy and the Department of Social and Decision Sciences at Carnegie Mellon University, Pittsburgh, Pennsylvania, and a member of the National Academy of Medicine.

Preparation of this article was supported by the National Science Foundation (SES-0949710) Center for Climate and Energy Decision Making and the Swedish Foundation for Humanities and Social Sciences (Riksbankens Jubileumsfond) Program on Science and Proven Experience. That support is gratefully acknowledged. The views expressed are those of the author.


The following commentary is in response to "Handling Hazards: Can Hazard Management be Improved?", printed in the September 1978 issue of Environment.

Understanding Perceived Risk: 1978–2015

Tim O'Riordan made it sound easy when he convinced me to write a brief perspective on hazard management almost four decades after the publication of three seminal articles in Environment on that topic. I agreed, not realizing the breadth of those articles and the relatively narrow focus of my own work and knowledge. Moreover, much has happened in the world of risk since the fall of 1978. Fortunately, Baruch Fischhoff and Howard Kunreuther agreed to join me in reflecting on these developments.

I focus this commentary on the second article, “Handling Hazards: Can Hazard Management be Improved?” by Fischhoff, Hohenemser, Kasperson, and Kates.1 This important article, which can be viewed online at www.tandfonline.com, presented a framework of hazard causation pointing to opportunities for management interventions. Concepts such as risk perception, acceptable risk, and value trade-offs were introduced, along with institutional failures in properly attending to the most serious hazards.

The article by Fischhoff et al. was insightful in its treatment of risk perception, noting that the way we think about and respond to hazards shapes the agendas of public interest groups and politicians, as well as the attempts of laypeople to manage the hazards of their daily lives. The importance of perceived benefits and value trade-offs was also stressed along with the observation that reducing a hazard might conflict directly with other widely held values or political goals. Also noted was the fact that our tolerance of risk varies widely among activities and technologies, and this inconsistency of public values greatly complicates hazard management.

This article was soon followed by another article in Environment, “Rating the Risks,” in which my colleagues and I described early attempts to quantify perceptions of risk and document their implications for hazard management.2 Revisiting this article, I am struck by how harshly we came down on the public. Risk was characterized narrowly in terms of annual fatality rates, and serious public misjudgments of these rates were attributed to lack of knowledge compounded by biases linked to the imaginability and memorability of the hazard. Although the possibility was raised that experts, too, often rely on judgments that might sometimes be biased, we concluded that the public needed to be better informed, to rely less on unexamined judgments, to be aware of the qualitative aspects of hazards that could bias its judgments (e.g., involuntary exposure, emotions), and to be open to new evidence that might alter its risk perceptions. Despite the inaccuracy of public perceptions, we noted that removing the public from the hazard-management process was not feasible in a democratic society.

Almost four decades later, many of the same issues still challenge risk management, though our understanding of them has greatly increased and a more balanced appreciation of the strengths and weaknesses of both expert risk assessments and public perceptions has evolved.3 The method of using numerical rating scales to measure risk perceptions was later named “the psychometric paradigm” and was extended to characterize and assess perceptions in many different ways. Perceived risk and acceptable risk were found to be systematic and predictable. Psychometric techniques seemed well suited for identifying similarities and differences among groups with regard to risk perceptions and attitudes. This research showed that the concept of “risk” meant different things to different people. The public was found to have a broad conception of risk, qualitative and complex, that incorporates considerations such as uncertainty, dread, catastrophic potential, controllability, equity, and risk to future generations into the risk equation. In contrast, experts' perceptions of risk are not closely related to these characteristics. Rather, studies found that experts tend to see riskiness as synonymous with probability of harm or expected mortality. Because the two groups defined “risk” so differently, conflicts between experts and laypeople were common. In this light, it is not surprising that expert recitations of “risk statistics” often did little to change people's attitudes and perceptions.

Over time, it was recognized that there are legitimate, value-laden issues underlying the multiple dimensions of public risk perceptions, and these values need to be considered in risk management decisions.4 For example, is risk from cancer (a dreaded disease) worse than risk from auto accidents (not dreaded)? Is a risk imposed on a child more serious than a known risk accepted voluntarily by an adult? Are the deaths of 50 passengers in separate automobile accidents equivalent to the deaths of 50 passengers in one airplane crash? Is the risk from a polluted Superfund site worse if the site is located in a neighborhood that has a number of other hazardous facilities nearby? Quantitative risk assessments cannot answer such questions.

At much the same time, the technical foundations of scientific risk assessment also came under scrutiny, perhaps because of sharp discrepancies with public perceptions and the frequent conflicts and controversies centered on these differences. Social research challenged the traditional view that dangers result from physical and natural processes in ways that can be objectively quantified by risk assessment. Social scientists argued instead that the measurement of risk is inherently subjective. For example, the nuclear engineer's probabilistic risk estimate for a nuclear accident and the toxicologist's quantitative estimate of a chemical's carcinogenic risk are both based on theoretical models, whose structure is subjective and assumption-laden, and whose inputs depend on judgment at every stage of the assessment process: structuring the risk problem, deciding which endpoints or consequences to include in the analysis, identifying and estimating exposures, choosing dose-response relationships, and so on. Indeed, even the apparently simple task of choosing a risk measure for a well-defined endpoint such as human fatalities is surprisingly complex and judgmental. Table 1 shows a few of the many different ways that fatality risks can be measured. How should we decide which measure to use when planning a risk assessment, recognizing that the choice is likely to make a big difference in how the risk is perceived and evaluated?

The influence of social values on risk perception can also be seen in studies by Kahan and colleagues examining the impact of worldviews and general attitudes toward society and its organization.5 Recognition of the subjectivity and value-laden nature of both technical risk assessments and public views has highlighted the need for an approach to risk controversies that brings public participation into both risk assessment and decision making, in order to make the process more democratic, improve the relevance and quality of technical analysis, and increase the legitimacy and public acceptance of the resulting decisions. Work by scholars and practitioners in Europe and North America laid the foundations for improved methods of public participation within deliberative decision processes that include negotiation, mediation, oversight committees, and other forms of public involvement.6

This need for a participatory approach has been clearly recognized by high-level committees formed to examine risk-assessment practices. Notable was a report by the National Academy of Sciences, “Understanding Risk: Informing Decisions in a Democratic Society,”7 that concluded that risk assessment should be performed as part of an iterative analytic-deliberative process designed to inform risk-management decision making. According to the report, each step of the process should have an appropriately diverse participation or representation of the spectrum of interested and affected parties, decision makers, and risk-assessment specialists.

Another direction taken by work within the psychometric paradigm was to examine the role of perceptions in determining the degree of impact resulting from an “unfortunate event” (e.g., an accident, a discovery of pollution, sabotage, product tampering).

Table 1.

Some Ways of Expressing Mortality Risks

Deaths per million people in the population
Deaths per million people within x miles of the source of exposure
Deaths per unit of concentration
Deaths per facility
Deaths per ton of air toxic released
Deaths per ton of air toxic absorbed by people
Deaths per ton of chemical produced
Deaths per million dollars of product produced
Loss of life expectancy associated with exposure to the hazard

Source: Slovic (1997).4

Early theories equated the magnitude of impact to the number of people killed or injured or to the amount of property damaged. However, risk-perception studies showed that there were other impacts as well, analogous to the ripples from a stone dropped into a pond. These secondary impacts could be enormous and were found to depend upon characteristics of the hazard, as well as on risk perceptions stimulated by the extensive media coverage that accompanies certain events.

A conceptual framework aimed at describing how psychological, social, cultural, and political factors interact to “amplify risk” and produce ripple effects was developed by Kasperson et al.8 and named “the Social Amplification of Risk.” An important element of this framework is the assumption that the perceived seriousness of an accident or other unfortunate event, the media coverage it gets, and the long-range costs and other higher-order impacts on the responsible company, industry, or agency are determined, in part, by what that event signals or portends. Signal value reflects the perception that the event provides new information about the likelihood of similar or more destructive future mishaps.9 A few of the high-signal events whose ripple effects dwarfed their direct damages were the nuclear-power accidents at Three Mile Island, Chernobyl, and Fukushima, the 1982 Tylenol poisonings, the chemical explosion at Bhopal, India, mad cow disease and the British beef scare, and the terrorist attacks of September 11, 2001. These sorts of events stigmatize products, places, and technologies, triggering avoidance behaviors that are exceedingly costly.10,11

The aftermath of a car crash.

The pace of psychometric research accelerated over the years.12 The early work was replicated and extended with diverse samples of respondents worldwide and with very different sets of hazards. The earliest psychometric studies were distinguished by their comparisons of large numbers of hazards containing items as diverse as bicycles and nuclear power plants. Subsequent surveys have been dedicated to hazards within the same domain, such as natural hazards, medicines, biotechnology, terrorism, nuclear waste, and climate change. However, these quantitative studies left many important questions unanswered. For example, why wouldn't motorists wear seat belts until they were mandated by law? Why do adolescents engage in so many dangerous activities, even those that they supposedly recognize as risky (e.g., smoking cigarettes)? Why do we dread risks from chemicals (except for medicines) but not auto accidents? Why do we fear radiation exposures from nuclear wastes but not from radon in our homes? Why do we value individual lives greatly but become less motivated to protect them as the numbers of people at risk increase?

Answers to these sorts of questions require different methods of analysis—methods that may afford deeper understanding of specific issues rather than broad, but shallow, quantitative assessments. One important approach, pioneered and successfully applied by researchers at Carnegie Mellon University, has used extensive open-ended interviews to construct influence diagrams and “mental models” depicting people's knowledge, attitudes, beliefs, values, perceptions, and inference mechanisms with regard to specific hazards such as radon and global climate change.13–15

This method of questioning was employed to describe and compare the mental models of experts and laypersons regarding the effects of chemicals on human health.16 This research examined the cognitive models, assumptions, and inference methods that comprise laypeople's “intuitive toxicological theories” and compared these theories with the models underlying scientific toxicology and risk assessment. Toxicologists give great importance to considerations of exposure and dose when evaluating chemical risks, whereas laypeople were found to believe that any exposure to a toxic substance or carcinogen, no matter how small, is likely to prove harmful. Another important finding was the divergence of opinion among toxicologists on questions pertaining to the reliability and validity of animal tests for assessing the risks that chemicals pose to humans. The research also documented a strong “affiliation bias” indicating that toxicologists who worked for industry saw chemicals as more benign than did their counterparts in academia and government. In sum, the knowledge gained from these studies of intuitive toxicology appears to provide a valuable starting point around which to structure discussion, education, and communication about assessing and managing risks from chemicals.

Two important findings from the earliest psychometric studies were not adequately appreciated and lay somewhat dormant for two decades, until additional findings led them to be recognized as key links in a theory about the role of affective processes in judgment, decision making, and risk perception. Fischhoff et al.17 noted in passing that, across different hazards, perceived risk declined as perceived benefit increased. They also found that the characteristic most highly correlated with perceived risk was the degree to which a hazard evoked feelings of dread.

Three Mile Island power plant.

A significant step toward understanding the importance of these findings was taken by Alhakami and Slovic,18 who observed that the inverse relationship between perceived risk and perceived benefit was linked to an individual's general positive and negative feelings about a hazard. For example, technologies we like (e.g., x-rays and medicines) are judged high in benefit and low in risk. Technologies that carry negative feelings (e.g., nuclear power and pesticides) are judged low in benefit and high in risk. Reliance on feelings as a guide to risk perception and, more generally, all manner of judgments and decisions was named the affect heuristic.19,20

Although it is tempting to conclude that these studies demonstrate that laypeople's perceptions of risk are derived from emotion rather than reason, and hence should not be respected, such a conclusion is incorrect. Research shows that affective and emotional processes interact with reason-based analysis in all normal thinking and, indeed, are essential to rationality.21 Reliance on “the feeling of risk” was essential to human survival in the course of evolution, and even today, feelings serve as a compass that guides most of our daily decisions. Although analytic thinking is certainly important for decisions involving risk, reliance on affect is a quicker, easier, and efficient way to navigate in a complex, uncertain, and sometimes dangerous world.

This is not to say, however, that our feelings might not, in some cases, mislead us. For example, a rare event will seem much more likely when we are told it will occur to one out of 100 people like us than when we are told we have a 1% chance of experiencing it. The “1%” likely makes us think of a small number. “One out of 100” creates images in our mind of “the one” with corresponding positive or negative feelings that amplify the feeling of benefit or risk.22 Another foible is that envisioning a scary consequence may feel as frightening when its probability is low as when it is high, leading to what some have called probability neglect.23

Perhaps the most serious shortcoming of unanalyzed feelings is that they don't respond adequately to consequences large in scale, described by numbers or statistics.23 We will spare no effort to protect or rescue one identified individual, but that same life loses its value when others are also at risk. We will not feel or respond differently learning that 88 persons are in danger rather than 87. The feeling system loses sensitivity and responsiveness when the scale of a problem increases, a phenomenon known as “psychic numbing.” This insensitivity contributes significantly to societal underreaction to mass threats from problems such as climate change, poverty, famine, disease, and genocide that are communicated to us through statistics. We need faces, and stories, and careful deliberation to comprehend the realities underlying these statistics and to motivate the creation of analytic procedures, laws, and institutions that can counter the destructive anaesthetizing illusions brought about by psychic numbing, much in the spirit of Howard Kunreuther's essay in this issue.

NOTES

1. B. Fischhoff, C. Hohenemser, R. E. Kasperson, and R. W. Kates, “Can Hazard Management Be Improved?” Environment 20, no. 7 (1978): 16–20, 32–37.

2. P. Slovic, B. Fischhoff, and S. Lichtenstein, “Rating the Risks,” Environment 21, no. 3 (1979): 14–20, 36–39.

3. P. Slovic, ed., The Perception of Risk (London, UK: Earthscan, 2000).

4. P. Slovic, “Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield,” in M. H. Bazerman, D. M. Messick, A. E. Tenbrunsel, and K. A. Wade-Benzoni, eds., Environment, Ethics, and Behavior (San Francisco, CA: New Lexington, 1997), 277–313.

5. D. M. Kahan, et al., “Cultural Cognition of the Risks and Benefits of Nanotechnology,” Nature Nanotechnology 4, no. 2 (2009): 87–90.

6. O. Renn, T. Webler, and P. Wiedemann, Fairness and Competence in Citizen Participation: Evaluating Models for Environmental Discourse (Dordrecht, The Netherlands: Kluwer Academic, 1995), vol. 10.

7. National Research Council, Committee on Risk Characterization, P. C. Stern and H. V. Fineberg, eds., Understanding Risk: Informing Decisions in a Democratic Society (Washington, DC: National Academy Press, 1996).

8. R. E. Kasperson et al, “The Social Amplification of Risk: A Conceptual Framework,” Risk Analysis 8, no. 2 (1988): 177–87.

9. P. Slovic, S. Lichtenstein, and B. Fischhoff, “Modeling the Societal Impact of Fatal Accidents,” Management Science 30, no. 4 (1984): 464–74.

10. J. Flynn, P. Slovic, and H. Kunreuther, eds., Risk, Media, and Stigma: Understanding Public Challenges to Modern Science and Technology (London, UK: Earthscan, 2001).

11. J. A. Giesecke et al., “Assessment of the Regional Economic Impacts of Catastrophic Events: CGE Analysis of Resource Loss and Behavioral Effects of an RDD Attack Scenario,” Risk Analysis 32, no. 4 (2012): 583–600.

12. P. Slovic, ed., The Perception of Risk (London, UK: Earthscan, 2000).

13. A. Bostrom, B. Fischhoff, and M. G. Morgan, “Characterizing Mental Models of Hazardous Processes: A Methodology and an Application to Radon,” Journal of Social Issues 48, no. 4 (1992): 85–110.

14. C. J. Atman, A. Bostrom, B. Fischhoff, and M. G. Morgan, “Designing Risk Communications: Completing and Correcting Mental Models of Hazards Processes, Part I,” Risk Analysis 14, no. 5 (1994): 779–88.

15. A. Bostrom, M. G. Morgan, B. Fischhoff, and D. Read, “What Do People Know About Global Climate Change? 1. Mental Models,” Risk Analysis 14, no. 6 (1994): 959–70.

16. N. Kraus, T. Malmfors, and P. Slovic, “Intuitive Toxicology: Expert and Lay Judgments of Chemical Risks,” Risk Analysis 12, no. 2 (1992): 215–32.

17. B. Fischhoff, P. Slovic, S. Lichtenstein, S. Read, and B. Combs, “How Safe Is Safe Enough? A Psychometric Study of Attitudes Toward Technological Risks and Benefits,” Policy Sciences 9, no. 2 (1978): 127–52.

18. A. S. Alhakami and P. Slovic, “A Psychological Study of the Inverse Relationship Between Perceived Risk and Perceived Benefit,” Risk Analysis 14, no. 6 (1994): 1085–96.

19. M. L. Finucane, A. Alhakami, P. Slovic, and S. M. Johnson, “The Affect Heuristic in Judgments of Risks and Benefits,” Journal of Behavioral Decision Making 13, no. 1 (2000): 1–17.

20. P. Slovic, M. L. Finucane, E. Peters, and D. G. MacGregor, “The Affect Heuristic” in T. Gilovich, D. Griffin, and D. Kahneman, eds., Heuristics and Biases: The Psychology of Intuitive Judgment (New York, NY: Cambridge University Press, 2002), 397–420.

21. A. R. Damasio, Descartes' Error: Emotion, Reason, and the Human Brain (New York, NY: Avon, 1994).

22. P. Slovic, J. Monahan, and D. G. MacGregor, “Violence Risk Assessment and Risk Communication: The Effects of Using Actual Cases, Providing Instructions, and Employing Probability vs. Frequency Formats,” Law and Human Behavior 24, no. 3 (2000): 271–96.

23. C. R. Sunstein, “Terrorism and Probability Neglect,” Journal of Risk and Uncertainty 26, no. 2/3 (2003): 121–36.

24. S. Slovic and P. Slovic, Numbers and Nerves: Information, Emotion, and Meaning in a World of Data (Corvallis, OR: OSU Press, 2015).

Paul Slovic is the president of Decision Research and is a professor of psychology at the University of Oregon, Eugene, Oregon.

This material is based upon work supported by the National Science Foundation under Grant No. 1227729.


The following commentary is in response to "Pitfalls of Hazard Management", printed in the October 1978 issue of Environment.

 

Reducing Losses From Catastrophes: Role of Insurance and Other Policy Tools

 

We are in a new era of catastrophes. Worldwide, economic losses from natural catastrophes increased from $528 billion in the decade 1981–1990, to $1,197 billion during 1991–2000, and $1,213 billion during 2001–2010. In 2011 alone, economic losses amounted to over $400 billion, in large part due to the March 2011 Japan earthquake and resulting tsunami; 2012 brought another $170 billion in economic losses.1

Insured losses have dramatically increased as well. Between 1970 and the mid-1980s, annual insured losses from natural disasters worldwide (including forest fires) were only in the $3 billion to $4 billion range. Hurricane Hugo, which made landfall in Charleston, South Carolina, on September 22, 1989, was the first natural disaster in the United States to inflict more than $1 billion of insured losses, with insured losses of $4.2 billion (1989 prices). During the period 1980–1989, insured losses from disasters in the United States averaged $9.1 billion annually (2014 prices), far less than during the period 2005–2014, when insured losses from disasters in the United States averaged $24.7 billion annually (2014 prices).2 Figure 1 depicts the evolution of the direct economic losses and the insured portion from great natural disasters over the period 1980–2014. The flood damage to South Carolina in the aftermath of Hurricane Joaquin in October 2015 is estimated to be more than $1 billion, with insured losses of at least $450 million to the private insurance industry and the National Flood Insurance Program.

Figure 1  Natural Catastrophes Worldwide, 1980–2014


In dealing with this new era of catastrophes, insurance is not effectively meeting two of its most important objectives:

  • providing information to those residing in hazard-prone areas as to the nature of the risks they face.

  • incentivizing those at risk to undertake loss reduction measures prior to a disaster.

A street in New Orleans in the aftermath of Hurricane Katrina.

When factory mutual insurance companies were formed in the mid-1800s, these were their two central goals. Inspections were undertaken prior to issuing an insurance policy and were continued on a regular basis after coverage was in force. High risks had their policies canceled; premiums reflected risk and were reduced for factories that instituted additional risk reduction measures. In many cases, factory mutual companies would provide coverage only to firms that adopted specific loss prevention methods. For example, one company, the Spinners Mutual, insured only factories where automatic sprinkler systems were installed.3

This article proposes a strategy for coupling insurance with other policy tools so that the industry can return to its 19th-century roots in dealing with the risks facing property owners in hazard-prone areas. More specifically, I address the following question: What roles can the private insurance market and the public sector play in reducing losses from future natural disasters, recognizing the limitations of individuals and firms in dealing with low-probability, high-consequence (LP-HC) events and the challenges the insurance industry faces in providing coverage against these risks?

To answer this question it is helpful to understand why residents in hazard-prone areas often ignore future disasters, and how information can be presented in ways that they are more likely to pay attention to the hazard. To encourage investment in loss reduction measures, private insurance can be coupled with other policy tools. The public sector has a role to play by providing assistance to deal with affordability issues and offering financial protection to insurers against catastrophic losses.

Role of Intuitive and Deliberative Thinking in Dealing With Extreme Events

A large body of cognitive psychology and behavioral decision research over the past 30 years has revealed that individuals and organizations often make decisions under conditions of risk and uncertainty by combining intuitive thinking with deliberative thinking. In his thought-provoking book Thinking, Fast and Slow, Nobel Laureate Daniel Kahneman has characterized the differences between these two modes of thinking. Intuitive thinking (System 1) operates automatically and quickly with little or no effort and no voluntary control. It is often guided by emotional reactions and simple rules of thumb that have been acquired by personal experience. Deliberative thinking (System 2) allocates attention to effortful and intentional mental activities where individuals undertake trade-offs, and recognize relevant interdependencies and the need for coordination.4

Choices are normally made by combining these two modes of thinking and generally result in good decisions when individuals have considerable past experience as a basis for their actions. With respect to low-probability, high-consequence events, however, there is a tendency to either ignore a potential disaster or overreact to a recent one, so that decisions may not reflect expert risk assessments. For example, after a disaster, individuals are likely to want to purchase insurance even at high prices, while insurers often consider restricting coverage or even withdrawing from the market. In these situations, both parties focus on the losses from a worst-case scenario without adequately reflecting on the likelihood of this event occurring in the future.

Empirical studies have revealed that many individuals engage in intuitive thinking and focus on short-run goals when dealing with unfamiliar LP-HC risks.5-7 More specifically, individuals often exhibit systematic biases such as the availability heuristic, where the judged likelihood of an event depends on its salience and memorability.8 There is thus a tendency to ignore rare risks until after a catastrophic event occurs. This is a principal reason why it is common for individuals at risk to purchase insurance only after a large-scale disaster. A study of the risk perception of homeowners in New York City revealed that they underestimated the likelihood of water damage from hurricanes. This may partially explain why only 20% of those who suffered damage from Hurricane Sandy had purchased flood insurance before the storm occurred.9

Guiding Principles for Insurance

The following two guiding principles should enable insurance to play a more significant role in the management and financing of catastrophic risks.7,10

Principle 1—Premiums Should Reflect Risk

Insurance premiums should be based on risk to provide individuals with accurate signals as to the degree of the hazards they face and to encourage them to engage in cost-effective mitigation measures to reduce their vulnerability. Risk-based premiums should also reflect the cost of capital that insurers need to integrate into their pricing to assure an adequate return to their investors.

Catastrophe models have been developed and improved over the past 25 years to more accurately assess the likelihood and damages resulting from disasters of different magnitudes and intensities. Today, insurers and reinsurers utilize the estimates from these models to determine risk-based premiums and how much coverage to offer in hazard-prone areas.11
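To make the pricing logic concrete, a risk-based premium can be thought of as the expected annual loss implied by a catastrophe model's loss scenarios, plus loadings for the insurer's cost of capital and administrative expenses. The sketch below is purely illustrative; the scenario probabilities, loss amounts, and loading factors are assumptions for the example, not outputs of any actual catastrophe model.

```python
# Illustrative sketch of a risk-based premium: expected annual loss
# from a set of modeled loss scenarios, plus loading factors for the
# insurer's cost of capital and administrative expenses.
# All figures below are hypothetical.

scenarios = [
    # (annual probability, insured loss in dollars)
    (0.01, 200_000),  # severe event
    (0.04, 25_000),   # moderate event
    (0.10, 4_000),    # minor event
]

# Expected annual loss is the probability-weighted sum of losses.
expected_annual_loss = sum(p * loss for p, loss in scenarios)

capital_loading = 0.25  # return required by investors (assumed)
admin_loading = 0.15    # administrative costs (assumed)

premium = expected_annual_loss * (1 + capital_loading + admin_loading)
print(f"Risk-based premium: ${premium:,.0f}")
```

A premium computed this way signals the degree of hazard directly: if mitigation reduces the modeled losses, the expected annual loss, and hence the premium, falls with it.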

If Principle 1 is applied to risks where premiums are currently subsidized, some residents will be faced with large price increases. This concern leads to the second guiding principle.

Principle 2—Dealing With Equity and Affordability Issues

Any special treatment given to low-income individuals currently residing in hazard-prone areas should come from general public funding and not through insurance premium subsidies. It is important to note that Principle 2 applies only to those individuals who currently reside in hazard-prone areas. Those who decide to locate in these regions in the future would be charged premiums that reflect the risk.

Strategies for Reducing Future Losses

Use of Choice Architecture

If those residing in hazard-prone areas perceive the likelihood of losses to be below their threshold level of concern, they will have no interest in purchasing insurance or investing in loss reduction measures. One way to address this problem is to recognize that individuals' decisions depend in part on how different options are framed and presented—that is, the use of choice architecture.12,13 In the context of LP-HC events, framing refers to the way in which likelihoods and outcomes of a given risk are characterized.

With respect to the likelihood dimension, people are better able to evaluate low-probability risks when these are presented via a familiar concrete context. For example, individuals might not understand what a one-in-a-million risk means but can more accurately interpret this figure when it is compared to the risk of an automobile accident (1 in 20) or lightning striking your home on your birthday (less than one in a billion).

Probability is more likely to be a consideration if it is presented using a longer time frame. People are more willing to wear seat belts if they are told they have a 1-in-3 chance of an accident over a 50-year lifetime of driving, rather than a 1-in-100,000 chance of an accident on each trip they take.14 Similarly, a homeowner or manager considering flood protection over 25 years is far more likely to take the risk seriously if told that the chance of at least one severe flood occurring during this period is greater than 1 in 5, rather than 1 in 100 in any given year.15
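The arithmetic behind this longer-time-frame framing is simple: for an event with annual probability p, the chance of at least one occurrence in n independent years is 1 − (1 − p)^n. A minimal sketch, using the 1-in-100 annual flood risk and 25-year horizon cited above:

```python
def cumulative_probability(p: float, n: int) -> float:
    """Chance of at least one occurrence in n independent years,
    given an annual probability p."""
    return 1.0 - (1.0 - p) ** n

# A "100-year" flood (p = 0.01) viewed over a 25-year horizon:
risk_25yr = cumulative_probability(0.01, 25)
print(f"{risk_25yr:.3f}")  # 0.222, i.e., greater than 1 in 5
```

The same annual risk that sounds negligible trip-by-trip or year-by-year crosses the threshold of concern when accumulated over a horizon people actually care about, such as the life of a mortgage.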

Another way to frame the risk so that individuals pay attention is to construct a worst-case scenario. Residents in hazard-prone areas who learn about the financial consequences of being uninsured if they were to suffer severe damage from a flood or earthquake would have an incentive to purchase insurance coverage and may refrain from canceling their insurance even if they have not made a claim for a few years.

Means-Tested Vouchers

Individuals at risk may be reluctant to invest in cost-effective loss reduction measures when these involve a high up-front cash outlay. Given budgetary constraints and individuals' focus on short time horizons, it is difficult to convince them that the expected discounted benefits of the investment over the expected life of the property exceed the immediate up-front cost. Decision makers' resistance is likely to be compounded if they perceive the risk to be below their threshold level of concern. Residents in hazard-prone areas may also be concerned that if they move in the next few years, the property value of their home will not reflect the expected benefits from investing in loss reduction measures because the new owner will not be concerned about the risk of a disaster.16

Flooding is one of many environmental hazards facing some communities today.

One way to maintain risk-based premiums while at the same time addressing issues of affordability is to offer means-tested vouchers that cover part of the cost of insurance. Several existing programs could serve as models for developing such a voucher system: the Food Stamp Program, the Low Income Home Energy Assistance Program (LIHEAP), and the Universal Service Fund (USF).10 The amount of the voucher would be based on current income and determined by a specific set of criteria as outlined by the National Research Council.17 If the property owners were offered a multiyear loan to invest in mitigation measure(s), the voucher could cover not only a portion of the resulting risk-based insurance premium, but also the annual loan cost to make the package affordable. As a condition for the voucher, the property owner could be required to invest in mitigation.

A woman walks among building ruins in the aftermath of a large earthquake in Nepal.

An empirical study of homeowners in Ocean County, New Jersey, reveals that the amount of the voucher is likely to be reduced significantly from what it would have been had the structure not been mitigated, as shown in Figure 2 for property in a 100-year coastal hazard flood area (the V Zone) and a 100-year inland hazard area (the A Zone).18

Well-Enforced Building Codes

Risk-based insurance premiums could be coupled with building codes so that those residing in hazard-prone areas adopt cost-effective loss-reduction measures. Following Hurricane Andrew in 1992, Florida reevaluated its building code standards, and coastal areas of the state began to enforce high-wind design provisions for residential housing. As depicted in Figure 3, homes that met the wind-resistant standards enforced in 1996 had a claim frequency that was 60% less than homes built prior to that year. The average reduction in claims from Hurricane Charley for each damaged home in Charlotte County built according to the newer code was approximately $20,000.19

Homeowners who adopt cost-effective mitigation measures could receive a seal of approval from a certified inspector certifying that the structure meets or exceeds building code standards. A seal of approval could increase the property value of the home by informing potential buyers that damage from future disasters is likely to be reduced because the mitigation measure is in place. Evidence from a July 1994 telephone survey of 1,241 residents in six hurricane-prone areas on the Atlantic and Gulf coasts supports some type of seal of approval. More than 90% of the respondents felt that local home builders should be required to adhere to building codes, and 85% considered it very important that local building departments conduct inspections of new residential construction.20

Figure 2.  Cost of Program to the Federal Government and a Hypothetical Homeowner


Multiyear Insurance

Insurers could consider designing multiyear insurance (MYI) contracts of three to five years with the policy tied to the structure rather than the property owner. The annual risk-based premium would remain stable over the length of the contract. Property owners who cancel their insurance policy early would incur a penalty cost, in the same way that those who refinance a mortgage have to pay a cancellation cost to the bank issuing the mortgage. Insurers would have an increased incentive to inspect the property over time to make sure that building codes are enforced, something they would be less likely to do with annual contracts. For a private insurer to want to offer multiyear coverage, there needs to be sufficient demand to cover the fixed and administrative costs of developing and marketing the product. A Web-based experiment revealed that a large majority of respondents preferred a 2-year insurance contract over two 1-year contracts, and that offering such contracts increased the aggregate demand for disaster insurance.21

Figure 3.  Average Claim Severity by Building Code Category From Hurricane Charley


Features of a Private–Public Partnership for Insuring LP-HC Events

The history of flood insurance provides guidelines for developing a public-private partnership for insuring extreme events. Following the severe Mississippi River floods of 1927, no private insurer would offer flood coverage, a gap that eventually led to the formation of the federally run National Flood Insurance Program (NFIP) in 1968.

To market coverage against the flood risk, private insurers need the partnership of the public sector to deal with issues of affordability and catastrophic losses, and develop standards and regulations that will be well enforced. Such a program for residential property in flood-prone areas would include these features:

  • Risk-based premiums, using accurate hazard maps and damage estimates, would give private insurers an incentive to market coverage.

  • Means-tested vouchers to address the affordability issue would be provided by the public sector to those who undertook cost-effective mitigation measures.

  • Premium discounts would be given to homeowners to reflect the reduction in expected losses from undertaking cost-effective mitigation measures. Long-term loans for mitigation would encourage these investments.

  • Well-enforced building codes and seals of approval would provide an additional rationale to undertake these loss-reduction measures. Land-use regulations could restrict property development in high hazard areas.

  • A multiyear insurance (MYI) policy with stable annual premiums tied to the property would prevent policyholders from canceling their policies.

  • Private reinsurance and risk-transfer instruments marketed by the private sector would cover a significant portion of the catastrophic losses from future disasters.

  • Federal reinsurance would be provided to insurers so they are protected against extreme losses.

The benefits of this proposed program would be significant: less damage to property and potentially higher property values, lower costs and peace of mind for homeowners knowing they are protected against future disasters, more secure mortgages for banks and financial institutions, and less disaster relief assistance from the public sector borne by the general taxpayer.

The NFIP, which comes up for renewal in 2017, provides a target of opportunity for moving toward a more effective public-private partnership. Guidelines for modifying the NFIP so that the private insurance sector can be more involved, addressing affordability issues, and encouraging investments in cost-effective mitigation measures are discussed in two recent reports by the National Research Council.17,22 Changes in the NFIP could serve as a model for dealing with other extreme events.

In summary, the case for making communities and residential property more resilient to floods and other natural disasters by investing in loss reduction measures is compelling, given increased economic development in hazard-prone areas.23 For insurers to be part of such a strategy in the spirit of the factory mutuals, there is a need for support from other key interested parties. These include real estate agents, developers, banks and financial institutions, and residents in hazard-prone areas, as well as public-sector organizations at the local, state, and federal levels.

NOTES

1. Munich Re, Topics Geo: Natural Catastrophes 2012 (Munich, Germany: Munich Re, 2013). https://www.munichre.com/site/wrap/get/documents_E200191439/mram/assetpool.mr_america/PDFs/3_Publications/Topics_Geo_2012_us.pdf.

2. http://www.iii.org/issue-update/catastrophes-insurance-issues

3. J. Bainbridge, Biography of an Idea: The Story of Mutual Fire and Casualty Insurance (Garden City, NY: Doubleday, 1952).

4. D. Kahneman, Thinking, Fast and Slow (New York, NY: Farrar, Straus and Giroux, 2011).

5. D. M. Cutler and R. J. Zeckhauser, “Extending the Theory to Meet the Practice of Insurance,” in R. Litan and R. Herring, eds., Brookings–Wharton Papers on Financial Services (Washington, DC: Brookings Institution, 2004), 1–53.

6. D. Krantz and H. Kunreuther, “Goals and Plans in Decision-Making,” Judgment and Decision Making 2 (2007): 137–68.

7. H. Kunreuther, M. V. Pauly, and S. McMorrow, Insurance and Behavioral Economics: Improving Decisions in the Most Misunderstood Industry (New York, NY: Cambridge University Press, 2013).

8. A. Tversky and D. Kahneman, “Availability: A Heuristic for Judging Frequency and Probability,” Cognitive Psychology 5, no. 2 (1973), 207–32.

9. W. Botzen, H. Kunreuther, and E. Michel-Kerjan, “Divergence Between Individual Perceptions and Objective Indicators of Tail Risks: Evidence From Floodplain Residents in New York City,” Judgment and Decision Making 10, no. 4 (2015): 365–85.

10. H. Kunreuther and E. Michel-Kerjan, At War with the Weather: Managing Large-Scale Risks in a New Era of Catastrophes (Cambridge, MA: MIT Press, 2011).

11. P. Grossi and H. Kunreuther, Catastrophe Modeling: A New Approach to Managing Risk (New York, NY: Springer, 2005).

12. R. Thaler and C. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2008).

13. E. J. Johnson, S. B. Shu, B. C. G. Dellaert, C. Fox, D. G. Goldstein, G. Häubl, R. P. Larrick, et al., “Beyond Nudges: Tools of a Choice Architecture,” Marketing Letters 23 (2012): 487–504.

14. P. Slovic, B. Fischhoff, and S. Lichtenstein, “Accident Probabilities and Seat Belt Usage: A Psychological Perspective,” Accident Analysis & Prevention 10 (1978): 281–85.

15. N. D. Weinstein, K. Kolb, and B. D. Goldstein, “Using Time Intervals Between Expected Events to Communicate Risk Magnitudes,” Risk Analysis 16, no. 3 (1996): 305–8.

16. H. Kunreuther, R. J. Meyer, and E. Michel-Kerjan, “Overcoming Decision Biases to Reduce Losses From Natural Catastrophes,” in E. Shafir, ed., Behavioral Foundations of Policy (Princeton, NJ: Princeton University Press, 2013), 398–413.

17. National Research Council, Affordability of National Flood Insurance Program Premiums—Report 1 (Washington, DC: National Academies Press, 2015).

18. C. Kousky and H. Kunreuther, “Addressing Affordability in the National Flood Insurance Program,” Journal of Extreme Events 1, no. 1 (2014): 1–28.

19. Institute for Business & Home Safety, The Benefits of Modern Wind Resistant Building Codes on Hurricane Claim Frequency and Severity—A Summary Report (Tampa, FL: Institute for Business & Home Safety, 2007). http://www.coalition4safety.org/resources/IBHS_HurricaneCharleySummaryReport.pdf

20. Insurance Institute for Property Loss Reduction, Coastal Exposure and Community Protection: Hurricane Andrew's Legacy (Wheaton, IL: IRC, and Boston, MA: IIPLR, 1995).

21. H. Kunreuther and E. Michel-Kerjan, “Demand for Fixed-Price Multi-Year Contracts: Experimental Evidence From Insurance Decisions,” Journal of Risk and Uncertainty 51, no. 2 (2015).

22. National Research Council, Tying Flood Insurance to Flood Risk for Low-Lying Structures in the Floodplain (Washington, DC: National Academies Press, 2015).

23. National Research Council, Disaster Resilience: A National Imperative (Washington, DC: National Academies Press, 2013).

Howard Kunreuther is the James G. Dinan Professor of Decision Sciences and Public Policy, Operations, Information and Decisions (OID) Department, and Co-Director of the Risk Management and Decision Processes Center, Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania.
