Mass panic of this kind has a snowballing character. It starts with one or two people forming a strong enough belief (due to an external stimulus of some kind - popping balloons, a backfiring exhaust or nearby applause) to motivate a behaviour change: running away, raising the alarm, communicating to others and so on. Subsequently, the observation of this behaviour becomes sufficient evidence for observers (who may include security officials) to act. At this point the panic spreads until it has 'burnt itself out' by reaching everyone in the vicinity.
|Who started it? Photo: redjar|
By itself, this suggests that by the time a panic has reached the 'crystallisation' phase, it will be difficult to stop. Baseless mass panics are not vanishingly rare, but nor are they common. Data might be hard to come by, but it's plausible that baseless mass panics occur with a frequency of the same order of magnitude as genuine threats. If so, observing a mass panic should raise the probability that a genuine threat is present well above the 1-in-100 or so required to motivate evasion.
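The updating step here is just Bayes' rule. As a rough sketch - all the numbers below are illustrative assumptions, not data - if baseless panics and genuine attacks are similarly rare, then seeing a panic leaves the two hypotheses with comparable posterior weight:

```python
# Illustrative Bayes update: how observing a mass panic shifts the
# probability that a genuine threat is present. All numbers here are
# hypothetical, chosen only to show the arithmetic.

def posterior_threat(prior_threat, p_panic_given_threat, p_panic_given_no_threat):
    """P(threat | panic observed) via Bayes' rule."""
    p_panic = (p_panic_given_threat * prior_threat
               + p_panic_given_no_threat * (1 - prior_threat))
    return p_panic_given_threat * prior_threat / p_panic

# Suppose a 1-in-10,000 prior on a genuine attack, panic near-certain
# given a real attack, and baseless panic occurring at the same
# order-of-magnitude frequency as real attacks.
p = posterior_threat(prior_threat=1e-4,
                     p_panic_given_threat=0.9,
                     p_panic_given_no_threat=1e-4)
print(round(p, 2))  # roughly 0.47 - comfortably above the 1-in-100 threshold
```

With these made-up inputs the posterior lands near 50%, which is the point of the argument: once a panic is observed, the 'is it real?' question is already close to a coin flip, far above the level needed to justify moving.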
The interesting bit - the part to which the outcome is most sensitive - is therefore the 'nucleation' phase. What does it take for there to be more than (say) a 1% probability of a genuine threat? The answer is 'not much'. In fact, if our judgements are well calibrated, situations in which we believe there to be a 1% probability of (say) a terrorist attack ought to be 100 times more frequent than actual terrorist attacks. Why, then, aren't mass panics more frequent? It suggests that nearly all potential mass panics must dissolve at the 'nucleation' point. Why?
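The frequency claim can be made concrete with a toy simulation (again, the numbers are made up for illustration). If 1%-probability judgements are well calibrated, only about one in a hundred of them precedes a real attack:

```python
import random

random.seed(0)

# Simulate calibrated 1%-probability 'nucleation' situations: each one
# independently turns out to involve a real attack 1% of the time.
# Purely illustrative numbers, not data.
situations = 100_000
attacks = sum(random.random() < 0.01 for _ in range(situations))
print(situations / attacks)  # roughly 100 situations per genuine attack
```

So if every such situation tipped into panic, panics would outnumber attacks a hundred to one - which is not what we observe, and is why most potential panics must fizzle at nucleation.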
The answer plausibly lies in the speed with which new information resolves the issue. Normally, it doesn't take long to confirm that popping balloons or a backfiring exhaust are just that and not sounds of a terrorist attack. Where this kind of confirmatory feedback is not forthcoming, mass panics can quickly escalate to crystallisation, at which stage snowballing panic ceases to be stimulus-driven and becomes self-sustaining.
Designing against baseless mass panic should therefore focus on ways for information (that there isn't an attack) to spread quickly - fast enough to reach people before they start taking flight. Large open spaces should be less vulnerable than spaces with lots of corridors and subspaces. Reducing extraneous noise might also help to increase the speed with which information can travel. On the other hand, we might think that occasional mass panics are a price worth paying for other principles to be emphasised in the design of public spaces. They are not very frequent, and when they do occur, they don't necessarily suggest that something's going wrong with our decision-making.
The model of mass panics presented above provides a useful analogy to decision-making within organisations. Governments, businesses and other groups will often make decisions based on a set of beliefs. But most people in them don't, and don't need to, know what the reasons for those beliefs are. If enough other people believe that the decision's a good one, that's evidence in itself that someone must know what's going on. This of course is the origin of groupthink, and while not necessarily evidence of organisational irrationality, it's been at least one driver for a number of high-profile decision errors, arguably including the Bay of Pigs invasion, the surrender of France in 1940, the collapse of Kodak and the invasion of Iraq. As with our design of public spaces, reducing organisational 'mass panics' is a matter of increasing the speed of information (real information, that is, not propaganda). This is one way of interpreting the recommendations of the Chilcot Report into the UK decision processes underpinning the Iraq war, which we discussed in the Cognitive Engineering podcast a couple of weeks ago.