Wednesday, 28 January 2015

Are high-impact events necessarily rare?

It seems fortunate that high-impact events are rare. Over half of all terrorist attacks kill no-one, one in five kill one person, and attacks that kill tens or hundreds of people are vanishingly rare. Earthquakes increase in frequency by a factor of about ten with every point decrease in magnitude. The same sort of distribution applies to meteors, floods, solar activity and so on. Many event categories follow an exponential or power-law distribution with regard to impact. Is it just luck that we live in a world where high-impact events are rare? Here are three possible reasons why high-impact events might be necessarily low-probability.



The first relates to the processes whereby impacts are generated. Events that are exponentially distributed are often generated by processes which have a constant probability of becoming larger, or through a process of sequential division. All 2km-long rivers started out as 1km-long rivers, but not all 1km-long rivers become 2km-long rivers. Terrorist attacks that kill 10 people need to start by being terrorist attacks that kill 1 person, but not all of those go on to kill any more than that. Starting from the other end, meteors break into several smaller ones (source: Asteroids), so if there is a sufficiently high percentage chance of breakage, there will be lots of grain-of-sand-sized meteors for every large one.
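A minimal sketch of this first mechanism, assuming each event has a fixed, hypothetical chance p of growing one step further before it stops: the sizes come out geometrically distributed, with each extra unit of impact a constant factor rarer than the last.

```python
import random
from collections import Counter

# Toy model of the first mechanism: an event starts at size 1 and,
# with an assumed probability p, grows one step further; otherwise it
# stops. The resulting sizes are geometrically distributed - an
# exponential-type, one-tailed distribution of 'impact'.
p = 0.5  # hypothetical growth probability
sizes = []
for _ in range(100_000):
    size = 1
    while random.random() < p:
        size += 1
    sizes.append(size)

# With p = 0.5, counts roughly halve with each extra unit of size.
for size, count in sorted(Counter(sizes).items())[:8]:
    print(size, count)
```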

This has a superficial plausibility, but the problem is that a number of other event types - particularly those generated through additive processes - do not come in one-tailed distributions like this. Daily temperature, for example, is normally distributed and (thankfully) does not therefore have a large peak at absolute zero; the same goes for rainfall, sizes of predators, and severity of heart attacks. Why are there no distributions of this kind producing frequent high-impact events?

One reason might be habituation. In other words, 'impact' might be defined implicitly or explicitly around a baseline of 'normality'. If terrorist attacks that killed 100 people happened every day, this theory goes, we'd no longer see them as an extreme event. So it's not the high impact that drives low probability, but low probability which drives the perception of high impact.

Another possible and related reason is adaptation.  Evolution adapts organisms for survival within their environment, so we tend to evolve 'immunity' to frequent events. If large floods were commonplace, perhaps we would be better swimmers, or able to breathe underwater, like intertidal dwellers such as barnacles.

One interesting feature of this quite-fundamental question is how hard it is to structure a way of answering it.  How might we test hypotheses purporting to explain why high-impact events are low-frequency?

Thursday, 22 January 2015

Show More, Take All

There is a deservedly-obscure bravado-driven pub game called 'Show More, Take All', in which the two players compare the contents of their wallets, and the person with the most money takes all of the other person's cash. Like a lot of asymmetric-information games, it has an interesting solution. Abstractly, the game is modelled as follows: each player has an amount of money known only to them, and a choice either to 'play' or 'not play'. Only if both players choose 'play' is the money transferred, from the person with the least to the person with the most.

The game is potentially-terrifying.  One of Aleph Insights' correspondents writes that he was once offered a game of SMTA by a friend-of-a-friend in a pub.  Coincidentally, he'd just drawn out £700 as he was travelling abroad and was about to get foreign exchange.  He thought he therefore stood a very good chance of winning and accepted.  His interlocutor, it turns out, was a builder who had just finished a job and was carrying £1500 in his wallet.  Only a last-minute attack of conscience on the builder's part prevented the game from going ahead.

You might start by thinking that a good strategy would be to 'play' if you have a large amount of money, and 'not play' if you have a small amount of money.  This kind of strategy can be specified by a single number (call it 'x') which is the cut-off below which you don't play.  

The trouble with a strategy like this is that, if one player is playing it, the other person then has an incentive not to play if their amount of money is close to 'x'.  The optimal cut-off against someone playing with a strategy 'x' is going to be something higher than x - call it 'x1'.  In turn, of course, the strategy that beats strategy x1 will be a cut-off even higher than x1 - call it 'x2'.  And so on.
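A minimal sketch of this escalation, under the simplifying assumption that both wallets are independently uniform on [0, 1]: if your opponent plays cut-off y, then playing with wealth w pays (w² − y²)/2 − w(1 − w) in expectation, and setting that to zero gives the best-response cut-off.

```python
# Best response to an opponent playing cut-off y, assuming both wallets
# are independent and uniform on [0, 1]. Playing with wealth w wins v
# from each playing opponent with v in [y, w] and loses w when v > w,
# so the break-even wealth solves 3w^2 - 2w - y^2 = 0.
def best_response(y: float) -> float:
    return (1 + (1 + 3 * y * y) ** 0.5) / 3

x = 0.0  # start against someone who plays with any amount
for step in range(6):
    x = best_response(x)
    print(step + 1, round(x, 4))  # 0.6667, 0.8425, 0.923, ... -> 1
```

The cut-offs climb towards 1, at which point nobody ever plays.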

The upshot is that there is no cut-off that can be a best strategy for both players. The equilibrium is for both players to 'not play', regardless of how much money they have. Intuitively, the only person you would want to play against is someone who didn't want to play themselves. In the words of the WOPR from 'WarGames', the only winning move is not to play.

This perhaps-surprising solution raises a puzzle as to why superficially-similar real-world situations can occur. One obvious analogue is armed conflict: people should only enter into conflicts that they know they're going to win; why, then, are there always two willing parties? They can't both be right. What features of armed conflict make it differ relevantly from 'Show More, Take All', such that war remains a viable course of action?

(Photo: Cecil Beaton)

"Never, never, never believe any war will be smooth and easy, or that anyone who embarks on the strange voyage can measure the tides and hurricanes he will encounter. The statesman who yields to war fever must realise that once the signal is given, he is no longer the master of policy but the slave of unforeseeable and uncontrollable events. Antiquated War Offices, weak, incompetent, or arrogant Commanders, untrustworthy allies, hostile neutrals, malignant Fortune, ugly surprises, awful miscalculations — all take their seats at the Council Board on the morrow of a declaration of war. Always remember, however sure you are that you could easily win, that there would not be a war if the other man did not think he also had a chance." - Winston Churchill

Why not Randomise Election Timings?

'Backwards induction' is a useful solution concept in game theory. Games that have a finite number of turns can often be straightforwardly solved by looking at the final outcome, working out the optimal play in the previous turn, then using that to work out the optimal play in the turn before that, and so on. Games like the Rubinstein bargaining model, or the finitely-repeated prisoners' dilemma (PD), yield interesting insights when solved with backwards induction. In the Rubinstein game, the relative power of the last mover 'infects' the rest of the game and leads to a division of spoils in their favour even if agreement is reached on turn one. In the repeated PD, co-operation can be sustained as long as there is the prospect of further games. But where there is a known final game - be it ever so far away - the co-operative solution can 'unravel' from the end and lead to a destructive 'defection' solution from the start.
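To make 'the prospect of further games' concrete, here is the standard textbook condition, a sketch using the conventional PD payoff labels T > R > P > S (temptation, reward, punishment, sucker's payoff) and a continuation probability δ: the 'grim trigger' strategy - co-operate until the other player defects, then defect forever - sustains co-operation exactly when co-operating forever beats a one-off defection, i.e. when

```latex
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{T-R}{T-P}
```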

A Hive of Inactivity (Photo by David Iliff. License: CC-BY-SA 3.0)

The timing of political elections, and its ramifications for legislative activity, are often analysed using a similar approach.  There is little incentive to pass legislation at the end of a parliament: there is no time for the incumbents to gain political capital from it, and if they lose the opposition will either get the credit or simply reverse the decision on taking power.  In the UK, there has been talk of a 'zombie parliament' while MPs wait for the election.  The US's four-yearly electoral cycle produces a similar torpor in the election year - with the added dampener that in the second term there is no prospect for the incumbent to be re-elected.  There is no question, of course, of giving governments total freedom to decide the timing of elections: this would be an invitation to perpetual delay and functional tyranny.

Game theory provides a possible solution, though. In the repeated PD, if you randomly decide whether or not the game will be played once more, the 'backwards induction' effect disappears (provided the probability of a repeat is above a certain level). We could easily implement this for the UK. It would require a National Lottery-style ball tumbler containing 61 balls, two of which would be labelled 'ELECTION'. Each morning, before parliamentary business begins, two balls would be drawn. If both are the 'ELECTION' balls then an election would be called in - say - six months' time. This would produce parliaments every five years on average, although sometimes you'd have elections very close to one another, and other times they would drag on for considerably longer.
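The arithmetic checks out: the chance of drawing both 'ELECTION' balls from 61 is (2/61) × (1/60) = 1/1830 per day, so the mean wait is 1830 days, or just over five years. A quick simulation (a sketch, assuming one draw every calendar day):

```python
import random

DAILY_P = (2 / 61) * (1 / 60)  # = 1/1830: both ELECTION balls drawn

def days_until_election() -> int:
    """Count daily draws until both ELECTION balls come out together."""
    days = 0
    while True:
        days += 1
        if random.random() < DAILY_P:
            return days

parliaments = [days_until_election() for _ in range(10_000)]
print(sum(parliaments) / len(parliaments) / 365.25)  # about 5.0 (years)
```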

The main advantage would be the elimination of 'zombieism': every day, the expected length of time till the next election would be the same.  Watching the draw would also make for a fun five-minute coffee break for politicos.  What would the disadvantages be?  Why couldn't we randomise elections?

Edit (9am): six months is probably too long; it would simply replicate the problem.  Perhaps two weeks would be better?  What would the optimal time be?

Tuesday, 20 January 2015

Are Hypotheses Instrumental or Fundamental?

At the most fundamental level, there is only one decision-problem. You have an information-set (call it 'I'). You have a ranking function that tells you, for any outcome, how desirable it is - call this function 'U(x)', for utility, where 'x' is an outcome. We could say that U takes as its arguments states-of-the-world, because in one sense it's the world you care about and are trying to affect. But the model is immediately more parsimonious if we think of U as taking information-sets as its arguments; after all, a decision-making agent only interacts with the world through information, and can only make inferences about the world using it; it makes more sense to think of the decision-maker's goal as being other information-sets that it would like to get to. And finally you have a decision-variable, representing the bundle of things it's in your power to decide the value of: call this bundle 'D'.

All decisions can be represented this way.  And the 'single unified decision-problem' is to choose the value of D in such a way as to maximise expected desirability of the outcomes.  Any value for the decision-variable - e.g. D=2 - will map to a probability distribution over a set of outcomes.  One or more values of the decision-variable will map to outcome probability distributions that have a higher expectation than any other value for D: this is (or these are) the optimal decisions.  This means we can represent the optimal decision as another function, D*(I, U(x)), which maps an information-set, together with a utility function, onto a value of D.  The decision is then determined only by the information at hand, and the objectives that we have represented by the utility function.
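In symbols, a sketch using the notation above (with P(x | d, I) denoting the probability of outcome x given the decision d and the information-set I):

```latex
D^{*}(I, U) \;=\; \operatorname*{arg\,max}_{d \in D} \; \mathbb{E}\left[U(x) \mid d, I\right]
\;=\; \operatorname*{arg\,max}_{d \in D} \sum_{x} P(x \mid d, I)\, U(x)
```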

The interesting thing about this satisfying and abstract representation of the decision-problem is that nowhere in it do 'hypotheses' or 'beliefs' appear.

In the analytical literature, the hypothesis is a fundamental concept.  Probability is defined in terms of the connections between hypotheses; the probability of a hypothesis A, given some other hypothesis B, is precisely defined as a measure of the event-space in which both A and B are true, divided by the size of the event-space in which B is true.  This idea is fundamental to Bayesian inference, information theory, statistics and so on.
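In standard notation, that definition is:

```latex
P(A \mid B) \;=\; \frac{P(A \wedge B)}{P(B)}
```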

But if we don't need hypotheses to state the fundamental decision-problem, it suggests that hypotheses actually only serve an instrumental purpose in decision-making.  Some models of inference, such as Solomonoff induction, do dispense with hypotheses, but they are not designed to form practical, implementable approaches to solving decision problems.  Perhaps hypotheses, and particularly the law-like ones that science aims to identify, act as a useful shorthand for a vast and complex range of possible information-sets that would be impossible to enumerate exhaustively.  Hypotheses' role as mental representations of possible worlds suggests that our cognitive machinery may have evolved them for this reason.  It is an interesting question as to what sorts of stability must exist in the information environment for hypotheses to be a useful instrument for making decision-focused inferences.  

Saturday, 17 January 2015

Narrative and Propositional Scenarios

In their widely-cited 1983 paper, Extensional Versus Intuitive Reasoning, Tversky and Kahneman discuss the 'conjunction fallacy'.  This is an intriguing phenomenon in which adding more detail to a scenario increases its perceived probability, in a way that is unambiguously fallacious.

The conjunction fallacy suggests that when asked about the probability of a scenario like this:

"US forces engage in a fatal confrontation with Chinese forces"

people put a lower figure on it than for a scenario like this:

"A US naval patrol vessel, upholding the Phillippines' claim to the Second Thomas Shoal, attempts to inderdict an approaching Chinese landing craft.  The vessels collide and several Chinese sailors are killed."

even though the latter is an extremely specific scenario that forms part of the family of scenarios circumscribed by the former, and so must have a smaller probability.
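This is just the conjunction rule at work: writing A for the general scenario and B for the added narrative detail, the detailed scenario is the conjunction of the two, and

```latex
P(A \wedge B) \;\le\; P(A)
```

with equality only in the degenerate case where the detail adds no information at all.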

This leaves scenario analysts in a difficult position.  Propositional scenarios - like the former of the two above - might be dismissed before being considered appropriately, leading to collective failures of imagination.  Narrative scenarios - like the latter of the two above - can seem more vivid to customers and might lead to better decisions about prioritisation of threats, but from a technical point-of-view have a vanishingly small probability of being true.

The best approach might be to see narrative scenarios as a tool, rather than an end-product, for getting decision-makers to identify categories of scenario that are potentially significant and worth investigating. Analysts can then convert these infinitesimally-likely scenarios into propositional scenarios for probabilistic assessment.

More widely, the problem of what to do when faced with a customer who is themselves predictably biased is a difficult one faced by professionals not just in intelligence but across all policy domains.

Thomas Schelling
(photo: T Zadig)
"There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have considered seriously looks strange; what looks strange looks improbable; what is improbable need not be considered seriously.”

Saturday, 3 January 2015

'Luck' is Ignorance

A new paper in 'Science' suggests that differences in stem-cell division rates strongly influence differences in cancer rates in different types of tissue. Due to some perhaps-unfortunate wording in the paper's abstract, news organisations have been reporting that two-thirds of cancer is explained by 'bad luck'. The paper almost certainly does not suggest this conclusion. But the question of whether 'luck' can be an explanation of any kind at all is an interesting one.

A naive, but pervasive, misconstrual of probabilities is that they measure some quantity in the world - as though the probability of a '6' on a six-sided die - one-in-six - is some feature of the die itself that we can measure.  This view has been largely supplanted by the Bayesian interpretation of probability - a far more powerful set of concepts - in which 'probability' is a measure of the information available to, or alternatively the ignorance of, the observer.

Dr Arbuthnot

The extraordinary John Arbuthnot, forward-thinking as ever, summed up this insight as early as 1692: "It is impossible for a Die, with such determin'd force and direction, not to fall on such determin'd side, only I don't know the force and direction which makes it fall on such determin'd side, and therefore I call it Chance, which is nothing but the want of art."

In The Imaginary Invalid, Molière pokes fun at some attendant doctors who seek to explain opium's sleep-inducing properties by citing its 'virtus dormitiva'. 'Virtus dormitiva' simply means 'sleep-inducing quality', so the doctors are really saying 'opium makes you sleepy because of its sleep-inducing quality'. It looks like an explanation, but it isn't.

It's the same with the ascription of events to 'luck' or 'chance'.  To say something happened 'by chance' is merely to say that it was caused by things of which we are ignorant, or in other words, 'because reasons'.

The Evolution of Analytical Questions

Analytical problems - the things organisations identify as problems, worry about, and employ people to gather information on and analyse - tend to move over time from being open questions to being closed questions.  Open questions cannot be answered 'yes' or 'no', and begin with words like 'what', 'how', and 'why'.  Closed questions can often be answered 'yes' or 'no' - or with a specific value - and begin with words like 'will', 'can', 'is', 'how many', or 'when'.

Open questions often do not begin with a clear solution concept.  Investigation and analysis of open questions tends to be more intuitive, associative, divergent, and multidisciplinary.  The result of this type of analysis is one or more closed questions that can be answered using deductive, critical, enumerative and algorithmic approaches where subject-matter expertise is valuable.  Time, and more information, transform open questions alchemically into closed questions.

It is through this process that questions like 'what will happen in Iraq in the next six months?', 'where shall we go on holiday next year?' and 'how are the continents formed?' transform into 'will the Kurdistan Regional Government hold an independence referendum in the next six months?', 'when does the rainy season start in Kerala?', and 'how fast is continental drift happening?'

Because of the difficulty of approaching open questions in a defined, process-driven, algorithmic fashion, individuals and organisations are sometimes averse to them, and do not devote enough time to ensuring that their answers provide an adequate analytical audit trail for the closed questions that supervene on them. In the pathological case, this can lead to tunnel vision and concomitant unmitigated existential risk for organisations.

 "You see, what happened to me - what happened to the rest of us - is we started for a good reason, then you're working very hard to accomplish something and it's a pleasure, it's excitement.  And you stop thinking, you know; you just stop."
Richard Feynman, on working at Los Alamos