*encapsulated* by probability (the information-theoretical definition) or are vulnerable to arbitrary manipulation using redundant material (as with the 'in-tray' model).

The 'expected value' theory that we looked at last time is an attempted refinement of the 'ignorance' theory that relates confidence not to how *much* information is as-yet untapped (this is implicitly captured by a statement's probability), but to how *valuable* that information is likely to be, which depends on current uncertainty and the benefits and risks associated with the decision that needs to be made.

The 'expected cost' theory, which is the last one we'll be looking at in this series, adds an extra dimension to this theory by incorporating the expected *cost* of information. The idea is that if new information is relatively *costly*, we will have more confidence than if new information is relatively *cheap*. This theory captures the same intuition driving the 'ignorance' theory - that confidence is about analytical 'due diligence' - but in a more sophisticated way. How does it stand up to closer examination?

#### Is it coherent?

*Wikipedia: the low-hanging fruit of the information orchard*

'Cost' does not always (or indeed usually) have to mean financial cost. Collecting information has an indirect time cost, and may have 'psychic' costs - the technical term for the unpleasant mental effort involved in consciously assimilating and processing information. The cost of information is not, in practice, usually constant: each unit is likely to *cost more* than its predecessor, assuming information is being collected in a roughly sensible way so that the low-hanging informational fruit (Wikipedia, the paper, what your colleague thinks) is picked before the harder-to-reach stuff (academic journals, paywalled data, secret intelligence).

This rising marginal *cost* of information (per bit) is mirrored by the diminishing marginal *value* of information (per bit). As we saw when we looked at the 'expected value' theory, information exhibits *diminishing returns* in terms of the expected value of a decision. Adding additional bits of information changes the (decimal) probability of a hypothesis by ever-decreasing amounts as it approaches either 0 or 1. Since expected rewards (or costs) of a decision are (more or less) *linearly* related to the decimal probability, each subsequent bit of information is worth less than the previous bit.
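To make the diminishing-returns point concrete, here is a minimal Python sketch (an illustration of my own, assuming evidence is measured as log-odds in bits): each additional bit of evidence doubles the odds, but moves the decimal probability by less and less as it approaches 1.

```python
def prob_from_bits(bits: float) -> float:
    """Convert log-odds measured in bits into a decimal probability."""
    odds = 2.0 ** bits
    return odds / (1.0 + odds)

# Starting from even odds (0 bits, p = 0.5), each additional bit of
# evidence doubles the odds but shifts the probability less each time.
previous = prob_from_bits(0)
for bits in range(1, 7):
    p = prob_from_bits(bits)
    print(f"{bits} bits: p = {p:.4f} (change: +{p - previous:.4f})")
    previous = p
```

Running this shows the first bit moving the probability from 0.5 to about 0.667, while the sixth bit adds only around 0.015 - the same 'unit' of information, worth far less.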

This means that if we bolt the 'expected value' and 'expected cost' theories together, we get a theory of confidence that turns it neatly into an optimal information-searching metric. Optimal information searching means continuing to collect information until the benefits of continuing to do so are outweighed by the cost; at that point, you should make a decision. If 'confidence' is higher when the expected value of new information is low, or its cost is high, we will tend to see confidence *rising* as new information comes in, because of diminishing returns and rising costs, until we are at a point where further investigation would be counter-productive.
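The stopping rule described above can be sketched in a few lines of Python. The value and cost curves here are illustrative assumptions (diminishing marginal value, rising marginal cost), not measured quantities; the point is just the structure of the rule - collect while the next unit of information is expected to be worth more than it costs.

```python
def marginal_value(n: int) -> float:
    # Assumed diminishing returns: each successive unit is worth less.
    return 10.0 / (n + 1)

def marginal_cost(n: int) -> float:
    # Assumed rising costs: cheap sources are exhausted first.
    return 1.0 + 0.5 * n

def units_to_collect() -> int:
    """Collect information while its marginal value exceeds its marginal
    cost; stop (and decide) at the crossover point."""
    n = 0
    while marginal_value(n) > marginal_cost(n):
        n += 1
    return n

print(units_to_collect())  # with these curves, stop after 3 units
```

Under the 'expected cost' theory, confidence at the crossover point is high precisely because further searching would cost more than it returns.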

The 'expected cost' theory also has an appealing link to a metaphysical distinction, often made by intelligence analysts, between 'secrets' and 'mysteries'. 'Secrets' are things that *someone* knows (but not you), while 'mysteries' are things that *no-one* knows, and in broad terms, intelligence analysts feel that their job is to unearth or make inferences about *secrets*, but not necessarily to worry about *mysteries*. This distinction, of course, has nothing other than a very superficial foundation: what makes information contained in a *human brain* any more special than that contained in any other system? But when situated in a more general theory about the *cost* of obtaining information, it begins to make *some* sense; assuming it's relatively easy to get information *out* of a human (compared to, say, getting it out of seismic activity or the economy), the distinction between secrets and mysteries maps neatly onto the distinction between cheap information and expensive information.

#### Does it accord with usage?

In general, analysts seem *not* to think that their appraisal of confidence is connected to the expected cost of further information. One of the questions in our survey was designed to test the connection between consequence, cost and confidence; it concerned the possibility of a bridge failure in either an isolated or a busy location, and one of the variables was the cost of additional information (a quick site visit compared to an expensive structural survey). The *consequences* mattered: confidence was lower when the bridge was on a busy commuter route. But the cost made no difference to people's confidence assessments. Just as tellingly, no-one proposed a cost-driven measure of confidence when given the opportunity to define it in their own words.

#### Is it decision-relevant?

The expected cost of further information *is* decision-relevant. As laid out above, it is one half of the equation defining optimal information searching (the other being the expected value of that information).

#### Summary

The 'expected cost' theory of confidence is both decision-relevant and conceptually coherent. Its only drawback is that nobody believes that it's actually what they *mean* by 'confidence'.

In the next - and final - post in this series, we'll summarise the arguments and discuss what they mean for the prospects of a standard, cogent, workable metric that could be used by analysts to communicate confidence to customers alongside their probabilistic judgements.