Some of these theories contradict each other, but others are mutually consistent. Some (it will turn out) are difficult to make internally coherent by themselves. But all have at least an intuitive appeal, and we will consider each in turn over the next few posts to ask:
- Is the interpretation internally coherent? Can the thing it purports to describe be made meaningful, and how?
- Does the interpretation accord with what analysts and their customers, according to survey and other evidence, actually intend to convey when they use confidence language?
- Is the interpretation decision-relevant? In other words, does or should the information affect optimal decision-making, and how? Is it therefore something we should be communicating to decision-makers, and if so, how?
1: The 'Uncertain Probability' Theory
The 'uncertain probability' theory holds that 'confidence' expresses uncertainty about the probability estimate itself - a probability about a probability, as in the line:

*"Doctors say he's got a fifty-fifty chance of living, though there's only a ten percent chance of that."*
2: The 'Information' Theory
The 'information' theory of confidence suggests that it measures the amount of information you have access to. The more information you have, the higher confidence you will have in your judgement. This idea is premised on the concept that 'information quantity' is meaningfully separable from probability - whether this is possible is something we'll examine.
3: The 'Ignorance' Theory
The 'ignorance' theory is a counterpart to the 'information' theory, and posits that confidence is instead related to how much information you reasonably think you don't have. In other words, if you think you've seen everything useful relating to a hypothesis, you will report higher confidence than if you think there is still information out there that will have a bearing on it. Like the 'information' theory, this idea presupposes that 'quantity of information' can be separated from probability, and will prove problematic for the same reason.
4: The 'Quality' Theory
This idea proposes that 'confidence' reflects a basket of qualitative indicators - the credibility, reliability and so on of the information used to form the judgement in question. This notion hinges on the idea that there are qualitative differences between types of evidence which are important to convey, but which are somehow not reflected in the probabilities of the hypotheses they support.
5: The 'Prior Weight' Theory
This somewhat more technical suggestion proposes that 'confidence' captures the degree to which a judgement is formed using prior probabilities - informally, 'background knowledge' - rather than the likelihood ratio (or 'diagnosticity') of subject-specific evidence. If this theory is true, low-confidence probabilities will sit close to statistical priors, while high-confidence probabilities will diverge considerably from them as a result of highly diagnostic evidence.
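To make the mechanics concrete, here is a minimal sketch of the odds form of Bayes' rule, under which a likelihood ratio near 1 leaves the posterior pinned to the prior, while a large one pulls it away. The 5% base rate and the likelihood ratios below are purely illustrative, not drawn from the post:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Combine a prior probability with a likelihood ratio (odds form of Bayes' rule)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

base_rate = 0.05  # illustrative 'background knowledge' prior

# Weakly diagnostic evidence (LR near 1) barely moves the posterior:
print(bayes_update(base_rate, 1.2))   # ~0.06: posterior stays close to the prior ('low confidence')
# Highly diagnostic evidence dominates the prior:
print(bayes_update(base_rate, 50.0))  # ~0.72: posterior diverges sharply from the prior ('high confidence')
```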
6: The 'Expected Value' Theory
The 'expected value' theory is that confidence relates to the anticipated economic value of new items of information. This theory suggests that, all things being equal, if we face a high-cost or high-risk decision, and thus if new information would be likely to add more expected value to our decision, we will have lower confidence in our judgement. Conversely, if our decision is of little consequence, we will treat the judgement more confidently. This proposal is similar to the 'ignorance' theory, but considers not the 'amount' of missing evidence so much as its value in terms of the fundamental characteristics of the decision being made.
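One way to make this concrete is the standard 'expected value of perfect information' (EVPI) calculation from decision theory. The sketch below assumes a hypothetical two-action, two-state decision; the payoff numbers are invented purely for illustration:

```python
# Hypothetical payoff table: action -> (payoff if hypothesis true, payoff if false).
payoffs = {
    "act":  (100.0, -40.0),
    "hold": (-80.0,   0.0),
}

p_true = 0.3  # illustrative current probability of the hypothesis

def expected_payoff(action: str, p: float) -> float:
    """Expected payoff of an action, given probability p that the hypothesis is true."""
    if_true, if_false = payoffs[action]
    return p * if_true + (1.0 - p) * if_false

# Best expected payoff acting on current information only:
best_now = max(expected_payoff(a, p_true) for a in payoffs)

# With perfect information, we would pick the best action for whichever state obtains:
best_if_true = max(v[0] for v in payoffs.values())
best_if_false = max(v[1] for v in payoffs.values())
with_perfect_info = p_true * best_if_true + (1.0 - p_true) * best_if_false

evpi = with_perfect_info - best_now
print(f"EVPI = {evpi:.1f}")  # 28.0: the most that resolving the uncertainty is worth here
```

On this reading, a large EVPI relative to the stakes corresponds to low confidence: new information would be worth a lot, so the judgement is treated as less settled.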
7: The 'Expected Cost' Theory
In the next post, we'll look at the first of these proposals: the 'Uncertain Probability' theory. This will involve a dive into the intriguing and subtle distinction between frequency and probability that lies behind a philosophical and statistical debate that has been smouldering for centuries.