## Tuesday, 20 January 2015

### Are Hypotheses Instrumental or Fundamental?

At the most fundamental level, there is only one decision-problem.  You have an information-set (call it 'I').  You have a ranking function that tells you, for any outcome, how desirable it is: call this function 'U(x)', for utility, where 'x' is an outcome.  We could say that U takes states-of-the-world as its arguments, since in one sense it is the world I care about and am trying to affect.  But the model is immediately more parsimonious if we think of U as taking information-sets as its arguments: a decision-making agent only interacts with the world through information, and can only make inferences about the world using it, so it makes more sense to think of the decision-maker's goal as other information-sets it would like to reach.  And finally you have a decision-variable, representing the bundle of things it is in your power to decide the value of: call this bundle 'D'.

All decisions can be represented this way.  And the 'single unified decision-problem' is to choose the value of D so as to maximise the expected desirability of the outcomes.  Any value of the decision-variable - e.g. D=2 - maps to a probability distribution over a set of outcomes.  One or more values of the decision-variable will map to outcome distributions whose expected utility is at least as high as that of any other value of D: this is (or these are) the optimal decision(s).  This means we can represent the optimal decision as another function, D*(I, U(x)), which maps an information-set, together with a utility function, onto a value of D.  The decision is then determined only by the information at hand, and the objectives that we have represented by the utility function.
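The argmax structure described above can be sketched in a few lines of code.  This is a minimal toy instance, not anything from the original text: the outcome distributions and utility values are hypothetical, chosen only to show D* picking out the value of D with the highest expected utility.

```python
# A toy instance of the single unified decision-problem.
# Each value of the decision-variable D maps to a probability
# distribution over outcomes; D* selects the value of D whose
# distribution has the highest expected utility.

# Hypothetical mapping: D -> list of (outcome, probability) pairs.
outcome_dists = {
    1: [("good", 0.2), ("bad", 0.8)],
    2: [("good", 0.6), ("bad", 0.4)],
    3: [("great", 0.1), ("bad", 0.9)],
}

# Hypothetical utility function U over outcomes.
utility = {"great": 10.0, "good": 5.0, "bad": -1.0}

def expected_utility(dist):
    """Expectation of U under a distribution of (outcome, prob) pairs."""
    return sum(utility[outcome] * p for outcome, p in dist)

def optimal_decision(dists):
    """D*(I, U): the value of D maximising expected utility."""
    return max(dists, key=lambda d: expected_utility(dists[d]))

print(optimal_decision(outcome_dists))  # -> 2
```

Note that nothing in the sketch mentions hypotheses or beliefs either: the agent only needs the map from decisions to outcome distributions and the utility function.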

The interesting thing about this satisfying and abstract representation of the decision-problem is that nowhere in it do 'hypotheses' or 'beliefs' appear.

In the analytical literature, the hypothesis is a fundamental concept.  Probability is defined in terms of the connections between hypotheses: the probability of a hypothesis A, given some other hypothesis B, is precisely defined as the measure of the event-space in which both A and B are true, divided by the measure of the event-space in which B is true.  This idea is fundamental to Bayesian inference, information theory, statistics and so on.
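The event-space definition of conditional probability can be made concrete on a finite, uniform event-space.  The example below is my own illustration, not from the original text: two fair six-sided dice, with hypotheses expressed as predicates over the 36 equally likely outcomes.

```python
from fractions import Fraction

# All 36 equally likely outcomes of rolling two fair dice.
event_space = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def prob(hypothesis, given=lambda e: True):
    """P(A | B): events where both A and B hold, divided by
    events where B holds, on a uniform finite event-space."""
    b_events = [e for e in event_space if given(e)]
    ab_events = [e for e in b_events if hypothesis(e)]
    return Fraction(len(ab_events), len(b_events))

# Unconditional: P(sum >= 10) = 6/36 = 1/6.
print(prob(lambda e: e[0] + e[1] >= 10))

# Conditioned on the hypothesis 'first die shows 6': rises to 1/2.
print(prob(lambda e: e[0] + e[1] >= 10, given=lambda e: e[0] == 6))
```

Here conditioning on B simply shrinks the event-space to the region where B is true, exactly as the definition above describes.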

But if we don't need hypotheses to state the fundamental decision-problem, this suggests that hypotheses serve only an instrumental purpose in decision-making.  Some models of inference, such as Solomonoff induction, do dispense with hypotheses, but they are not designed as practical, implementable approaches to solving decision problems.  Perhaps hypotheses, and particularly the law-like ones that science aims to identify, act as a useful shorthand for a vast and complex range of possible information-sets that would be impossible to enumerate exhaustively.  Their role as mental representations of possible worlds suggests that our cognitive machinery may have evolved them for this reason.  It is an interesting question what sorts of stability must exist in the information environment for hypotheses to be a useful instrument for making decision-focused inferences.