The analytical resource-management task is fairly easy to state in cost-benefit terms. Analytical work on a topic has a cost, and an expected benefit: the extent to which it reduces uncertainty. It adds value because it might change a decision. If you have £10 to bet on a two-horse race at even odds (1-1), and you currently believe Horse A has a 60% probability of winning, you will (if you have to) bet on it, winning £10 60% of the time and losing £10 40% of the time, for a net expected profit of £2. If you are offered new information that will accurately predict the winning horse 90% of the time, your expected winnings will (whatever it tells you) rise to £8. This information therefore has a net value of £6, and if it costs less than that then you should buy it.
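The arithmetic of the two-horse example can be checked directly. The stake, prior and tipster accuracy below are the figures from the text; the function itself is just a generic sketch of "expected value of information", not a standard API.

```python
# Worked version of the two-horse bet: even odds, £10 stake.

def expected_winnings(p_correct: float, stake: float) -> float:
    """Expected profit from betting `stake` at even odds when we pick
    the winner with probability `p_correct`."""
    return p_correct * stake - (1 - p_correct) * stake

stake = 10.0
without_info = expected_winnings(0.60, stake)  # back the 60% horse unaided
with_info = expected_winnings(0.90, stake)     # tipster is right 90% of the time
value_of_info = with_info - without_info

print(without_info)   # 2.0
print(with_info)      # 8.0
print(value_of_info)  # 6.0 -> worth buying at any price below £6
```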
Although there is no easily-stated general solution to this problem, you can show how the optimal amount of resources to throw at an analytical problem changes, and roughly by how much, when the problem's parameters change. When the costs and risks associated with a problem rise, analysis becomes more valuable. When uncertainty increases (i.e. the probabilities associated with an outcome move away from 0 or 1), analysis becomes more valuable. When information becomes easier (cheaper) to gather, analysis becomes more valuable. The optimising organisation might attempt to measure these things and move resources accordingly - both between analytical topics (within analysis teams) and towards and away from analysis in general (within larger organisations of which only parts produce analysis).
Of course it's not as simple as that, and that's not very simple in the first place. It's expensive to move analytical resources, not least because it takes time for humans to become effective at analysing a new topic. This adds an additional dimension of complexity to the problem. But it surprises me that firms and other analysis organisations don't attempt explicitly to measure the value their analysis adds - perhaps by looking at the relative magnitude of the decisions it helps support - because among other things this would give them a basis on which to argue for more resources, and a framework to help explain why surprises were missed when they were.
Animals have evolved interesting solutions to this problem that we might learn from. As humans, we do this so naturally we barely notice it. In high-risk situations - while driving, negotiating a ski run, or when walking around an antiques shop - our information processing goes into overdrive at the expense of our other faculties such as speech or abstract thought (one theory suggests this is why time seems to slow down when a disaster is unfolding). Generally, humans sleep at night - switching information-processing to a bare minimum - the time when threat levels (from e.g. large predators) were lowest in the evolutionary environment.
A bat yesterday
Bats, however, make an interesting study because their information-gathering is easy to measure. When bats are searching for insects they emit sonar pulses at intervals of about 100ms. This enables them to 'see' objects up to 30-40 metres away, but updates their picture only ten times a second. In the final moments before capture (when the decision-informing benefit of frequent information is much higher) the pulse interval falls to 5ms - 200 updates a second - though each pulse only provides information about objects less than about a metre away. Sonar is expensive, though. A medium-sized bat needs about one moth an hour; this would rise to around twenty moths an hour if its sonar were kept switched on at full power. Bats have therefore found a solution to the problem of optimising information-acquisition from which analysis organisations perhaps have something to learn.
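The bat's two measured pulse intervals pin down its two sampling rates, and the same trade-off can be sketched as an adaptive-sampling rule: sample slowly while scanning, fast in the final approach. The interpolation policy below is purely illustrative (it is not a claim about bat physiology), and the 30-metre cutoff is an assumed parameter.

```python
# The two pulse intervals cited in the text, and the update rates they imply.
SEARCH_INTERVAL_S = 0.100   # 100 ms between pulses while searching
CAPTURE_INTERVAL_S = 0.005  # 5 ms in the final approach

search_rate = 1 / SEARCH_INTERVAL_S    # 10 updates per second
capture_rate = 1 / CAPTURE_INTERVAL_S  # 200 updates per second

def pulse_interval(distance_m: float) -> float:
    """Toy adaptive-sampling policy: pulse faster as the target nears,
    clamped between the two measured intervals. The 30 m scanning range
    is an assumption for illustration."""
    frac = min(max(distance_m / 30.0, 0.0), 1.0)  # 0 at 0 m, 1 at >= 30 m
    return CAPTURE_INTERVAL_S + frac * (SEARCH_INTERVAL_S - CAPTURE_INTERVAL_S)
```

A monitoring system could apply the same shape of policy, polling a data source more often as measured risk rises and dropping back to a cheap baseline rate otherwise.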