|Overflows are simple but effective control systems|
Photo: Myles Smith
Simple control systems are everywhere. Overflows regulating water levels, governors on steam engines, crossguards on swords stopping the hand from sliding onto the blade, thermostats switching heating on and off, and fuses burning out in the event of a power surge are all simple mechanical control systems. They fulfil design objectives by causing a difference in system behaviour when certain thresholds are met. Thanks to cheap processors, electronic control systems have become ubiquitous (do people even use the term 'smart phone' or 'smart TV' any more?), but this feels like cheating: the ingenuity of the mechanical systems is so much more striking.
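A thermostat's entire "intelligence", for instance, fits in a few lines. Here is a minimal sketch of threshold-based (bang-bang) control with hysteresis; the function name, setpoint, and band width are all illustrative, not drawn from any particular device:

```python
def thermostat_step(temp, heating_on, setpoint=20.0, band=0.5):
    """Return the new heater state given the current temperature.

    The hysteresis band prevents rapid on/off cycling right at the
    setpoint: the heater switches on below setpoint - band, off above
    setpoint + band, and otherwise keeps its previous state.
    """
    if temp < setpoint - band:
        return True   # too cold: heat
    if temp > setpoint + band:
        return False  # too warm: stop heating
    return heating_on  # within the band: no change
```

Notice there is no representation of "the temperature" anywhere inside, only a threshold comparison that changes the system's behaviour.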
Living organisms have - or perhaps are - abundant control systems. From plant photoreceptors through to the human brain, evolution has found a wealth of solutions to the problem of getting an organism to reproduce, via the acquisition of energy. But from a user perspective, the brain doesn't feel like a mechanical control system. Overflow pipes don't form beliefs about the water level, thermostats don't gather evidence about the temperature, fuses don't make a decision when they burn out, and the traffic lights certainly didn't notice that they weren't melting snow any more. Our goal-seeking behaviour, in contrast, seems to be mediated through hypotheses about the world - we entertain cognitive models of the world, attach probabilities to them, run simulations with them, become attached to them, and try to bring them about or avert them.
|Are we just a more sophisticated version of Dr Nim?|
The most interesting possibility, however, is that while hypotheses do represent a genuinely distinct cognitive technology from purely mechanical systems, they are not fully general, and that there are ways to design an all-purpose, all-environments decision-maker that don't involve anything like hypotheses. Perhaps our present cognitive limitations simply prevent us from imagining it?
A lot depends on which of these possibilities is true. At present, we can understand why artificially-intelligent systems work, even if not exactly how. It is not fallacious to describe (for example) face recognition algorithms as essentially constructing and testing hypotheses about face-possibilities within images. If artificial intelligences greater than ours operate on the hypothesis paradigm then we will still be able to understand why they work, impressed though we might be with their speed of operation and the complexity of those hypotheses. But if the hypothesis-based approach is itself surpassed by a better cognitive technology, we might find ourselves at much more of a loss.
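That hypothesis-testing description can be caricatured in a few lines. The following is a toy sliding-window sketch, not any real face recognition algorithm: each window position is a hypothesis ("there is a face here"), `score_fn` stands in for whatever learned model weighs the evidence, and hypotheses that clear a threshold survive:

```python
def detect(image, window, score_fn, threshold=0.5):
    """Scan a 2D grid of pixel values; return surviving hypotheses
    as (top, left, score) triples. Purely illustrative."""
    h, w = len(image), len(image[0])
    hypotheses = []
    for top in range(h - window + 1):
        for left in range(w - window + 1):
            patch = [row[left:left + window]
                     for row in image[top:top + window]]
            score = score_fn(patch)      # evidence for this hypothesis
            if score >= threshold:       # hypothesis passes the test
                hypotheses.append((top, left, score))
    return hypotheses
```

The point of the caricature is that we can say *why* this works - it proposes candidate world-states and checks them against evidence - even when the scoring model inside is opaque to us.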