Monday, 15 February 2016

Build your own Artificial Intelligence

Matchbox educable artificial intelligences (MEAIs) are machines made of matchboxes that learn to play simple games. They were invented by Donald Michie in 1960, and popularised by Martin Gardner in his Mathematical Games column for Scientific American in 1962. MEAIs have been built for noughts and crosses and for hexapawn (the latter being the game Gardner recommended for implementation). We've also created a kit for making an MEAI that learns to play the tiny mancala variant 'Micro-Wari' - you can download it here (it makes a good half-term holiday project).

MEAIs can help embed some useful ways of thinking about artificial intelligences in general. Each matchbox corresponds to a 'state' in which a decision has to be made. For most real-life problems (and indeed most games beyond the very simplest) there is a vast number of states, making MEAIs an impractical implementation method. But they help us conceive of an artificial intelligence as any machine that turns 'states' (or information sets) into decisions, with performance - the quality of the intelligence - gauged by how reliably certain 'desirable' states (e.g. winning games) are reached. Seen this way, we can learn to resist our near-compulsion to anthropomorphise artificial intelligences, and perhaps think about our own intelligences - their design trade-offs and clever workarounds - in a different way too.
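To make the 'states into decisions' idea concrete, here's a minimal sketch of a matchbox brain in Python. Each matchbox is a dictionary entry keyed by a game state, holding bead counts for each legal move; drawing a bead at random turns the state into a decision. The state and move names are illustrative placeholders, not any particular game.

```python
import random

def new_box(moves, beads_per_move=3):
    """A fresh matchbox: every legal move starts with the same bead count."""
    return {move: beads_per_move for move in moves}

def choose_move(box, rng=random):
    """Draw a bead at random; moves with more beads are proportionally
    more likely to be chosen."""
    moves = list(box)
    weights = [box[m] for m in moves]
    return rng.choices(moves, weights=weights, k=1)[0]

# One matchbox for one (hypothetical) state: the empty board.
brain = {"empty-board": new_box(["corner", "centre", "edge"])}
move = choose_move(brain["empty-board"])
```

The dictionary-of-dictionaries layout also makes the scaling problem visible: a full MEAI needs one matchbox per reachable state, which is manageable for noughts and crosses and hopeless for most real games.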

MEAIs also point us to the importance of learning in artificial intelligence design. Specifically, they highlight that learning is a way of converging on intelligent behaviour without needing to have that intelligent behaviour 'built in' by the designer. Learning is not necessary for artificial intelligence; a matchbox AI could simply be pre-programmed with optimal moves. However, learning becomes a more important design principle when the 'game' becomes sufficiently complex that working out optimal behaviour is simply too difficult. AIs which learn are much more interesting, because they can surprise us by finding innovative strategies.
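The learning rule itself is simple enough to sketch in a few lines: after each game, add beads to the moves that were played in a win, and take beads away after a loss, so the machine drifts towards better play without any strategy being built in. The reward and penalty sizes below are illustrative choices, not Michie's exact values.

```python
def reinforce(brain, history, won, reward=3, penalty=1):
    """Update bead counts after one game.

    brain   -- dict mapping state -> {move: bead_count}
    history -- list of (state, move) pairs played during the game
    won     -- True if the machine won, False if it lost
    """
    for state, move in history:
        box = brain[state]
        if won:
            box[move] += reward
        else:
            # Never remove the last bead, or the move becomes
            # unplayable forever even if it was only unlucky.
            box[move] = max(1, box[move] - penalty)

# A single hypothetical state with two moves, rewarded for playing 'a'.
brain = {"s0": {"a": 3, "b": 3}}
reinforce(brain, [("s0", "a")], won=True)
```

Repeated over many games, the bead counts converge so that good moves dominate each box - intelligent behaviour emerges from the update rule rather than from any built-in knowledge of the game.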

"Hi, I'm the Cyber Research Systems T-850 Terminator. I'm here to tell you that, by and large, articles about artificial intelligence that feature pictures of me or my colleagues are probably not worth reading. Except this one."

When we rely on learning to generate intelligent behaviour, our own understanding of the problem space becomes less important, but our specification of objectives (what constitutes 'winning') becomes more so. Among other things, the difficulty of specifying objectives for real-world behaviour (what does it mean to 'win' at life?) means that there are significant risks associated with the development of a strong, or general, artificial intelligence. A learning machine of sufficient power would, like its matchbox counterparts, eventually find the best way to achieve whatever it's been told to achieve, but in an unrestricted real-world setting. This means that if we told it to achieve the wrong things, or not to achieve the right things, we'd have no right to expect it to behave itself.
