Sunday, 21 February 2016

Running Away from the Spaceship

The 2012 Ridley Scott film Prometheus makes instructive viewing for anyone interested in decision-making. (For those who haven't seen it, the rest of this post will be slightly spoilery.) Setting aside higher-level questions like "why did they decide to make this film?", the story itself is full of terrible decisions made by the characters. For example:

  • Why did the crew sign up to a long space voyage without asking what the mission was?
  • Why did the Weyland Corporation not conduct any apparent crew screening or allow the crew to meet one another before launch?
  • Why didn't they send remote probes to scan the facility before landing on the planet and sending human crewmembers in?
  • Why did they take their helmets off?
  • Why did the Prometheus crew decide to go to bed instead of staying in contact with Fifield and Milburn overnight?
  • Why did no-one ask Dr Shaw why she was covered in blood?
  • Why didn't David investigate what had happened to the alien foetus?

People do crazy things though. The Prometheus mistakes were arguably no worse than those made during the Darien Scheme expeditions, and there's no particular reason to think that hubris and bad decisions will be a thing of the past by 2090.

One allegedly bad decision in the film was this:


That's Vickers, played by Charlize Theron, running away from a big round alien spaceship that's rolling towards her. Internet wisdom has it that the best strategy for avoiding a big round alien spaceship that's rolling towards you is to run sideways - i.e. orthogonal to the motion of the spaceship. But this is not necessarily true, and it consequently calumniates Vickers and Shaw's capabilities vis-à-vis the avoidance of big round alien spaceships. Yet the myth that you should run sideways has spawned proto-memes such as 'The Prometheus School of Running Away from Things', and diagrams such as this one on TV Tropes, which in fact promotes the least efficient escape method (a diagonal route would be quicker; and, assuming you can't outpace the spaceship, by the time you've stopped running in the direction of its movement you're strictly worse off than when you started).


DO NOT heed this terrible advice (pic: TV Tropes)
In fact, the problem of avoiding big round alien spaceships generalises easily to the problem of avoiding any object that's moving towards you, and it's a much more interesting problem than it's generally given credit for. There are edge cases in which the solution is straightforward. In the case where running sideways will get you to safety, it's clearly a good choice. And where you can actually outrun the object, you can simply outpace it until you have enough of a headstart, and then make your break sideways. This won't be the most efficient solution (see below), but it works.

Where these conditions don't hold - when you can't go faster than the rolling object, and when running sideways will get you squashed - the problem gets more fun. There are two questions we want to ask: (1) can you escape the path of the object? And (2) what's the most efficient escape path? It's actually easier to start with (2), because if the most efficient escape path gets you to safety, the answer to (1) is 'yes', and otherwise it's 'no'. Assuming we don't have to worry about dynamic effects (e.g. an accelerating object, or getting tired), the problem can be modelled like this:



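For concreteness, here's the same model as a minimal Python sketch (the names are my choosing: 'd' for your headstart, 'v' for your top speed, 'V' for the object's speed, with the angle measured from the object's direction of motion):

```python
from math import cos, sin, radians

def time_to_catch(a_deg, d, v, V):
    """Time until the object's front reaches you, if you run at angle a_deg
    (0 = same direction as the object's motion, 90 = directly sideways).
    Assumes V > v, i.e. you can't simply outrun the object."""
    return d / (V - v * cos(radians(a_deg)))

def sideways_distance(a_deg, d, v, V):
    """How far out of the object's path you get before it catches you up."""
    return v * sin(radians(a_deg)) * time_to_catch(a_deg, d, v, V)
```
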
The problem you have, assuming you want to run as fast as possible (i.e. that 'v' is not a control variable - though you'd get style points for walking casually instead of running), is to select the angle 'a' at any instant. This means, perhaps remarkably, that the problem can be modelled in terms of a conic section.



With time on the z-axis, the 'cone' represents the maximum distance reachable from your point of origin, while the intersecting plane represents the front of the moving object. The intersecting ellipse joins together the points where the object has caught you up; for survival to be possible, this ellipse needs to be wider than the object. Who would guess that conic sections, possibly the most useless thing in the pure maths pantheon, could help you escape a rolling spaceship?
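
To make the geometry concrete, here is the construction in symbols (the coordinates are my choice, not the film's or the original figure's): put your starting point at the origin, let the object roll in the +x direction at speed V with its front starting a headstart d behind you, and let your top speed be v. You can reach the point (x, y) by time t if and only if \(\sqrt{x^2+y^2} \le vt\), while the object's front sits at \(x = Vt - d\). On the boundary where it catches you exactly,

\[ \frac{\sqrt{x^2+y^2}}{v} = \frac{x+d}{V} \quad\Longrightarrow\quad V^2(x^2+y^2) = v^2(x+d)^2, \]

which, since V > v, is an ellipse: the cone \(\sqrt{x^2+y^2} = vt\) sliced by the plane of the moving front.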

The first thing we can prove is that if there is an optimal path that will take you to safety, it must be a straight line. Broadly: if the point of safety (on the right-hand edge of the object's path) can be reached at all, then a straight line is a safe way of reaching it (you can prove this, but we won't do it here), and a straight line is obviously the fastest route there. So zig-zagging, logarithmic curves and sigmoids are thankfully not required. This simplifies the problem immensely, as it means there is only one value of the decision variable 'a', holding for the whole duration of the escape. Finding this value, however, is nowhere near as straightforward as stating the problem: as you might expect with things involving ellipses and boundary conditions, the optimisation conditions are a huge mess.

A much easier problem, whose solution simplifies nicely, is which angle you should choose to get as far to the right as possible before the object catches up with you. In terms of the 'ellipse of possibility', this is the question of which angle you need to head in to reach its widest point. It won't give you the optimal route, but if you're not concerned about a few seconds here or there, it has the advantage that if you can make it to safety at all, this rule will definitely get you there. We won't derive it here, but the answer is satisfying, as it turns out to depend only on the object's speed relative to yours. The angle of escape that puts you the furthest out of the object's way by the time it catches up with you turns out to be:

a* = arccos(1/γ)
where γ (gamma) is the object's speed divided by your maximum speed. Here it is as a graph, expressed in degrees, if you want to refer to it the next time you're in danger of being crushed by a large rolling object:


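If you'd rather generate the graph yourself, here's a minimal matplotlib sketch of the same curve (the parameter range is chosen arbitrarily):

```python
import numpy as np
import matplotlib.pyplot as plt

gamma = np.linspace(1.0, 10.0, 200)          # object speed / your top speed
angle = np.degrees(np.arccos(1.0 / gamma))   # optimal escape angle a*

plt.plot(gamma, angle)
plt.xlabel("gamma (object speed / your top speed)")
plt.ylabel("escape angle a* (degrees)")
plt.show()
```
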
Intuitively, the faster the object is going relative to you, the less your angle matters, since it has proportionally less effect on the time the object takes to catch you - so you should run increasingly perpendicular to its motion. The more slowly the object is going, the more extra time you buy by running in the same direction as its motion, and so the more of your velocity should be invested in doing so. So if you're about to be hit by a car or a bullet, don't bother trying to outrun it; but for slower things it certainly isn't always optimal to run directly perpendicular to the motion of the object.

What about the Prometheus situation? We can only guess at the parameter values; in different shots they seem to change significantly, and the 'magic countdown' may also be in effect. The engineer ship looks to be rolling at around 20m/s. Assuming the crew can run at about 6m/s, this gives us a gamma of a little over 3, for an escape angle of just over 70 degrees. With an apparent headstart of about 60m, this means that by the time the ship has caught up with them, a 70-degree escape path would have moved them about 19m sideways. That's a whole metre more than the 18m they would have managed by running directly perpendicular to the ship's motion, and might have made the difference between life and death.
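
Those figures are easy to check (a quick sketch; the 60m, 6m/s and 20m/s values are, as above, guesses from the film):

```python
from math import acos, cos, sin, degrees, pi

d, v, V = 60.0, 6.0, 20.0        # headstart, crew speed, ship speed (all guesses)
gamma = V / v                    # about 3.3
a_star = acos(1.0 / gamma)       # optimal escape angle, in radians

def sideways(a):
    """Sideways distance gained before the ship's front catches you."""
    t = d / (V - v * cos(a))     # time until the front catches up
    return v * sin(a) * t

print(degrees(a_star))           # ~72.5 degrees ("just over 70")
print(sideways(a_star))          # ~18.9m on the optimal path
print(sideways(pi / 2))          # ~18.0m running directly sideways
```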

The 'running away from something' problem is a very common category of decision, in which we face a trade-off between two variables that work multiplicatively. In this case we are trading time against velocity: the more perpendicular we run, the faster we go in the direction we care about, but the less time we have before the object catches up with us. Many other decisions, including some major ones, share this structure - such as how hard we want to work, compared with how well we want to live. The spaceship is rolling towards all of us, and the faster we try to move out of its way, the sooner it will catch us up.

Monday, 15 February 2016

Build your own Artificial Intelligence

Matchbox educable artificial intelligences (MEAIs) are machines made of matchboxes that learn how to play simple games. They were invented by Donald Michie in 1960, and popularised by Martin Gardner in his Mathematical Games column for Scientific American in 1962. MEAIs have been created for noughts and crosses and hexapawn (the latter being the game Martin Gardner recommended for implementation). We've also created a kit for making an MEAI that learns how to play the tiny mancala variant 'Micro-Wari' - you can download it here (it makes a good half-term holiday project).



MEAIs can help embed some useful ways of thinking about artificial intelligences in general. Each matchbox corresponds to a 'state' in which a decision has to be made. For most real-life problems (and indeed most games other than very simple ones) there are a vast number of states, making MEAIs an impractical implementation method. But by helping us conceive of an artificial intelligence as any machine that turns 'states' (or information sets) into decisions - with performance, the quality of the intelligence, gauged by how reliably certain 'desirable' states (e.g. winning games) are reached - we can learn to avoid our near-compulsion to anthropomorphise artificial intelligences, and perhaps think about our own intelligences, their design trade-offs and clever workarounds, in a different way too.

MEAIs also point us to the importance of learning in artificial intelligence design. Specifically, they highlight that learning is a way of converging on intelligent behaviour without needing to have that intelligent behaviour 'built in' by the designer. Learning is not necessary for artificial intelligence; a matchbox AI could simply be pre-programmed with optimal moves. However, learning becomes a more important design principle when the 'game' becomes sufficiently complex that working out optimal behaviour is simply too difficult. AIs which learn are much more interesting, because they can surprise us by finding innovative strategies.
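
To make this concrete, here is a minimal sketch of a matchbox-style learner in Python (the class name, state encoding and reward scheme are mine, not Michie's exact design): each 'matchbox' is a dictionary entry holding beads for each legal move, moves are drawn in proportion to the beads, and beads are added or confiscated depending on the result.

```python
import random
from collections import defaultdict

class MatchboxPlayer:
    """One 'matchbox' per game state; bead counts weight each legal move."""

    def __init__(self):
        self.boxes = defaultdict(dict)   # state -> {move: bead count}
        self.history = []                # (state, move) pairs from this game

    def choose(self, state, legal_moves):
        box = self.boxes[state]
        for move in legal_moves:         # a fresh box starts with one bead per move
            box.setdefault(move, 1)
        moves, beads = zip(*box.items())
        move = random.choices(moves, weights=beads)[0]
        self.history.append((state, move))
        return move

    def learn(self, won):
        """Reinforce every move of a won game; punish those of a lost one."""
        for state, move in self.history:
            if won:
                self.boxes[state][move] += 1
            elif self.boxes[state][move] > 1:
                self.boxes[state][move] -= 1
        self.history.clear()
```

Wired up to noughts and crosses, hexapawn or Micro-Wari (with states as board positions), the bead counts drift towards the better moves over a few hundred games - which is all the 'learning' amounts to.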

"Hi, I'm the Cyber Research Systems T-850 Terminator.
I'm here to tell you that by-and-large, articles about
artificial intelligence that feature pictures of me or my
colleagues are probably not worth reading. Except this one."
When we rely on learning to generate intelligent behaviour, our own understanding of the problem space becomes less important, but our specification of objectives (what constitutes 'winning') becomes more so. Among other things, the difficulty of specifying objectives for real-world behaviour (what does it mean to 'win' at life?) means that there are significant risks associated with the development of a strong, or general, artificial intelligence. A learning machine of sufficient power would, like its matchbox counterparts, eventually find the best way to achieve whatever it's been told to achieve, but in an unrestricted real-world setting. This means that if we told it to achieve the wrong things, or not to achieve the right things, we'd have no right to expect it to behave itself.

Wednesday, 3 February 2016

Groundhog Day, Anthropic Bias

Yesterday was Groundhog Day, but luckily today is 3 February and not 2 February again. The much-loved Harold Ramis film is the best-known, but not the first, work to deal with the topic of a time loop: previous examples include Philip K Dick's 1964 novel Martian Time-Slip and the 1973 short story '12:01 PM', but arguably also Beckett's Waiting for Godot and Joyce's Finnegans Wake; philosophical allusions to the idea can be found in Nietzsche's 'eternal return' thought experiment and the Poincaré recurrence theorem; and one putative explanation for the Mandela Effect is that apparently-false memories are palimpsests of previous iterations of the world in which they were true.

Perhaps the biggest puzzle in Groundhog Day is why Bill Murray's character, Phil Connors, apparently has a memory that - uniquely, at least among the characters we meet in Punxsutawney - is immune from the loop and doesn't reset along with everything else. This raises some questions. Do other brain-states carry over? We see him get visibly depressed over several iterations, suggesting that they do. But then would he also be vulnerable to degenerative disorders such as dementia? (Estimates of the total time Connors spends in the loop have ranged up to about thirty years, so it's possible that he was lucky enough to leave the loop before neural degeneration became an on-screen issue.) When would his memory capacity become a constraint? And, upon returning home, would Connors regard his acquaintances as people he hadn't seen for many, many years? One unorthodox and startling interpretation of the story has Connors in multiple parallel universes, not actually reliving the same day but merely believing that he was, due to a 'memory leak' across the fifth dimension - an idea reminiscent of Derek Parfit's teletransporter thought experiments, which were given fictional expression in Christopher Nolan's 2006 film The Prestige.

More troublingly, if Connors' memory did reset every day, how would he - or anyone else - know? Would it make sense to say that a time loop was happening at all? A similar kind of puzzle was tackled by Sydney Shoemaker in his 1969 paper 'Time Without Change'. Shoemaker considers whether it makes sense to talk of time passing in a completely still universe, and concludes that under some circumstances it would. Could we also talk - even in theory - about time 'passing' in a Groundhog-Day-type time loop? On which scale?


Time loops - memoryless recurrences - are one way of characterising a thought experiment called 'Sleeping Beauty', which, for such a simple-sounding problem, has engaged and divided a number of distinguished thinkers. The canonical thought experiment involves a lady undergoing a rather invasive procedure with sedatives and induced amnesia. But it might be easier to understand if expressed in Groundhog Day terms.

Sleeping Beauty is about to undergo the following procedure: on 1 February, a coin will be tossed, the result being kept secret from her. If it's heads, she will experience a single-day time loop on 2 February, with no memories carried over to the second pass through the day; if it's tails, she'll just experience 2 February once. The time loop that occurs if heads is tossed happens only once - after the second 2 February, she'll awake on 3 February and all will be well. The puzzle is this: when she awakes on 2 February, what probability should she place on the hypothesis that the coin came down 'heads'?

Some initial analyses of this problem concluded that Beauty's assessed probability of heads should remain at 50%, on the basis that she gets no new information when she awakes. But analytical consensus is now converging on the probability of heads being two thirds, on the basis (broadly) that 'heads' is sampled twice but 'tails' only sampled once. This approach has been elucidated in Nick Bostrom's Anthropic Bias, and says essentially that the fact we exist is information in itself, and that we should form beliefs as though we are drawn at random from the pool of possible observers. This has ramifications far beyond mere thought experiments: reasoning along these lines can enable us to make judgements, among other things, about the nature of the cosmos, the possibility of extra-terrestrial life, and, perhaps most importantly, the fact that your Facebook friends are, on average, more popular than you are.
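
The 'sampled twice' reasoning is easy to check by simulation: toss the coin many times, count one awakening for each tails-day and two for each heads-day (one per pass through the loop), and ask what fraction of awakenings happen in a 'heads' world. This is a sketch of the thirder-style counting, not, of course, a resolution of the philosophical dispute:

```python
import random

def heads_fraction_of_awakenings(n_tosses=100_000):
    heads, tails = 0, 0
    for _ in range(n_tosses):
        if random.random() < 0.5:   # heads: 2 February happens twice
            heads += 2
        else:                       # tails: 2 February happens once
            tails += 1
    return heads / (heads + tails)

print(heads_fraction_of_awakenings())   # ~0.667: two thirds of awakenings follow heads
```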

Worryingly, this reasoning would seem to suggest that we are almost surely trapped in an infinitely-repeated Groundhog Day loop. But as long as our memories aren't reset-immune like Phil Connors', who cares?