Minority Report explores a world where a technological breakthrough has allowed mankind to foresee violent crimes a few minutes before they take place. The predictions are infallible, except that (and here the logic must not be scrutinised too closely) a special police unit can race to the scene, intervene, and alter the course of the otherwise unalterable future. Naturally, the short forecast horizon makes for dramatic scenes.
It was the film of choice for the President of the Royal Society, Venki Ramakrishnan. Fittingly, Venki, a self-confessed film buff, couldn’t stay for the whole movie: he had to rush to the launch of the Royal Society’s report on machine learning and artificial intelligence that same evening. How close are we to the world of Minority Report? Big data and machine learning do allow us to extract new kinds of information. I’m reminded of Michelle Girvan’s Science-on-Screen event, where Michelle demonstrated how easily we can infer the romantic partner of a Facebook user from nothing more than the network of “friend” connections (no relationship status or messages required). But, unlike in Minority Report, such guesses will be correct only in most cases, not always.
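To make the idea concrete, here is a minimal sketch of partner inference from connection structure alone. The network below is invented for illustration, and the mutual-friend heuristic is a deliberately naive stand-in: published work on Facebook data (Backstrom and Kleinberg) relies on a subtler “dispersion” measure, but the principle, that the shape of the friendship graph alone carries the signal, is the same.

```python
# A toy friendship network, invented for illustration only.
friends = {
    "alice": {"bob", "carol", "dave", "erin"},
    "bob":   {"alice", "carol", "dave", "erin", "frank"},
    "carol": {"alice", "bob"},
    "dave":  {"alice", "bob"},
    "erin":  {"alice", "bob"},
    "frank": {"bob"},
}

def guess_partner(network, user):
    """Guess `user`'s partner as the friend with the most mutual friends.

    A naive heuristic: a partner tends to be embedded in many of the
    user's social circles, so they share many common neighbours.
    """
    def mutual_friend_count(candidate):
        return len((network[user] & network[candidate]) - {user, candidate})
    return max(network[user], key=mutual_friend_count)

print(guess_partner(friends, "alice"))  # → bob (shares carol, dave, erin)
```

In keeping with the article’s point, the guess is probabilistic: the heuristic picks the most likely candidate, and will sometimes simply be wrong.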
Is it morally acceptable to extract such information? Machine learning can be hugely beneficial; neural networks, for example, are excellent at identifying cancerous tumours in medical images. But what might be the consequences of handing ever more control, ever more intelligence, to machines? A self-driving car that has fewer accidents than a human-driven one may still have to make moral decisions: should it crash into a wall, injuring its passenger, or into a pedestrian who has accidentally stumbled into the road? Who is liable? The software engineer? The owner of the car? The owner’s insurance company?
Minority Report is an extreme version of predicting the future, one whose very reliability creates outright paradoxes; for this reason alone it belongs to the realm of fiction. But exploring such extreme cases, however unrealistic they may seem, helps us identify the issues, and the nature of those issues, that we face as our interaction with artificial intelligence intensifies.