So I was very interested to see that, in order to work on just such questions as these, the new Bertrand Russell Professor of Philosophy at Cambridge University, Huw Price, is setting up a Centre for the Study of Existential Risk. In partnership with the Astronomer Royal, Martin Rees, and the co-founder of Skype, Jaan Tallinn, he is planning to do significant academic work in these areas of a kind that is not currently being undertaken. Jaan Tallinn is quoted as saying that he thinks he is more likely to die as the result of an AI accident than from cancer or a heart attack, and this should give us pause for thought. AI does pose a real and existential threat, and we should perhaps be putting far more resources into researching the possible kinds of AI that could arise, and into planning for both the near and the far future.
If it is true that many people in the UK are under-educated about mathematics and science, it is far truer to say that most of us do not have much of a clue about the spectrum of AI and what it encompasses. There are now online courses, such as the one run by Sebastian Thrun of Stanford University in conjunction with Peter Norvig, Research Director at Google, offering an introduction to the world of AI. There are as many definitions of the task of studying AI as there are philosophers and teachers, but a good starting point would be 'the science of making computer software that reasons about the world around it.'

The New York Times reported in February 2011 that a computer named 'Watson', designed by David Ferrucci and his team at IBM, had won the TV quiz Jeopardy. 'Trivial, it's not,' said the headline. The game depends on subtle language play and the ability to find answers in unlikely contexts, exactly the kind of thing that is difficult for computers to do. Ferrucci's team developed a technique called DeepQA, which takes advantage of raw computing power to range across staggering stores of information and pick out and sift anything that might be relevant. It replaces human subtlety with cues that help the system to understand questions and supply relevant answers at great speed. Watson does not 'think like a person', though it may appear to. 'The goal is to build a computer that can be more effective in understanding and interacting in natural language, but not necessarily in the same way that a human does it,' says Ferrucci. How right the NY Times was: this is not trivial, it is potentially hugely powerful, and we ought to pay more attention to where it is all leading.
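To make that retrieve-and-score idea concrete, here is a deliberately minimal sketch, not IBM's DeepQA, which is vastly more sophisticated: it ranges over a tiny invented document store, counts how much each document's text overlaps with the key terms of a question, and returns candidates ranked by that evidence score. All the names and documents below are made up for illustration.

```python
def tokenize(text):
    """Split text into a set of lowercased words, stripping punctuation."""
    return {w.lower().strip(".,!?'\"") for w in text.split()}

def rank_candidates(question, documents):
    """Rank candidate answers by keyword overlap between the
    question and each candidate's supporting evidence text."""
    q_terms = tokenize(question)
    scored = []
    for title, body in documents.items():
        overlap = len(q_terms & tokenize(body))  # shared key terms
        scored.append((overlap, title))
    # Highest-scoring candidate first
    return [title for overlap, title in sorted(scored, reverse=True)]

# A toy 'store of information' (invented for this sketch)
docs = {
    "Paris": "Paris is the capital of France, on the river Seine.",
    "Lyon": "Lyon is a large city in France known for gastronomy.",
    "Berlin": "Berlin is the capital of Germany.",
}

ranking = rank_candidates("What is the capital of France?", docs)
print(ranking[0])  # → Paris
```

The point of the sketch is the shape of the approach: no understanding is involved, only fast matching of surface cues against large amounts of stored text, which is exactly what Ferrucci means by effectiveness 'not necessarily in the same way that a human does it'.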
Man vs Robot, Lucy Jolin, Cam 68, March issue, Cambridge Alumni Magazine.
The Future of Computers - Artificial Intelligence, blog post on Networks and Servers, Rui Natario, networkandservers.blogspot.com.uk/