The University of Cambridge has opened a £10 million research centre to explore the impact of artificial intelligence, Wired reports. The Leverhulme Centre for the Future of Intelligence -- first announced last December and funded with a research grant from the Leverhulme Trust -- will study the impacts of this "potentially epoch-making technological development, both short and long term." The centre's new website details a list of projects that its researchers will look at. The centre also writes on its website that its aim is to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world. Led by Cambridge philosophy professor Huw Price, the centre, which opened on Monday, will work in conjunction with the university's Centre for the Study of Existential Risk (CSER), which is funded by Skype cofounder Jaan Tallinn and looks at emerging risks to humanity's future including climate change, disease, warfare, and artificial intelligence.
A new center to study the implications of artificial intelligence and try to influence its ethical development has been established at the U.K.'s Cambridge University, the latest sign that concerns are rising about AI's impact on everything from loss of jobs to humanity's very existence. The Leverhulme Trust, a non-profit foundation that awards grants for academic research in the U.K., on Thursday announced a grant of £10 million ($15 million) over ten years to the university to establish the Leverhulme Centre for the Future of Intelligence. The new facility will be directed by Professor Huw Price, the university's Bertrand Russell Professor of Philosophy. Others on the team include political scientists, lawyers, psychologists and technologists, said Prof. Gordon Marshall, the director of the Leverhulme Trust. The Trust sprang out of a company that now is part of Unilever.
The UK is about to further boost its status as a world-leading centre for artificial intelligence (AI) with the launch of a new centre in Cambridge which is set to explore the implications the technology holds for humans. Officially opened by Professor Stephen Hawking on Wednesday, the £10m Leverhulme Centre for the Future of Intelligence (CFI) will bring together top researchers from across computer science, philosophy, social sciences and other disciplines such as law and politics to explore the development of AI and both the opportunities and challenges it brings. "Success in creating AI could be the biggest event in the history of our civilisation," said Hawking. "But it could also be the last – unless we learn how to avoid the risks."
AI systems are now used in everything from the trading of stocks to the setting of house prices; from detecting fraud to translating between languages; from creating our weekly shopping lists to predicting which movies we might enjoy. This is just the beginning. Soon, AI will be used to advance our understanding of human health through analysis of large datasets, help us discover new drugs and personalise treatments. Self-driving vehicles will transform transportation and allow new paradigms in urban planning. Machines will run our homes more efficiently, make businesses more productive and help predict risks to society.
In a lecture at the University of Cambridge this week, Stephen Hawking made the bold claim that the creation of artificial intelligence will be "either the best, or the worst thing, ever to happen to humanity". The talk celebrated the opening of the new Leverhulme Centre for the Future of Intelligence, where some of the best minds in science will try to answer questions about the future of robots and artificial intelligence - something Hawking says we need to do a lot more of. "We spend a great deal of time studying history," Hawking told the lecture, "which, let's face it, is mostly the history of stupidity." But despite all our time spent looking back at past errors, we seem to make the same mistakes over and over again. "So it's a welcome change that people are studying instead the future of intelligence," he explained.