Sir Richard Lambert told an audience at Warwick Business School that companies need to take responsibility for the consequences of the rise of the robots. Speaking in the first of the WBS 50th Anniversary Distinguished Lectures, held at WBS London at The Shard, Sir Richard outlined the threat to society posed by the increasing use of automation through machine learning, artificial intelligence and robots.

The Bank of England's chief economist Andy Haldane has warned that 15 million jobs in the UK are under threat from mass automation, almost half of those employed in the country. The possible destruction of so many jobs has led a number of academics, economists and prominent CEOs, like Tesla's Elon Musk, to predict that governments will have to hand out a universal basic income to citizens.

Sir Richard, who was Director General of the Confederation of British Industry (CBI) from 2006 to 2011, believes CEOs must make sure their companies shoulder their share of responsibility, either voluntarily or by force of regulation.
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower people? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than they are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
Alan Turing's seminal 1950 paper "Computing Machinery and Intelligence" posed the question "Can machines think?" Since then, machine learning (ML) has found its way into numerous processes, seeking to simplify our lives by making those processes smarter, better and faster. Financial risk management is an industry rife with opportunities for ML to disrupt in the coming years, one of the most obvious areas being credit scoring. In this blog we explore some of the main findings of the recently published Bank of England survey on ML, followed by our views on the challenges, and potential solutions, of implementing ML within a credit risk scoring framework. When we talk about the rise of ML in credit risk, we often forget that one of the earliest real-life use cases for ML was within this very industry.
From the Alan Turing Institute to DeepMind, the UK boasts a rich history and an exciting present in machine learning and artificial intelligence research and development, led by academia and industry. According to recent research, AI is the largest commercial opportunity for Britain, projected to add £232 billion to the UK economy by 2030. SMEs and start-ups will play a significant role in grasping this opportunity. The segment was highlighted by Theresa May during her speech at the World Economic Forum in Davos in January, where she told world leaders that the UK's strong start-up scene will be instrumental in making the UK a world leader in ethical AI. However, start-ups today still need to overcome some significant challenges before reaching their full potential.