The impact of machine learning and AI on the UK economy


A recent virtual event addressed the potential impact that machines imbued with artificial intelligence may have on the economy and the financial system. The event was organised by the Bank of England, in collaboration with CEPR and the Brevan Howard Centre for Financial Analysis at Imperial College. What follows is a summary of some of the recorded presentations; the full catalogue of videos is available on the Bank of England's website. In his presentation, Stuart Russell (University of California, Berkeley), author of the leading textbook on artificial intelligence (AI), gives a broad historical overview of the field since its emergence in the 1950s, followed by insight into more recent developments.

Europe and AI: Leading, Lagging Behind, or Carving Its Own Way?


For its AI ecosystem to thrive, Europe needs to find a way to protect its research base, encourage governments to be early adopters, foster its startup ecosystem, expand international links, and develop AI technologies as well as leverage their use efficiently.

Are we Living in an Artificial Intelligence Simulation?


The existential question we should be asking ourselves is: are we living in a simulated universe? The idea that we are living in a simulated reality may seem unconventional and irrational to the general public, but it is a belief shared by many of the brightest minds of our time, including Neil deGrasse Tyson, Ray Kurzweil, and Elon Musk. Elon Musk famously asked "What's outside the simulation?" in a podcast with Lex Fridman, a research scientist at MIT. To understand how we could be living in a simulation, one needs to explore the simulation hypothesis (or simulation theory), which proposes that all of reality, including the Earth and the universe, is in fact an artificial simulation. While the idea dates back as far as the 17th century, when philosopher René Descartes raised related questions, it started to gain mainstream interest when Professor Nick Bostrom of Oxford University wrote a seminal paper in 2003 titled "Are You Living in a Computer Simulation?" Bostrom has since doubled down on his claims and uses probabilistic analysis to support his point.

Artificial Intelligence: The time for ethics is over


Organising ethical debates has long been an efficient way for industry to delay and avoid hard regulation. Europe now needs strong, enforceable rights for its citizens, writes Green MEP Alexandra Geese. If the rules are too weak, there is too great a risk that our rights and freedoms will be undermined. This currently applies to all applications of artificial intelligence, which up to now have been governed only by non-binding ethical principles and values. In this legislation, Europe has the chance to adopt a legal framework for AI with clear rules. We need strong instruments to protect our fundamental rights and democracy.

Abolish the #TechToPrisonPipeline


The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.

Academics call on nations to work together on A.I. and ensure it benefits all of humanity


A research group made up of academics from across the globe has published a paper arguing that "cross-cultural cooperation" on AI ethics and governance is vital if the technology is to "bring about benefit worldwide." The experts come from Cambridge University's Leverhulme Centre for the Future of Intelligence, Peking University's Center for Philosophy and the Future of Humanity, and the Beijing Academy of Artificial Intelligence; they specifically want to see cooperation across different domains, disciplines, and cultures, as well as different nations. "Such cooperation will enable advances to be shared across different parts of the world, and will ensure that no part of society is neglected or disproportionately negatively impacted by AI," wrote researcher Jess Whittlestone in a blog post this week that summarizes the paper. "Without such cooperation, competitive pressures between countries may also lead to underinvestment in safe, ethical, and socially beneficial AI development, increasing the global risks from AI." AI is poised to change the world in the coming decades as machines become increasingly competent at a range of tasks, from driving cars to discovering new drugs. But some are concerned that AI could end up being a dangerous technology if it is developed in isolated silos across different labs in different countries. In the near term, there's a genuine risk that AI could be used in warfare to power autonomous weapons, and in the long term, some have speculated that "superintelligent" machines could decide humans are no longer necessary and wipe them out altogether.

Executive Interview: Dr. David Bray, Director, Atlantic Council - AI Trends


Dr. David Bray is the Inaugural Director of the new global GeoTech Center & Commission of the Atlantic Council, a nonprofit for international political, business, and intellectual leaders founded in 1961. Headquartered in Washington, DC, the Council offers programs related to international security and global economic prosperity. In previous leadership roles, Bray led the technology aspects of the Centers for Disease Control's bioterrorism preparedness program in response to 9/11, as well as the outbreak responses to West Nile virus, SARS, monkeypox, and other emergencies. He also spent time on the ground in Afghanistan in 2009 as a senior advisor to both military and humanitarian assistance efforts, served as the non-partisan Executive Director for a bipartisan National Commission on R&D, and provided leadership as a non-partisan federal agency Senior Executive focused on digital modernization. He also is a Young Global Leader for 2017-2021 of the World Economic Forum. Bray is a member of multiple Boards of Directors and has worked with the U.S. Special Operations Command on counter-misinformation efforts. He was invited to give the 2019 UN Charter Keynote on the future of AI & IoT governance. His academic background includes a PhD from Emory University; he also has held affiliations with MIT, Harvard, and the University of Oxford. He recently took a few moments to speak to AI Trends Editor John P. Desmond about current events, including the geopolitics of the COVID-19 pandemic. AI Trends: Thank you David for talking to AI Trends today.

Thomas Lukasiewicz awarded AXA Chair in Explainable Artificial Intelligence for Healthcare Professorship

Oxford Comp Sci

Thomas Lukasiewicz is the recent recipient of a prestigious professorship: the AXA Chair in Explainable Artificial Intelligence for Healthcare, the first AXA Chair at the University of Oxford. With the generous support of the AXA Research Fund, Professor Lukasiewicz will pursue opportunities to advance the role of AI in improving disease diagnosis, treatment, and prevention in healthcare. Healthcare is expected to benefit substantially from the recent revolutionary progress in artificial intelligence (AI), because it deals with huge amounts of data on a daily basis, such as patient information, medical histories, diagnostic results, genetic data, hospital billing, and clinical studies. This huge pool of data can train AI to detect patterns and make predictions and recommendations, substantially reducing the uncertainties that professionals face.

Robots in the Danger Zone: Exploring Public Perception through Engagement Artificial Intelligence

Public perceptions of Robotics and Artificial Intelligence (RAI) are important in the acceptance, uptake, government regulation, and research funding of this technology. Recent research has shown that the public's understanding of RAI can be negative or inaccurate. We believe effective public engagement can help ensure that public opinion is better informed. In this paper, we describe our first iteration of a high-throughput, in-person public engagement activity. We describe the use of a light-touch, quiz-format survey instrument to integrate in-the-wild research participation into the engagement, allowing us to probe both the effectiveness of our engagement strategy and public perceptions of the future roles of robots and humans working in dangerous settings, such as the off-shore energy sector. We critique our methods and share interesting results on generational differences in the public's view of the future of Robotics and AI in hazardous environments. These findings include that older people's views about the future of robots in hazardous environments were not swayed by exposure to our exhibit, while the views of younger people were affected, leading us to consider carefully how to engage with and inform older people more effectively in future.

Nick Bostrom: Simulation and Superintelligence AI Podcast #83 with Lex Fridman


Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere. This conversation is part of the Artificial Intelligence podcast.