The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Robot soldiers could soon make up a quarter of the army

#artificialintelligence

In the age of artificial intelligence, robots will soon represent a large part of the armed forces, according to the UK's chief of the defence staff, Nick Carter, who predicted that up to a quarter of the army could be made up of autonomous systems in the near future. Speaking on Sky News for Remembrance Day, the general speculated that as cyber and space join the more traditional army domains of land, air, and maritime, so too will AI systems become an integral part of the armed forces' modernization effort. Carter warned that decisions haven't been taken yet, and insisted that his predictions were not based on firm targets. He nevertheless shared his vision for an armed force that is "designed for the 2030s". "You'll see armed forces that are designed to do (cyber and space). And I think it absolutely means we'll have all manner of different people employed because those domains require different skill sets, and we will absolutely avail ourselves with autonomous platforms and robotics wherever we can," said Carter.


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
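The pattern described here, framing the task and then seeding the first characters of the desired output so the model stays in the intended mode, can be sketched roughly as follows. This is a minimal illustration only: the second-grader summarization framing follows the excerpt, but the legacy openai.Completion endpoint, the "davinci" engine name, and the sampling parameters are assumptions, not code from the article.

```python
# Minimal sketch of the prompt-seeding pattern described above.
# Assumptions: the legacy openai.Completion endpoint and the "davinci" engine name.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

passage = "..."  # the text to be summarized (elided here)

# Frame the task as the "second grader" summarization prompt, then seed the
# first character of the target output (an opening quote) so the model stays
# in summarization mode instead of pivoting to another kind of completion.
prompt = (
    "My second grader asked me what this passage means:\n\n"
    f'"{passage}"\n\n'
    "I rephrased it for him, in plain language a second grader can understand:\n\n"
    '"'
)

response = openai.Completion.create(
    engine="davinci",   # assumed engine name
    prompt=prompt,
    max_tokens=150,
    temperature=0.7,
    stop=['"'],         # stop once the quoted rephrasing closes
)
print(response.choices[0].text.strip())
```

If the model still drifts, the same trick extends further: write the first full sentence of the target output before handing control to the model.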


Europe and AI: Leading, Lagging Behind, or Carving Its Own Way?

#artificialintelligence

For its AI ecosystem to thrive, Europe needs to find a way to protect its research base, encourage governments to be early adopters, foster its startup ecosystem, expand international links, and both develop AI technologies and put them to efficient use.


Artificial intelligence and the regulatory landscape - Lexology

#artificialintelligence

Currently, the European Union does not have any specific legislative instrument or standard to regulate the use and development of AI. However, these requirements are likely to set the stage for future legislation, similar in scope and effect to the General Data Protection Regulation (GDPR) for privacy, indicating that the European Union may be on the cusp of adopting dedicated AI legislation.


Not smart enough: The poverty of European military thinking on artificial intelligence

#artificialintelligence

"Artificial intelligence" (AI) has become one of the buzzwords of the decade, as a potentially important part of the answer to humanity's biggest challenges in everything from addressing climate change to fighting cancer and even halting the ageing process. It is widely seen as the most important technological development since the mass use of electricity, one that will usher in the next phase of human evolution. At the same time, some warnings that AI could lead to widespread unemployment, rising inequality, the development of surveillance dystopias, or even the end of humanity are worryingly convincing. States would, therefore, be well advised to actively guide AI's development and adoption into their societies. For Europe, 2019 was the year of AI strategy development, as a growing number of EU member states put together expert groups, organised public debates, and published strategies designed to grapple with the possible implications of AI. European countries have developed training programmes, allocated investment, and made plans for cooperation in the area. Next year is likely to be an important one for AI in Europe, as member states and the European Union will need to show that they can fulfil their promises by translating ideas into effective policies. But, while Europeans are doing a lot of work on the economic and societal consequences of the growing use of AI in various areas of life, they generally pay too little attention to one aspect of the issue: the use of AI in the military realm. Strikingly, the military implications of AI are absent from many European AI strategies, as governments and officials appear uncomfortable discussing the subject (with the exception of the debate on limiting "killer robots"). Similarly, the academic and expert discourse on AI in the military also tends to overlook Europe, predominantly focusing on developments in the US, China, and, to some extent, Russia. This is likely because most researchers consider Europe to be an unimportant player in the area.


Brexit voters more likely to live in areas at risk from rise of robots

#artificialintelligence

Brexit supporters are more likely to live in areas most threatened by the economic impact of automation, according to a study of the impact of robots and artificial intelligence in the workplace. A map of the parts of the UK likely to be hit by automation fits more closely with the map of leave voters than any other factor, said the Institute for the Future of Work (IFW). Up to 15 million workers are expected to have their employment prospects endangered by automation over the next decade, according to a series of reports that have tried to gauge the impact of new technology in the workplace. In 2015, the Bank of England estimated as many as 15m jobs would need to change or be lost through automation. A report by the consultancy firm PwC found that 10m, or 30%, of jobs in Britain were potentially under threat from breakthroughs in artificial intelligence. In some sectors, half the jobs could go, it warned.


Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities

arXiv.org Artificial Intelligence

Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and, more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently, and more safely than human drivers and offering various economic, social, and environmental benefits. However, algorithmic decision-making in AVs can also introduce new issues that create new safety risks and perpetuate discrimination. We identify bias, ethics, and perverse incentives as key ethical issues in AV algorithms' decision-making that can create new safety risks and discriminatory outcomes. Technical issues in the AVs' perception, decision-making, and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss steps taken to address these issues, highlight existing research gaps, and underline the need to mitigate these issues through the design of AV algorithms and of policies and regulations in order to fully realise AVs' benefits for smart and sustainable cities.


The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.