White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech. It is also an opportunity to implement and embed new capabilities that are out of reach of classical implementation techniques. However, ML techniques introduce new potential risks, and so far they have only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we do not end up in technology-induced dystopias. As Green argues strongly in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and designing for. There are philosophical and ethical questions involved, along with various challenges that relate to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, has written a book on the subject, Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also discusses the current limitations, pitfalls, and future directions of research in these domains, and how this research can fill the current gaps and lead to better solutions.


AI Research Considerations for Human Existential Safety (ARCHES)

arXiv.org Artificial Intelligence

Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called prepotence, which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of contemporary research directions is then examined for potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation, and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects.


The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.


A 20-Year Community Roadmap for Artificial Intelligence Research in the US

arXiv.org Artificial Intelligence

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.



AI-augmented government

#artificialintelligence

For decades, artificial intelligence (AI) researchers have sought to enable computers to perform a wide range of tasks once thought to be reserved for humans. In recent years, the technology has moved from science fiction into real life: AI programs can play games, recognize faces and speech, learn, and make informed decisions. As striking as AI programs may be (and as potentially unsettling to filmgoers suffering periodic nightmares about robots becoming self-aware and malevolent), the cognitive technologies behind artificial intelligence are already having a real impact on many people's lives and work. AI-based technologies include machine learning, computer vision, speech recognition, natural language processing, and robotics;1 they are powerful, scalable, and improving at an exponential rate. Developers are working on implementing AI solutions in everything from self-driving cars to swarms of autonomous drones. And the public sector is seeking--and finding--applications to improve services; indeed, cognitive technologies could eventually revolutionize every facet of government operations. For instance, the Department of Homeland Security's Citizenship and Immigration Services has created a virtual assistant, EMMA, that can respond accurately to human language. EMMA uses its intelligence simply, showing relevant answers to questions--almost a half-million questions per month at present. Learning from her own experiences, the virtual assistant gets smarter as she answers more questions. Customer feedback tells EMMA which answers helped, honing her grasp of the data in a process called "supervised learning."3
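The feedback loop described above can be illustrated with a minimal supervised-learning sketch: a text classifier routes questions to answer topics, and user feedback on which answer helped supplies the labels for the next training round. EMMA's actual implementation is not public, so the library choice (scikit-learn), the topic names, and the example questions below are assumptions for illustration only.

# Hypothetical sketch of supervised learning from user feedback, not EMMA's real code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: questions paired with the answer topic that users rated as helpful.
questions = [
    "How do I renew my green card?",
    "Where is my citizenship application?",
    "How do I replace a lost green card?",
    "How long does naturalization take?",
]
helpful_topic = ["green_card", "case_status", "green_card", "naturalization"]

# Bag-of-words features plus a linear classifier; retrained as new feedback arrives.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(questions, helpful_topic)

# A new question is routed to the predicted topic; whether the shown answer helped
# becomes another labeled example for the next training round.
print(model.predict(["I lost my green card, what now?"]))  # expected: ['green_card']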


AI-augmented government

#artificialintelligence

While EMMA is a relatively simple application, developers are thinking bigger as well: Today's cognitive technologies can track the course, speed, and destination of nearly 2,000 airliners at a time, allowing them to fly safely.4 Over time, AI will spawn massive changes in the public sector, transforming how government employees get work done. It's likely to eliminate some jobs, lead to the redesign of countless others, and create entirely new professions.5 In the near term, our analysis suggests, large government job losses are unlikely. But cognitive technologies will change the nature of many jobs--both what gets done and how workers go about doing it--freeing up to one quarter of many workers' time to focus on other activities.