Artificial Intelligence and Life in 2030

#artificialintelligence

And see also this great piece from Mashable on what manufacturers are up to next. In the near future, sensing algorithms will achieve super-human performance for the capabilities required for driving. Automated perception, including vision, is already at or near human-level performance for well-defined tasks such as recognition and tracking. Advances in perception will be followed by algorithmic improvements in higher-level reasoning capabilities such as planning. Beyond self-driving cars, we'll have a variety of autonomous vehicles, including robots and drones. AI also has the potential to transform city transportation planning, but it is being held back by a lack of standardisation in the sensing infrastructure and AI techniques used. Accurate predictive models of individuals' movements, their preferences, and their goals are likely to emerge with the greater availability of data. That last sentence is worth reflecting on for a while. Such models do indeed seem highly likely to emerge, but that doesn't mean we have to like what they might mean for society.


New Report Assesses Progress And Risks Of Artificial Intelligence

#artificialintelligence

Artificial intelligence has reached a critical turning point in its evolution, according to a new report by an international panel of experts assessing the state of the field. Substantial advances in language processing, computer vision and pattern recognition mean that AI is touching people's lives on a daily basis, from helping people choose a movie to aiding in medical diagnoses. With that success, however, comes a renewed urgency to understand and mitigate the risks and downsides of AI-driven systems, such as algorithmic discrimination or the use of AI for deliberate deception. Computer scientists must work with experts in the social sciences and law to ensure that the pitfalls of AI are minimized. Those conclusions come from a report titled "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report," which was compiled by a panel of experts from computer science, public policy, psychology, sociology and other disciplines.


La veille de la cybersécurité

#artificialintelligence

A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications of AI technology, and to the ways in which that technology is being abused. The report, titled "Gathering Strength, Gathering Storms," was issued today as part of the One Hundred Year Study on Artificial Intelligence, or AI100, which is envisioned as a century-long effort to track progress in AI and guide its future development. AI100 was initiated by Eric Horvitz, Microsoft's chief scientific officer, and is hosted by the Stanford University Institute for Human-Centered Artificial Intelligence. The project is funded by a gift from Horvitz, a Stanford alumnus, and his wife, Mary. The project's first report, published in 2016, downplayed concerns that AI would lead to a Terminator-style rise of the machines, warning instead that fear and suspicion about AI would impede efforts to ensure the safety and reliability of AI technologies. At the same time, it acknowledged that the effects of AI and automation could lead to social disruption.


The ethics of AI

#artificialintelligence

AI is associated with great hopes, but it also raises fears. The call for ethical guidelines governing these new technologies is therefore growing louder. We organized a panel discussion on the importance of implementing ethical practices within your predictive models, data workflows, products and AI research. I was part of the panel along with Scott Haines, Lizzie Siegle and Nick Walsh. In this article, we will go through some of the points we discussed, the panelists' views on various topics, and my own view on each.


The Liability Problem for Autonomous Artificial Agents

AAAI Conferences

This paper describes and frames a central ethical issue facing the regulation of artificial computational agents, including artificial intelligence (AI) and robotic systems, as they become increasingly autonomous and supersede current capabilities: the liability problem. While the paper frames the issue in the legal terms of liability and culpability, these terms are deeply intertwined with their ethical and moral correlate, responsibility. In order for society to benefit from advances in AI technology, it will be necessary to develop regulatory policies that manage the risk and liability of deploying systems with increasingly autonomous capabilities. However, current approaches to liability have difficulty dealing with autonomous artificial agents, because their behavior may be unpredictable to those who create and deploy them, and because they will not be proper legal or moral agents. This problem motivates a research project that will explore the fundamental concepts of autonomy, agency and liability; clarify the different varieties of agency that artificial systems might realize, including causal, legal and moral agency; and illuminate the relationships between them. The paper frames the problem of liability in autonomous agents, sketches its relation to fundamental concepts in human legal and moral agency (including autonomy, agency, causation, intention, responsibility and culpability), and considers their applicability or inapplicability to autonomous artificial agents.