The CPSC Digs In On Artificial Intelligence - Consumer Protection - United States

#artificialintelligence

American households are increasingly connected through artificially intelligent appliances.1 But who regulates the safety of those dishwashers, microwaves, refrigerators, and vacuums powered by artificial intelligence (AI)? On March 2, 2021, at a virtual forum attended by stakeholders from across the industry, the Consumer Product Safety Commission (CPSC) reminded everyone that it has the final say on regulating the safety of AI and machine learning consumer products. The CPSC is an independent agency composed of five commissioners, nominated by the president and confirmed by the Senate to serve staggered seven-year terms. With the Biden administration's shift away from the prior administration's deregulation agenda, and with three potential opportunities to staff the commission, consumer product manufacturers, distributors, and retailers should expect increased scrutiny and enforcement.2




White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) appears to be one of the most promising solutions for partially or fully automating complex tasks currently performed by humans, such as driving vehicles or recognizing speech. It is also an opportunity to implement and embed new capabilities beyond the reach of classical implementation techniques. However, ML techniques introduce new potential risks, and so far they have only been applied in systems where their benefits are considered worth the increased risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT) as part of the DEEL Project.


Ethics of Artificial Intelligence - Cisco Blogs by Utkarsh Srivastava

#artificialintelligence

Intelligent machines have helped humans achieve great endeavors. Artificial Intelligence (AI) combined with human expertise has produced quick wins for stakeholders across multiple industries, with use cases ranging from finance to healthcare to marketing to operations and more. There is no denying that AI has enabled faster product innovation and a richer user experience. A few of these use cases include context-aware marketing, sales forecasting, conversational analytics, fraud detection, credit scoring, drug testing, pregnancy monitoring, and self-driving cars, a seemingly never-ending list of applications. But the very idea of developing smart machines (AI-powered systems) raises numerous ethical concerns.


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


People in these jobs are most afraid of a robot takeover

#artificialintelligence

Sometimes it seems like robots are completely taking over the world. Every year, thousands of machines are deployed into the workforce, taking over jobs that humans used to do, and workers are rightly worried. A new survey from CNBC and SurveyMonkey found that almost four in 10 workers between the ages of 18 and 24 are concerned about new technology, such as robots and artificial intelligence systems, taking over their jobs. Dan Schawbel, research director of Future Workplace, told CNBC that one reason the younger generation is more concerned about a robot takeover is that artificial intelligence has rapidly become normalized throughout society, and the years remaining in young people's careers will likely be affected by AI. "They are starting to see the value of [AI] and how it's impacting their personal and professional lives," he said.


On the Philosophical, Cognitive and Mathematical Foundations of Symbiotic Autonomous Systems (SAS)

arXiv.org Artificial Intelligence

Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive systems exhibiting autonomous collective intelligence enabled by coherent symbiosis of human-machine interactions in hybrid societies. Basic research in the emerging field of SAS has triggered advanced general AI technologies functioning without human intervention or hybrid symbiotic systems synergizing humans and intelligent machines into coherent cognitive systems. This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences. SAS are characterized by the composition of autonomous and symbiotic systems that adopt bio-brain-social-inspired and heterogeneously synergized structures and autonomous behaviors. This paper explores their cognitive and mathematical foundations. The challenge to seamless human-machine interactions in a hybrid environment is addressed. SAS-based collective intelligence is explored in order to augment human capability by autonomous machine intelligence towards the next generation of general AI, autonomous computers, and trustworthy mission-critical intelligent systems. Emerging paradigms and engineering applications of SAS are elaborated via an autonomous knowledge learning system that symbiotically works between humans and cognitive robots.


The Fear of Artificial Intelligence in Job Loss

#artificialintelligence

With all the hype over Artificial Intelligence, there is also a lot of disturbing buzz about its negative consequences. More than one-quarter (27%) of all employees say they are worried that the work they do now will be eliminated within the next five years because of new technology, robots, or artificial intelligence, according to the quarterly CNBC/SurveyMonkey Workplace Happiness survey. In certain industries where technology has already played a profoundly disruptive role, employees' fear of automation likewise runs higher than normal: workers in automotive, business logistics and support, marketing and advertising, and retail are proportionately more worried about new technology replacing their jobs than those in other industries. The fear stems from the fact that these industries are already witnessing it: self-driving trucks are already threatening the jobs of truck drivers, causing considerable anxiety in that line of work.




Explainable Artificial Intelligence (XAI): An Engineering Perspective

arXiv.org Artificial Intelligence

The remarkable advancements in Deep Learning (DL) algorithms have fueled enthusiasm for using Artificial Intelligence (AI) technologies in almost every domain; however, the opaqueness of these algorithms puts a question mark on their use in safety-critical systems. In this regard, the "explainability" dimension is essential not only to explain the inner workings of black-box algorithms, but also to add the accountability and transparency that are of prime importance to regulators, consumers, and service providers. eXplainable Artificial Intelligence (XAI) is the set of techniques and methods for converting so-called black-box AI algorithms into white-box algorithms, in which the results achieved, as well as the variables, parameters, and steps the algorithm takes to reach them, are transparent and explainable. To complement the existing literature on XAI, in this paper we take an "engineering" approach to illustrate its concepts. We discuss the stakeholders in XAI and describe its mathematical contours from an engineering perspective. We then take the autonomous car as a use case and discuss the applications of XAI to its different components, such as object detection, perception, control, and action decision. This work is an exploratory study to identify new avenues of research in the field of XAI.
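To make the idea of model-agnostic explainability concrete, here is a minimal sketch (not from the paper itself) of permutation feature importance, one common XAI technique: shuffle one feature at a time and measure how much the model's score degrades. The toy `model`, `mse` metric, and data below are illustrative assumptions, not anything defined in the abstract.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Score drop when each feature is shuffled.

    A large drop means the model relies heavily on that feature;
    a drop near zero means the feature barely matters.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy black-box: a fixed linear model in which only feature 0 matters.
model = lambda X: X @ np.array([2.0, 0.0])
mse = lambda y, p: -np.mean((y - p) ** 2)  # negated MSE: higher is better

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = model(X)

imp = permutation_importance(model, X, y, mse)
# imp[0] should be large; imp[1] should be ~0, since feature 1 has zero weight.
```

Techniques in this family treat the model purely as a callable, which is why they apply equally to the black-box DL components (object detection, perception, control) discussed in the paper.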