Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. AI attracts heavy investment from both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs driven by over-expectations and the disappointments that followed. The purpose of this book is to give a realistic picture of AI, its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common AI representations, methods, and machine learning approaches are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what should be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of course content at our own university.


Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have to Act Randomly and Society Seems to Accept This

arXiv.org Artificial Intelligence

As artificial intelligence (AI) systems are increasingly involved in decisions affecting our lives, ensuring that automated decision-making is fair and ethical has become a top priority. Intuitively, we feel that, akin to human decisions, judgments of artificial agents should necessarily be grounded in some moral principles. Yet a decision-maker (whether human or artificial) can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making. This raises two problems: (1) In settings where we rely on AI systems that use classifiers obtained with supervised learning, some induction/generalization is involved, and some relevant attributes may not even be available during learning. (2) Modeling such decisions as games reveals that any pure strategy, however ethical, is inevitably susceptible to exploitation. Moreover, in many games a Nash equilibrium can only be obtained by using mixed strategies, i.e., to achieve mathematically optimal outcomes, decisions must be randomized. In this paper, we argue that in supervised learning settings there exist random classifiers that perform at least as well as deterministic classifiers, and may hence be the optimal choice in many circumstances. We support our theoretical results with an empirical study indicating a positive societal attitude towards randomized artificial decision-makers, and we discuss policy and implementation issues related to the use of random classifiers that are relevant to current AI policy and standardization initiatives.
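
To make the core claim concrete, here is a minimal, hypothetical sketch (not taken from the paper): a randomized classifier that mixes two deterministic classifiers. Its expected accuracy is a convex combination of the deterministic accuracies, so it can match the better one while keeping individual decisions unpredictable, the classification analogue of a mixed strategy. The toy rules f and g, the data, and the mixing probability p are assumptions made purely for illustration.

import numpy as np

# Hypothetical sketch: mix two deterministic classifiers f and g.
# The randomized classifier returns f(x) with probability p and g(x)
# otherwise, so its expected accuracy is p*acc(f) + (1-p)*acc(g);
# with p = 1 it is never worse than f, and intermediate p makes its
# individual decisions unpredictable (a "mixed strategy").

rng = np.random.default_rng(0)

def f(x):
    # toy deterministic rule 1 (assumption for illustration)
    return int(x[0] > 0.5)

def g(x):
    # toy deterministic rule 2 (assumption for illustration)
    return int(x[1] > 0.5)

def randomized_classifier(x, p=0.7):
    # return f(x) with probability p, otherwise g(x)
    return f(x) if rng.random() < p else g(x)

# Toy evaluation: labels happen to follow rule f.
X = rng.random((1000, 2))
y = (X[:, 0] > 0.5).astype(int)

acc_f = np.mean(np.array([f(x) for x in X]) == y)
acc_g = np.mean(np.array([g(x) for x in X]) == y)
acc_r = np.mean(np.array([randomized_classifier(x) for x in X]) == y)
print(f"f: {acc_f:.2f}  g: {acc_g:.2f}  randomized (p=0.7): {acc_r:.2f}")

In this toy setup the randomized classifier's accuracy falls between those of the two rules and approaches the better one as p approaches 1, which is the sense in which randomization need not cost accuracy while making the decision-maker harder to exploit.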


California jury could decide if Tesla's 'Autopilot' claim is false advertising

Daily Mail - Science & tech

A lawsuit accusing Tesla of 'false advertising' when marketing its Autopilot is making its way through the Santa Barbara Superior Court in California, even though the Elon Musk-run firm is disputing the claims. Judge Thomas Anderle ruled this week that the case of Alexandro and Iaian Filippini, two brothers who operate a Santa Barbara–based wealth management company, versus Tesla is allowed to proceed to its next phase. The ruling came because the Filippini brothers presented enough evidence to show fraud and that the firm violated the Consumer Legal Remedies Act, meaning a jury could soon hear the case. The lawsuit, filed in 2020, states Tesla misrepresented the system in the $120,000 Model S the pair purchased in 2016.


The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI

arXiv.org Artificial Intelligence

There is mounting public concern over the influence that AI-based systems have in our society. Coalitions in all sectors are acting worldwide to resist harmful applications of AI. From indigenous people addressing the lack of reliable data, to smart city stakeholders, to students protesting the academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. There are biased, wrongful, and disturbing assumptions embedded in AI algorithms that could get locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on earth.


White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently carried out by humans, such as driving vehicles, recognizing speech, etc. It is also an opportunity to implement and embed new capabilities out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.


Weekly Brief: Levandowski – Once Upon Today in America – TU Automotive

#artificialintelligence

Former Waymo and Uber self-driving car whiz kid Anthony Levandowski was sentenced last week to 18 months in federal prison for stealing trade secrets. Levandowski will also pay a $95,000 fine and $756,499.22 in restitution to Waymo. He co-founded Google's self-driving car program, now Waymo, in 2009 and served as the program's technical lead until January 2016, when he left to co-found self-driving truck start-up Otto. Seven months later, Uber acquired Otto for $680M and named Levandowski the head of its self-driving car division. He was on top of the tech world. He appeared in Wired Magazine as the go-to voice in Silicon Valley for self-driving cars and LiDAR technology.


The 84 biggest flops, fails, and dead dreams of the decade in tech

#artificialintelligence

The world never changes quite the way you expect. But at The Verge, we've had a front-row seat while technology has permeated every aspect of our lives over the past decade. Some of the resulting moments -- and gadgets -- arguably defined the decade and the world we live in now. But others we ate up with popcorn in hand, marveling at just how incredibly hard they flopped. This is the decade we learned that crowdfunded gadgets can be utter disasters, even if they don't outright steal your hard-earned cash. It's the decade of wearables, tablets, drones and burning batteries, and of ridiculous valuations for companies that were really good at hiding how little they actually had to offer. Here are 84 things that died hard, often hilariously, to bring us where we are today. Everyone was confused by Google's Nexus Q when it debuted in 2012, including The Verge -- which is probably why the bowling ball of a media streamer crashed and burned before it even came to market.


One Big Problem With Driverless Cars: Figuring Out How They Make Money

#artificialintelligence

As it turns out, making cars drive themselves may have been the easy part. The hard part is yet to come. Over the past few days, the Financial Times has detailed in two reports how the autonomous vehicle businesses of Silicon Valley are beginning to reckon with a new issue: being functioning businesses. I understand this must be a new and troubling development for any Silicon Valley startup, but here, it sounds like a profound wakeup call. Increasingly, industry insiders recognise that commercialising their technologies may be more difficult than anticipated -- due to questions around "government approval, public trust, brand marketing, the ability to manufacture at scale and the technical knowhow to manage a fleet that competes with the likes of Uber and Lyft on timely pick-ups", Patrick McGee reports in his weekend Big Read.


Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities

arXiv.org Artificial Intelligence

Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and, more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently, and more safely than human drivers, and offering various economic, social, and environmental benefits. However, algorithmic decision-making in AVs can also introduce new issues that create safety risks and perpetuate discrimination. We identify bias, ethics, and perverse incentives as key ethical issues in AV algorithms' decision-making that can create new safety risks and discriminatory outcomes. Technical issues in AVs' perception, decision-making, and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss the steps taken to address these issues, highlight the existing research gaps, and stress the need to mitigate these issues through the design of AVs' algorithms and of policies and regulations, in order to fully realise AVs' benefits for smart and sustainable cities.


The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.