Ex-Google Exec Sent to Prison for Stealing Robocar Secrets

#artificialintelligence

A former Google engineer has been sentenced to 18 months in prison after pleading guilty to stealing trade secrets before joining Uber's effort to build robotic vehicles for its ride-hailing service. The sentence handed down Tuesday by U.S. District Judge William Alsup came more than four months after former Google engineer Anthony Levandowski reached a plea agreement with the federal prosecutors who brought a criminal case against him last August. Levandowski, who helped steer Google's self-driving car project before landing at Uber, was also ordered to pay more than $850,000. Alsup had taken the unusual step of recommending the Justice Department open a criminal investigation into Levandowski while presiding over a high-profile civil trial between Uber and Waymo, a spinoff from a self-driving car project that Google began in 2007 after hiring Levandowski to be part of its team. Levandowski eventually became disillusioned with Google and left the company in early 2016 to start his own self-driving truck company, called Otto, which Uber eventually bought for $680 million. He wound up pleading guilty to one count, culminating in Tuesday's sentencing.


US prosecutors seek years in prison for Uber self-driving exec who stole Google trade secrets

ZDNet

US prosecutors are seeking a total of 27 months behind bars for Anthony Levandowski, the former head of Uber's self-driving arm who pleaded guilty to stealing trade secrets from Google. Levandowski was indicted by the US Department of Justice (DoJ) on 33 counts of theft and attempted theft in 2019 for stealing intellectual property belonging to his former employer. The ex-Google engineer worked on the tech giant's self-driving technologies from 2009 to 2016 before abruptly resigning to found his own company. Prosecutors claimed that before he left his post, Levandowski downloaded a treasure trove of 14,000 internal documents relating to engineering, manufacturing, and business, specifically linked to Google's LiDAR and self-driving car research. Otto, a rival in the same space, was co-founded by the engineer together with Lior Ron.


Regulating human control over autonomous systems

arXiv.org Artificial Intelligence

In recent years, many sectors have experienced significant progress in automation, associated with the growing advances in artificial intelligence and machine learning. There are already automated robotic weapons, which are able to evaluate and engage with targets on their own, and there are already autonomous vehicles that do not need a human driver. It is argued that the use of increasingly autonomous systems (AS) should be guided by the policy of human control, according to which humans should exercise a significant level of judgment over AS. While in the military sector there is a fear that AS could mean that humans lose control over life-and-death decisions, in the transportation domain, on the contrary, there is a strongly held view that autonomy could bring significant operational benefits by removing the need for a human driver. This article explores the notion of human control in the United States in the two domains of defense and transportation. The operationalization of emerging policies of human control results in a typology of direct and indirect human control exercised over the use of AS. The typology helps to steer the debate away from the linguistic complexities of the term "autonomy." It identifies instead where human factors are undergoing important changes and ultimately informs the formulation of more detailed rules and standards, which differ across domains, applications, and sectors.


Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy

arXiv.org Artificial Intelligence

Well-designed technologies that offer high levels of human control and high levels of computer automation can increase human performance, leading to wider adoption. The Human-Centered Artificial Intelligence (HCAI) framework clarifies how to (1) design for high levels of human control and high levels of computer automation so as to increase human performance, (2) understand the situations in which full human control or full computer control are necessary, and (3) avoid the dangers of excessive human control or excessive computer control. The methods of HCAI are more likely to produce designs that are Reliable, Safe & Trustworthy (RST). Achieving these goals will dramatically increase human performance, while supporting human self-efficacy, mastery, creativity, and responsibility.


Towards a Framework for Certification of Reliable Autonomous Systems

arXiv.org Artificial Intelligence

The capability and spread of autonomous systems have reached the point where they are beginning to touch much of everyday life. However, regulators grapple with how to deal with autonomous systems; for example, how could we certify an Unmanned Aerial System for autonomous use in civilian airspace? We here analyse what is needed in order to provide verified reliable behaviour of an autonomous system, analyse what can be done with the state of the art in automated verification, and propose a roadmap towards developing regulatory guidelines, including articulating challenges to researchers, to engineers, and to regulators. Case studies in seven distinct domains illustrate the article. Since the dawn of human history, humans have designed, implemented and adopted tools to make it easier to perform tasks, often improving efficiency, safety, or security.


Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence

#artificialintelligence

Artificial intelligence (AI) has the potential to deliver significant social and economic benefits, including reducing accidental deaths and injuries, making new scientific discoveries, and increasing productivity.[1] However, an increasing number of activists, scholars, and pundits see AI as inherently risky, creating substantial negative impacts such as eliminating jobs, eroding personal liberties, and reducing human intelligence.[2] Some even see AI as dehumanizing, dystopian, and a threat to humanity.[3] As such, the world is dividing into two camps regarding AI: those who support the technology and those who oppose it. Unfortunately, the latter camp is increasingly dominating AI discussions, not just in the United States, but in many nations around the world. There should be no doubt that nations that tilt toward fear rather than optimism are more likely to put in place policies and practices that limit AI development and adoption, which will hurt their economic growth, social ...


The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.


A 20-Year Community Roadmap for Artificial Intelligence Research in the US

arXiv.org Artificial Intelligence

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.



Apple 'finds 2,000 Project Titan driverless car files on Chinese engineer Jizhong Chen's laptop'

Daily Mail - Science & tech

An Apple employee suspected of stealing trade secrets faces 10 years in prison and a fine of up to $250,000 after the company reportedly found their intellectual property on his personal hard drive. Jizhong Chen is the second Chinese national working on their autonomous vehicle project to be accused of the crime in six months, after Xiaolang Zhang was arrested by the FBI last July. Now electrical engineer Chen is in hot water after Apple Global Security allegedly discovered 'over two thousand files containing confidential and proprietary Apple material, including manuals, schematics, and diagrams' about Project Titan. Materials allegedly found on his device include an 'assembly drawing of an Apple-designed wiring harness for an autonomous vehicle' as part of their Project Titan. Approximately one hundred photographs taken from within the California Apple building and containing information on the driverless car project were allegedly stored on his computer. Employees said he allowed them to search his device after a fellow worker caught Chen taking pictures in an area that was deemed sensitive, NBC Bay Area reported.