Step by step, AI has been permeating virtually every application we use. From consumer-facing interactions to more advanced predictive B2B analytics, AI and ML algorithms consume increasing amounts of data. Thousands of companies have begun collecting data in vast quantities, but the problem is that cleaning and preparing these data for AI consumption takes considerable time. The efficacy of an AI system depends on the quality of the data it's trained with, and real-world data comes with significant restrictions on its use and is limited in variance.
For the past couple of years, renowned technologist and researcher Bruce Schneier has been researching how societal systems can be hacked, specifically the rules of financial markets, laws, and the tax code. That led him to his latest examination of the potential unintended consequences of artificial intelligence on society: how AI systems themselves, which he refers to as "AIs," could evolve such that they automatically, and inadvertently, abuse societal systems. "It's AIs as the hacker," he says, rather than hackers hacking AI systems. Schneier will discuss his AI hacker research in a keynote address on Monday at the 2021 RSA Conference, which, due to the pandemic, is being held online rather than in person in San Francisco. The AI topic is based on a recent essay he wrote for the Cyber Project and Council for the Responsible Use of AI at the Belfer Center for Science and International Affairs at Harvard Kennedy School.
The trustworthiness of Robots and Autonomous Systems (RAS) has gained a prominent position on many research agendas on the path toward fully autonomous systems. This research systematically explores, for the first time, the key facets of human-centered AI (HAI) for trustworthy RAS. In this article, five key properties of a trustworthy RAS are first identified. RAS must be (i) safe in any uncertain and dynamic surrounding environments; (ii) secure, protecting itself from cyber-threats; (iii) healthy, with fault tolerance; (iv) trusted and easy to use, to allow effective human-machine interaction (HMI); and (v) compliant with the law and ethical expectations. The challenges in implementing trustworthy autonomous systems are then analytically reviewed with respect to these five properties, and the roles of AI technologies in ensuring the trustworthiness of RAS are explored across safety, security, health, and HMI, while reflecting the requirements of ethics in the design of RAS. While applications of RAS have mainly focused on performance and productivity, the risks posed by advanced AI in RAS have not received sufficient scientific attention. Hence, a new acceptance model of RAS is provided, as a framework for requirements for human-centered AI and for implementing trustworthy RAS by design. This approach promotes human-level intelligence to augment human capacity, while focusing on contributions to humanity.
Last year I wrote about how AI regulations will lead to the emergence of professional AI risk managers. This has already happened in the financial sector, where regulations patterned after the Basel rules have created a financial risk management profession to assess financial risks. Last week, the EU published a 108-page proposal to regulate AI systems. This will lead to the emergence of professional AI risk managers. The proposal doesn't cover all AI systems, just those deemed high-risk, and the regulation would vary depending on how risky the specific AI systems are. Since systems with unacceptable risks would be banned outright, most of the regulation concerns high-risk AI systems.
One of the priorities announced in the 2021 Examination Priorities Report of the U.S. Securities and Exchange Commission's Division of Examinations ("EXAMS") is a review of robo-advisory firms that build client portfolios with exchange-traded funds ("ETFs") and mutual funds. EXAMS notes that these clients are almost entirely retail investors without investments large enough to support the costs of regular human investment advisers. EXAMS sees that the risks involved in these robo-advisor accounts pose particular issues that retail clients may well not recognize. Accordingly, it may help to reflect on the Laws of Robotics formulated by science fiction author Isaac Asimov (collected in his 1950 book "I, Robot"), particularly the First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Investors may not understand the risks associated with specific investments; the risk profiles of mutual funds and of ETFs vary widely, from diversified to concentrated, from simple to complex strategies.
The European Union is considering banning the use of artificial intelligence for a number of purposes, including mass surveillance and social credit scores. This is according to a leaked proposal that is circulating online, first reported by Politico, ahead of an official announcement expected next week. If the draft proposal is adopted, it would see the EU take a strong stance on certain applications of AI, setting it apart from the US and China. Some use cases would be policed in a manner similar to the EU's regulation of digital privacy under GDPR legislation. Member states, for example, would be required to set up assessment boards to test and validate high-risk AI systems.
In February, McKinsey Global Institute predicted that 45 million Americans, one-quarter of the workforce, would lose their jobs to automation by 2030. That was up from its 2017 estimate of 39 million, an increase attributed to the economic dislocation of COVID-19; historically, firms tend to replace some of the workers they fire during recessions with machines. Fear of robot-driven mass unemployment has become increasingly mainstream. Andrew Yang, who is currently leading the polls for the Democratic nomination to be the next mayor of New York City, made it a pillar of his unorthodox 2020 presidential campaign.
Robot applications in our daily life are increasing at an unprecedented pace. As robots will soon operate "out in the wild", we must identify the safety and security vulnerabilities they will face. Robotics researchers and manufacturers focus their attention on new, cheaper, and more reliable applications, yet they often disregard operability in adversarial environments, where a trusted or untrusted user can jeopardize or even alter the robot's task. In this paper, we identify a new paradigm of security threats in the next generation of robots. These threats fall beyond the known hardware- or network-based ones, and we must find new solutions to address them. They include malicious use of the robot's privileged access, tampering with the robot's sensor system, and tricking the robot's deliberation into harmful behaviors. We provide a taxonomy of attacks that exploit these vulnerabilities, with realistic examples, and we outline effective countermeasures to better prevent, detect, and mitigate them.
In her first major speech to a U.S. audience after the U.S. presidential election, European Commission President Ursula von der Leyen laid out priority areas for transatlantic cooperation. She proposed building a new relationship between Europe and the United States, one that would encompass transatlantic coordination on digital technology issues, including working together on global standards for regulating artificial intelligence (AI) aligned with EU values. A reference to cooperation on standards for AI was included in the New Transatlantic Agenda for Global Change issued by the Commission on December 2, 2020. In remarks to Parliament on January 22, 2021, President von der Leyen called for "creating a digital economy rule book" with the United States that is "valid worldwide." Some would say Europe's new outreach on issues of tech governance and the suggestion of establishing an "EU-U.S. Trade and Technology Council" is incongruous with the current regulatory war being waged against ...
Digital Secretary Oliver Dowden revealed the move as he set out his Ten Tech Priorities to power a golden age of tech in the UK this week. "Unleashing the power of AI is a top priority in our plan to be the most pro-tech government ever. The UK is already a world leader in this revolutionary technology and the new AI Strategy will help us seize its full potential - from creating new jobs and improving productivity to tackling climate change and delivering better public services," he said. The Government will build on the UK's strong foundations put in place through the AI Sector Deal to develop and deliver an AI Strategy that is both globally ambitious and socially inclusive. It will consider recommendations from the AI Council, an independent expert committee that advises the government, which published its AI Roadmap in January, alongside input from industry, academia and civil society.