Last year I wrote about how AI regulations would lead to the emergence of professional AI risk managers. This has already happened in the financial sector, where regulations patterned after the Basel rules created a profession of financial risk managers. Last week, the EU published a 108-page proposal to regulate AI systems, and it will likewise give rise to professional AI risk managers. The proposal doesn't cover all AI systems, only those deemed high-risk, and the obligations vary with how risky a given system is: since systems posing unacceptable risks would be banned outright, most of the regulation concerns high-risk AI systems.
One of the priorities announced in the 2021 Examination Priorities Report of the U.S. Securities and Exchange Commission's Division of Examinations ("EXAMS") is a review of robo-advisory firms that build client portfolios with exchange-traded funds ("ETFs") and mutual funds. EXAMS notes that these clients are almost entirely retail investors whose accounts are too small to support the costs of regular human investment advisers. EXAMS believes these robo-advisor accounts pose particular risks that retail clients may well not recognize. Accordingly, it may help to reflect on the Laws of Robotics devised by science fiction author Isaac Asimov (introduced in his short stories and collected in "I, Robot" in 1950), particularly the First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Investors may not understand the risks associated with specific investments; the risk profiles of mutual funds and ETFs vary widely, from diversified to concentrated, and from simple to complex strategies.
The European Union is considering banning the use of artificial intelligence for a number of purposes, including mass surveillance and social credit scores. This is according to a leaked proposal that is circulating online, first reported by Politico, ahead of an official announcement expected next week. If the draft proposal is adopted, it would see the EU take a strong stance on certain applications of AI, setting it apart from the US and China. Some use cases would be policed in a manner similar to the EU's regulation of digital privacy under GDPR legislation. Member states, for example, would be required to set up assessment boards to test and validate high-risk AI systems.
In February, the McKinsey Global Institute predicted that 45 million Americans, one-quarter of the workforce, would lose their jobs to automation by 2030. That was up from its 2017 estimate of 39 million, an increase McKinsey attributes to the economic dislocation of COVID-19: historically, firms tend to replace some of the workers they fire during recessions with machines. Fear of robot-driven mass unemployment has become increasingly mainstream. Andrew Yang, currently leading the polls for the Democratic nomination for mayor of New York City, made it a pillar of his unorthodox 2020 presidential campaign.
Robot applications in our daily life are increasing at an unprecedented pace. As robots will soon operate "out in the wild," we must identify the safety and security vulnerabilities they will face. Robotics researchers and manufacturers focus their attention on new, cheaper, and more reliable applications, but they often disregard operability in adversarial environments, where a trusted or untrusted user can jeopardize or even alter the robot's task. In this paper, we identify a new paradigm of security threats in the next generation of robots. These threats fall beyond known hardware- or network-based attacks, and new solutions are needed to address them. They include malicious use of the robot's privileged access, tampering with the robot's sensor system, and tricking the robot's deliberation into harmful behaviors. We provide a taxonomy of attacks that exploit these vulnerabilities, with realistic examples, and we outline effective countermeasures to better prevent, detect, and mitigate them.
In her first major speech to a U.S. audience after the U.S. presidential election, European Commission President Ursula von der Leyen laid out priority areas for transatlantic cooperation. She proposed building a new relationship between Europe and the United States, one that would encompass transatlantic coordination on digital technology issues, including working together on global standards for regulating artificial intelligence (AI) aligned with EU values. A reference to cooperation on standards for AI was included in the New Transatlantic Agenda for Global Change issued by the Commission on December 2, 2020. In remarks to Parliament on January 22, 2021, President von der Leyen called for "creating a digital economy rule book" with the United States that is "valid worldwide." Some would say Europe's new outreach on issues of tech governance and its suggestion of establishing an "EU-U.S. Trade and Technology Council" are incongruous with the current regulatory war being waged against ...
Digital Secretary Oliver Dowden revealed the move this week as he set out his Ten Tech Priorities to power a golden age of tech in the UK. "Unleashing the power of AI is a top priority in our plan to be the most pro-tech government ever," he said. "The UK is already a world leader in this revolutionary technology and the new AI Strategy will help us seize its full potential - from creating new jobs and improving productivity to tackling climate change and delivering better public services." The Government will build on the UK's strong foundations put in place through the AI Sector Deal to develop and deliver an AI Strategy that is both globally ambitious and socially inclusive. It will consider recommendations from the AI Council, an independent expert committee that advises the government, which published its AI Roadmap in January, alongside input from industry, academia and civil society.
Daniel Zhang, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, James Manyika, Juan Carlos Niebles, Michael Sellitto, Yoav Shoham, Jack Clark, and Raymond Perrault
Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.
Harm wrought by AI tends to fall most heavily on marginalized communities. In the United States, algorithmic harm may lead to the false arrest of Black men, disproportionately reject female job candidates, or target people who identify as queer. In India, those harms can fall even harder on marginalized populations such as Muslim minority groups or people oppressed by the caste system. And algorithmic fairness frameworks developed in the West may not transfer directly to India or other countries in the Global South, where algorithmic fairness requires an understanding of local social structures, power dynamics, and the legacy of colonialism. That's the argument behind "De-centering Algorithmic Power: Towards Algorithmic Fairness in India," a paper accepted for publication at the Fairness, Accountability, and Transparency (FAccT) conference, which begins this week. Other works that seek to move beyond a Western-centric focus include Shinto- or Buddhism-based frameworks for AI design and an approach to AI governance based on the African philosophy of Ubuntu.
California's high poverty rate, low wages and frayed public safety net require a new "social compact" between workers, business and government, according to a report by a blue-ribbon commission that highlights the state's widening inequality. In a report released Monday, the Future of Work Commission, a 21-member body appointed by Gov. Gavin Newsom in August 2019, laid out a grim picture of the challenges facing the world's fifth-largest economy, even as it acknowledged the Golden State's technology leadership, its ethnically and culturally diverse workforce and world-class universities. "Too many Californians have not fully participated in or enjoyed the benefits of the state's broader economic success and the extraordinary wealth generated here, especially workers of color who are disproportionately represented in low-wage industries," the report says. California has the highest poverty rate in the country when accounting for the cost of living, 17.2%, according to the report. Since 2012, wages in the state have grown by 14% while home prices have increased by 68%.