Time to regulate AI that interprets human emotions

#artificialintelligence

During the pandemic, technology companies have been pitching their emotion-recognition software for monitoring workers and even children remotely. Take, for example, a system named 4 Little Trees. Developed in Hong Kong, the program claims to assess children's emotions while they do classwork. It maps facial features to classify each pupil's emotional state into categories such as happiness, sadness, anger, disgust, surprise and fear. It also gauges 'motivation' and forecasts grades.
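
As a rough illustration of the kind of pipeline such products describe, here is a minimal, hypothetical sketch in Python: facial-landmark features go in, a per-category emotion estimate comes out. The random features, the RandomForestClassifier, and every number below are our own stand-ins for illustration; nothing here reflects how 4 Little Trees actually works.

```python
# Hypothetical sketch only: a facial-features-to-emotion classifier of the
# general kind described above. The data is random noise, so the "predictions"
# are meaningless; this shows the shape of the pipeline, not 4 Little Trees.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "surprise", "fear"]

rng = np.random.default_rng(0)
# Pretend each face is summarised by 68 landmark (x, y) pairs -> 136 features.
X_train = rng.normal(size=(600, 136))
y_train = rng.integers(0, len(EMOTIONS), size=600)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score one new "face" and report a probability per emotion category.
face = rng.normal(size=(1, 136))
for label, p in zip(EMOTIONS, clf.predict_proba(face)[0]):
    print(f"{label}: {p:.2f}")
```

Note that even a real system of this kind outputs only a probability per predefined category, which is precisely why critics question whether such scores track anything as rich as a child's actual emotional state.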


What is artificial intelligence?

#artificialintelligence

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision. As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often, what they refer to as AI is simply one component of it, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No single programming language is synonymous with AI, but a few, including Python, R and Java, are popular.
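
Since the article singles out machine learning as one component of AI and names Python among the popular languages, here is a minimal sketch of what writing and training a machine-learning model looks like in practice. The scikit-learn library, the toy Iris dataset, and the logistic-regression model are our illustrative choices, not anything prescribed by the article.

```python
# Illustrative sketch: training and evaluating a simple machine-learning
# model in Python, the kind of component often marketed as "AI".
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small labelled dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# "Training" fits the model's parameters to the labelled examples.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate on data the model has never seen.
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```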


La veille de la cybersécurité

#artificialintelligence

Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there can also be a role for government in making it easier to use AI beneficially. In this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has now funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this line of funding for ethical AI.


Trust in EU approach to artificial intelligence risks being undermined by new AI rules

#artificialintelligence

The EU is winning the battle for trust among artificial intelligence (AI) researchers, academics on both sides of the Atlantic say, bolstering the Commission's ambitions to set global standards for the technology. But some fear the EU risks squandering this confidence by imposing ill-considered rules in its recently proposed Artificial Intelligence Act, which some academics say are at odds with the realities of AI research. "We do see a push for trustworthy and transparent AI also in the US, but, in terms of governance, we are not as far [ahead] as the EU in this regard," said Bart Selman, president of the Association for the Advancement of Artificial Intelligence (AAAI) and a professor at Cornell University. The highly international community of AI researchers is "aware that AI developments in the US are dominated by business interests, and in China by the government interest," said Holger Hoos, professor of machine learning at Leiden University and a founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE). EU policymaking, though slower, incorporated "more voices, and more perspectives" than the more centralised processes in the US and China, he argued, with the EU having taken strong action on privacy through the General Data Protection Regulation, which came into effect in 2018.


The ethics of AI: Should we put our faith in Big Tech?

#artificialintelligence

In September last year, Google's cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to. It turned down the client's idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender. Since early last year, Google has also blocked new AI features analysing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system. All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three US technology giants. Their vetoes, and the deliberations that led to them, were reported by Reuters for the first time and reflect a nascent industry-wide drive to balance the pursuit of lucrative AI systems with greater consideration of social responsibility.


Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet

#artificialintelligence

Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...


AI Regulation Is Coming

#artificialintelligence

For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data. People were uncomfortable with the way companies could track their movements online, often gathering credit card numbers, addresses, and other critical information. They found it creepy to be followed around the web by ads that had clearly been triggered by their idle searches, and they worried about identity theft and fraud. Those concerns led to the passage of measures in the United States and Europe guaranteeing internet users some level of control over their personal data and images, most notably the European Union's 2018 General Data Protection Regulation (GDPR). Some argue that curbing data collection and use will hamper the economic performance of Europe and the United States relative to less restrictive countries, notably China, whose digital giants have thrived with the help of ready, lightly regulated access to personal information of all sorts. Others point out that there is plenty of evidence that tighter regulation has put smaller European companies at a considerable disadvantage to deeper-pocketed U.S. rivals such as Google and Amazon. But the debate is entering a new phase. As companies increasingly embed artificial intelligence in their products, services, processes, and decision-making, attention is shifting to how data is used by the software, particularly by complex, evolving algorithms that might diagnose a cancer, drive a car, or approve a loan.


War Mongering for Artificial Intelligence

#artificialintelligence

The ghost of Edward Teller must have been doing the rounds between members of the National Security Commission on Artificial Intelligence. The father of the hydrogen bomb was never one to be much bothered by the ethical niggles that came with inventing murderous technology. It was not, for instance, "the scientist's job to determine whether a hydrogen bomb should be constructed, whether it should be used, or how it should be used." Responsibility, however exercised, rested with the American people and their elected officials. The application of AI in military systems has plagued the ethicist but excited certain leaders and inventors.


Cities worldwide band together to push for ethical AI

#artificialintelligence

From traffic control and waste management to biometric surveillance systems and predictive policing models, the potential uses of artificial intelligence (AI) in cities are incredibly diverse, and could impact every aspect of urban life. In response to the increasing deployment of AI in cities – and the general lack of authority that municipal governments have to challenge central government decisions or legislate themselves – London, Barcelona and Amsterdam launched the Global Observatory on Urban AI in June 2021. The initiative aims to monitor AI deployment trends and promote its ethical use, and is part of the wider Cities Coalition for Digital Rights (CC4DR), which was set up in November 2018 by Amsterdam, Barcelona and New York to promote and defend digital rights. It now has more than 50 cities participating worldwide. Apart from city participants, the Observatory is also being run in partnership with UN-Habitat, a United Nations initiative to improve the quality of life in urban areas, and the research group CIDOB (Barcelona Centre for International Affairs).


Safe Transformative AI via a Windfall Clause

arXiv.org Artificial Intelligence

Society could soon see transformative artificial intelligence (TAI). Models of competition for TAI show firms face strong competitive pressure to deploy TAI systems before they are safe. This paper explores a proposed solution to this problem, a Windfall Clause, where developers commit to donating a significant portion of any eventual extremely large profits to good causes. However, a key challenge for a Windfall Clause is that firms must have reason to join one. Firms must also believe these commitments are credible. We extend a model of TAI competition with a Windfall Clause to show how firms and policymakers can design a Windfall Clause that overcomes these challenges. Encouragingly, firms benefit from joining a Windfall Clause under a wide range of scenarios. We also find that firms join the Windfall Clause more often when the competition is more dangerous. Even when firms learn each other's capabilities, they rarely wish to withdraw their support for the Windfall Clause. These three findings strengthen the case for using a Windfall Clause to promote the safe development of TAI.
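
To make the core trade-off concrete, here is a toy numerical sketch of our own, not the paper's actual game-theoretic model: a firm compares its expected profit outside and inside a Windfall Clause, where joining means donating a share of any windfall but is assumed to lower the risk of a race-driven catastrophe. All numbers below are invented for illustration.

```python
# Toy sketch (ours, not the paper's model): does joining a Windfall Clause
# pay off in expectation? Joining donates a share of any windfall profit,
# but is assumed here to lower the chance of a race-driven catastrophe
# that would wipe out profits entirely. All numbers are invented.
P_WINDFALL = 0.01            # probability the firm earns extremely large profits
WINDFALL = 1_000_000.0       # size of that windfall (arbitrary units)
DONATION_SHARE = 0.10        # fraction of the windfall donated under the clause
P_DISASTER_OUTSIDE = 0.50    # assumed catastrophe risk in an unrestrained race
P_DISASTER_INSIDE = 0.20     # assumed lower risk when competition is tempered

def expected_profit(p_disaster: float, keep_share: float) -> float:
    """Zero profit on catastrophe; otherwise the kept share of the windfall."""
    return (1.0 - p_disaster) * P_WINDFALL * WINDFALL * keep_share

outside = expected_profit(P_DISASTER_OUTSIDE, keep_share=1.0)
inside = expected_profit(P_DISASTER_INSIDE, keep_share=1.0 - DONATION_SHARE)

print(f"expected profit outside the clause: {outside:,.0f}")
print(f"expected profit inside the clause:  {inside:,.0f}")
print("joining pays off" if inside > outside else "joining does not pay off")
```

With these invented numbers, the donated share is more than offset by the lower disaster risk, which mirrors the abstract's qualitative finding that firms can benefit from joining under a wide range of scenarios.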