BBC Radio 4 - The Reith Lectures - Reith Lectures 2021 - Living With Artificial Intelligence

#artificialintelligence

The lectures will examine what Russell argues is the most profound change in human history, as the world becomes increasingly reliant on super-powerful AI. Examining the impact of AI on jobs, military conflict and human behaviour, Russell will argue that our current approach to AI is wrong, and that if we continue down this path we will have less and less control over AI even as it has an increasing impact on our lives. How can we ensure machines do the right thing? The lectures will suggest a way forward based on a new model for AI: machines that learn about, and defer to, human preferences. The series will be held in four locations across the UK: Newcastle, Edinburgh, Manchester and London. It will be broadcast on Radio 4 and the World Service, and will also be available on BBC Sounds.


Getting Government AI Engineers to Tune into AI Ethics Seen as Challenge - AI Trends

#artificialintelligence

Engineers tend to see things in unambiguous, black-and-white terms: a choice between right and wrong, good and bad. The consideration of ethics in AI is highly nuanced, with vast gray areas, making it challenging for AI software engineers to apply in their work. That was a takeaway from a session on the Future of Standards and Ethical AI at the AI World Government conference, held in person and virtually in Alexandria, Va. this week. An overall impression from the conference is that the discussion of AI and ethics is happening in virtually every quarter of AI across the vast enterprise of the federal government, and the consistency of the points being made across all these different and independent efforts stood out. "We engineers often think of ethics as a fuzzy thing that no one has really explained," stated Beth-Anne Schuelke-Leech, an associate professor of Engineering Management and Entrepreneurship at the University of Windsor, Ontario, Canada, speaking at the Future of Ethical AI session.


What is explainable AI? Building trust in AI models

#artificialintelligence

As AI-powered technologies proliferate in the enterprise, the term "explainable AI" (XAI) has entered the mainstream vernacular. XAI is a set of tools, techniques, and frameworks intended to help users and designers of AI systems understand their predictions, including how and why the systems arrived at them. A June 2020 IDC report found that business decision-makers believe explainability is a "critical requirement" in AI. To this end, explainability has been referenced as a guiding principle for AI development at DARPA, the European Commission's High-Level Expert Group on AI, and the National Institute of Standards and Technology. Startups such as Truera are emerging to deliver "explainability as a service," and tech giants such as IBM, Google, and Microsoft have open-sourced XAI toolkits and methods.
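The piece surveys vendor toolkits rather than walking through one, but a small, self-contained example can ground the term. The Python sketch below uses scikit-learn's permutation importance, one widely used model-agnostic explanation method, on a toy dataset and model chosen purely for illustration; it does not reflect Truera's, IBM's, Google's, or Microsoft's actual toolkits. The idea: shuffle one feature at a time and measure the drop in held-out score to gauge how much the model relies on that feature.

```python
# Minimal XAI sketch (assumptions: toy dataset and model, scikit-learn's
# permutation importance standing in for a full explainability toolkit).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times on held-out data and record the score drop:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Print the five most influential features as a crude global "explanation".
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Feature attributions like these are only one ingredient of XAI, but they are often the first kind of "how did the model decide?" evidence that decision-makers ask for.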


5 Best Practices for Testing AI Applications

#artificialintelligence

In light of the April 2021 announcement of the world's first legislative framework for regulating artificial intelligence (AI), the European Artificial Intelligence Act (EU AIA), now is an opportune time for developers to revisit their strategies for testing AI applications. Incoming regulations mean that the group of stakeholders who care about your testing results just got bigger and more involved. The stakes are high, not least because companies that violate the terms of the legislation could face fines higher than those levied under the General Data Protection Regulation (GDPR). In the interest of transparency, certain types of AI systems will also have to make their accuracy metrics available to users, which adds to the pressure to get functional testing right. Following on from Applause's step-by-step guide to training and testing your AI algorithm, this article summarizes how developers should test AI applications in anticipation of the new era of AI regulation.
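As a concrete, hedged illustration of what functional testing against a published accuracy metric might look like, the pytest-style sketch below asserts that a model clears a documented accuracy threshold on held-out data. The threshold and the toy dataset and model are illustrative stand-ins, not anything prescribed by the EU AIA or by Applause's guide.

```python
# Minimal functional-test sketch: fail the build if held-out accuracy
# drops below a documented threshold. All names and numbers here are
# hypothetical stand-ins, not requirements from the EU AIA.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.90  # hypothetical figure you would publish to users


def test_model_meets_published_accuracy():
    # In a real pipeline the evaluation set would be frozen and versioned;
    # a toy dataset and model keep this sketch self-contained and runnable.
    X, y = load_iris(return_X_y=True)
    X_train, X_eval, y_train, y_eval = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_eval, model.predict(X_eval))
    assert accuracy >= MIN_ACCURACY, (
        f"accuracy {accuracy:.3f} is below the published threshold "
        f"{MIN_ACCURACY:.2f}"
    )
```

Running a check like this under pytest on every change turns a transparency requirement into an ordinary, repeatable CI gate.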


AI Weekly: UN recommendations point to need for AI ethics guidelines

#artificialintelligence

The U.N.'s Educational, Scientific, and Cultural Organization (UNESCO) this week approved a series of recommendations for AI ethics, which aim to recognize that AI can "be of great service" but also raise "fundamental … concerns." UNESCO's 193 member countries, including Russia and China, agreed to conduct AI impact assessments and to put in place "strong enforcement mechanisms and remedial actions" to protect human rights. "The world needs rules for artificial intelligence to benefit humanity. The recommendation[s] on the ethics of AI is a major answer," UNESCO chief Audrey Azoulay said in a press release. "It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its … member states in its implementation and ask them to report regularly on their progress and practices."


China's Military Has a New Enemy (No, Not America)

#artificialintelligence

One word: AI. Many of the world's leaders in science and technology, including the late Stephen Hawking, Tesla founder Elon Musk, Apple co-founder Steve Wozniak and Microsoft founder Bill Gates, have expressed concern in recent years over the risks of artificial intelligence (AI), most notably its potential use in autonomous weapons. Along with many in academia and human rights groups, these science and tech visionaries have warned that AI poses a serious danger in the wrong hands. One concern is that such weapons could be designed to be extremely difficult to simply "turn off," as the Future of Life Institute noted in its report on the development of autonomous weapons platforms. That could result in a scenario straight out of science fiction, in which humans lose control of their dangerous creations. While it may not mean the world-ending scenario presented in The Terminator, even losing control of a few AI weapons temporarily could result in needless mass casualties, or worse.


What is next for AI regulation?

#artificialintelligence

In September 2021, a panel at a ForHumanity conference brought together senior guests from the US Equal Employment Opportunity Commission (EEOC), the US Government Accountability Office, the European Commission and the UK Accreditation Service. The topic was AI-specific regulation: whether it is needed, the progress being made, and the complexities of implementation. Paul Nemitz, from the European Commission, outlined the need for the proposed AI Act in the EU. Whilst GDPR regulates automated decision-making, it is focused on the use of personal data rather than the technologies themselves. In Paul's opinion this leaves a gap, and the Act is expected to pass in early 2022.


AI ethics keeps getting more complex and surprising

#artificialintelligence

Talk about international curbs on face biometrics typically ignores two massive areas: China and Africa. As China continues to use and sell facial recognition systems on a scale that is unprecedented anywhere on Earth, regulation is not something that gets meaningful debate there. And Africa continues to suffer from the narrow-mindedness of governments and industries in developed economies. An article in The Conversation touching on AI development on the continent lists three AI and machine learning programs underway in African nations, when most people living north of the equator would be surprised that any work at all is being done there. But China surprised AI ethicists worldwide this week by endorsing draft United Nations recommendations intended, among other things, to convince signatory countries to ban AI for social scoring and mass surveillance.


6 positive AI visions for the future of work

#artificialintelligence

Current trends in AI are nothing if not remarkable. Day after day, we hear stories about systems and machines taking on tasks that, until very recently, we saw as the exclusive and permanent preserve of humankind: making medical diagnoses, drafting legal documents, designing buildings, and even composing music. Our concern here, though, is with something even more striking: the prospect of high-level machine intelligence systems that outperform human beings at essentially every task. This is not science fiction. In a recent survey of leading computer scientists, the median estimate gave a 50% chance that this technology would arrive within 45 years.


A look back at the creation of LaborIA to better measure the impact of AI in companies - Actu IA

#artificialintelligence

On November 19, Elisabeth Borne, Minister of Labour, Employment and Integration, visited the Matrice innovation institute to sign an agreement with Bruno Sportisse of Inria to create a laboratory dedicated to artificial intelligence. Called LaborIA and operated by Matrice, this resource and experimentation centre will have the mission of "better understanding artificial intelligence and its effects on work, employment, skills and social dialogue in order to develop business practices and public action". According to the OECD's 2019 Employment Outlook report, medium-skilled jobs are increasingly exposed to profound transformations. Over the next 15 to 20 years, the development of automation could lead to the disappearance of 14% of current jobs, and another 32% are likely to be profoundly transformed. The report states that the future of work is in our hands and will depend, to a large extent, on the public policy choices countries make.