

Trust, Governance, and AI Decision Making

Communications of the ACM

IBM's Global Leader on Responsible AI and AI Governance, Francesca Rossi, arrived at her current area of focus after a 2014 sabbatical at the Harvard Radcliffe Institute, which inspired her to think beyond her training as an academic researcher and incorporate both humanistic and technological perspectives into the development of AI systems. In the intervening years, she helped build IBM's internal AI Ethics Board and foster external partnerships to shape best practices for responsible AI. Here, we talk about trust, governance, and what these issues have to do with AI decision making. The ethical issues around the use of AI have evolved with the technology's capabilities. Traditional machine learning approaches introduced issues like fairness, explainability, privacy, and transparency.


Here's Why Businesses Are Having A Tumultuous Love-Hate Relationship With AI Ethics Boards

#artificialintelligence

AI Ethics advisory boards are essential, but they also require focus and attention, or else they can fall apart, to the detriment of all concerned. Should a business establish an AI Ethics advisory board? You might be surprised to learn that this is not an easy yes-or-no answer. Before I get into the complexities underlying the pros and cons of putting in place an AI Ethics advisory board, let's make sure we are all on the same page as to what an AI Ethics advisory board consists of and why it has risen to headline-level prominence. As everyone knows, Artificial Intelligence (AI) and the practical use of AI for business activities have gone through the roof as a must-have for modern-day companies. You would be hard-pressed to argue otherwise. To some degree, the infusion of AI has made products and services better, and at times has lowered the costs of providing those products and services. A long list of efficiency and effectiveness gains can potentially be attributed to the sensible and appropriate application of AI.


New AI ethics advisory board will deal with challenges

#artificialintelligence

A global AI ethics advisory board aims to fill the need for guidance among organizations that lack the means or ability to provide ethics oversight of the AI technology they use. The board will be housed at the Institute for Experiential AI at Northeastern University in Boston, an academic organization that creates AI products using machine learning as an extension of human intelligence. Introduced on July 28, the board consists of 44 experts from multiple disciplines across the AI industry and academia. Board members will meet twice a year to discuss ethical questions surrounding AI. Some of the members will review applications submitted by organizations that want their products evaluated for ethical guidance. The new group is similar to an Institutional Review Board, which is federally mandated in fields such as healthcare, biomedical research, and clinical trials.


How to Prevent AI Dangers With Ethical AI

#artificialintelligence

After widespread protests against racism in the U.S., tech giants Microsoft, Amazon, and IBM publicly announced they would no longer allow police departments access to their facial recognition technology. Artificial intelligence (AI) can be prone to errors, particularly in recognizing people of color and members of other underrepresented groups. Any organization developing or using AI solutions needs to be proactive in ensuring that AI dangers don't jeopardize its brand, draw regulatory action, lead to boycotts, or destroy business value. One step is to install an external AI ethics board to prevent, not just mitigate, AI dangers. Microsoft President Brad Smith was widely quoted as saying his company wouldn't sell facial-recognition technology to police departments in the U.S. "until we have a national law in place, grounded in human rights, that will govern this technology." So, in the absence of highly rigorous institutional protections against AI dangers, what can organizations do themselves to guard against them?



Artificial Intelligence: issues of ethics and morality - Cities Today - Connecting the world's urban leaders

#artificialintelligence

Adding cognitive abilities to a machine may strike many as the background plot of every science-fiction movie. However, the debate around the future and limitations of Artificial Intelligence (AI) has existed for decades in the world of computer automation, especially with AI being a rapidly growing trend in emergent technologies. AI algorithms are already in use in modern society and are making human life easier; technologies such as voice recognition, car navigation systems, chatbots, social networking, purchase suggestions, robotics in healthcare, and many more rely on these algorithms to perform the tasks they were specifically designed to accomplish. So far, these technologies are viewed positively by smart-tech enthusiasts who believe that AI can be developed even further for the greater good. However, the question many wary observers pose is: will researchers and scientists develop artificial intelligence to the point where humans lose the ability to understand and control a super-intelligent machine? Although we are still far from creating an AI technology that surpasses the capacity of the human brain, the current discussion mostly focuses on ethics, morality, and limitations.


Google's brand-new AI ethics board is already falling apart

#artificialintelligence

Just a week after it was announced, Google's new AI ethics board is already in trouble. The board, founded to guide "responsible development of AI" at Google, would have had eight members and met four times over the course of 2019 to consider concerns about Google's AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. Of the eight people listed in Google's initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won't serve, and two others are the subject of petitions calling for their removal -- Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James's removal.


Google pulls the plug on an AI ethics board it founded LAST WEEK

#artificialintelligence

Google has caved to pressure from its staff and abandoned a new AI ethics panel after hundreds demanded that conservative members of the board be removed for their views. The search giant announced last week that it was setting up a new board to tackle moral issues surrounding its use of the technology. It hoped to avoid controversies by drawing on a broad spectrum of expertise to inform its future decisions, but the move has ironically stirred up a debacle of its own. Eight experts from outside the company were recruited, and employees at the traditionally liberal-leaning firm took issue with two of the appointees. More than 1,000 of its workers signed an open letter objecting to specific board members, who they say are 'anti-trans' and pro-military drones.


Google dissolves newly formed AI ethics board

Engadget

Google's Advanced Technology External Advisory Council was supposed to oversee the company's work on artificial intelligence and ensure it doesn't cross any lines. Now the council won't be able to do any of that, because the tech giant has officially cancelled it just over a week after it was announced. According to Vox, the project was falling apart from the start due to Google's decision to name controversial figures as members of the board. The most problematic of them was perhaps Kay Coles James, the president of the Heritage Foundation, which has long advocated against LGBT rights. The group also has a long history of climate change denial and anti-immigrant sentiment.


Google recruits eight leading experts for its newly-founded AI ethics board

Daily Mail - Science & tech

Google has set up an external AI ethics council to steer the tech giant away from morally questionable uses of its technology and from encroaching on the privacy of its customers. It will advise the search giant on matters relating to the development and application of its artificial intelligence research. Google has been embroiled in past controversies regarding the use of its AI, as well as the way it protects the data it gathers. It established an internal AI ethics board in 2014 when it acquired DeepMind, but this has been shrouded in secrecy, with no details ever released about who it includes. The firm is a world leader in many aspects of AI, and the eight people recruited for the advisory board will 'consider some of Google's most complex challenges'.