

Responsible AI


Responsible AI in health care starts at the top -- but it's everyone's responsibility (VB Live)

#artificialintelligence

Health care's Quadruple Aim is to improve health outcomes, enhance the experiences of patients and providers, and reduce costs -- and AI can help. In this VB Live event, learn more about how stakeholders can use AI responsibly, ethically, and equitably to ensure all populations benefit. Breakthroughs in the application of machine learning and other forms of artificial intelligence (AI) in health care are rapidly advancing, creating advantages in the field's clinical and administrative realms. It's on the administrative side -- think workflows or back office processes -- where the technology has been more fully adopted. Using AI to simplify those processes creates efficiencies that reduce the amount of work it takes to deliver health care and improves the experiences of both patients and providers.


DoD Policy Calls for "Responsible AI" in Defense Procurements of Artificial Intelligence

#artificialintelligence

As the U.S. Department of Defense (DoD) seeks to increase funding for artificial intelligence (AI) technologies for defense and national security purposes, a new policy memorandum directs the DoD to take steps to ensure that AI is designed, developed, and deployed in a responsible manner. In a May 26, 2021, memorandum titled "Implementing Responsible Artificial Intelligence in the Department of Defense," Deputy Secretary of Defense Kathleen Hicks calls for the incorporation of responsible AI principles into the DoD's AI requirements and acquisition processes. Ms. Hicks wrote: "As the DoD embraces [AI], it is imperative that we adopt responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its ethical principles, including the protection of privacy and civil liberties." The memorandum outlines six "foundational tenets" for the DoD to implement "Responsible AI" across the DoD. It also reaffirms the DoD's AI Ethical Principles and confirms that they apply to all DoD AI capabilities of any scale, including AI-enabled autonomous systems.


Deputy Defense Secretary Outlines Responsible AI Tenets in New Memo

#artificialintelligence

The Joint Artificial Intelligence Center will lead implementation of responsible AI across the Defense Department, according to a new directive. In a departmentwide memo signed last week, Deputy Defense Secretary Kathleen Hicks enumerated foundational tenets for responsible AI, reaffirmed the ethical AI principles the department adopted last year, and mandated the JAIC director start work on four activities for developing a responsible AI ecosystem. "As the DoD embraces artificial intelligence (AI), it is imperative that we adopt responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its ethical principles, including the protection of privacy and civil liberties," Hicks said in the memo, which was announced June 1. "A trusted ecosystem not only enhances our military capabilities, but also builds confidence with end-users, warfighters, and the American public." Hicks assigned the JAIC director to coordinate responsible AI through a working council, which must in turn hammer out a strategy and implementation pathway, create a talent management framework, and report on how responsible AI can be integrated into acquisitions.


Speech Recognition Trends to Watch in 2021 and Beyond: Responsible AI - Rev

#artificialintelligence

Gazing at the horizon, there's no shortage of excitement in technology: the promise of a more interconnected world, greater opportunities, and the sheer wonder of what will be possible next. Of course, Automated Speech Recognition (ASR) will be a key player, but this application fits into a greater narrative that includes everything from augmented reality (AR) to quantum computing to the nearly endless uses of artificial intelligence (AI). Using technology to create a better world, however, is harder than just developing the tech. There's a lot that can go wrong, and in truth, a lot has already gone wrong. While most tech companies would rather pull the wool over their customers' eyes--your eyes--we're facing these issues head-on because we want to live in a world where tech helps instead of harms. Tech, especially AI, is a powerful tool.


Fico and Corinium survey looks at responsible AI in business - Actu IA

#artificialintelligence

FICO, known for the FICO Score, an indicator used to predict credit risk, has released a report titled "The State of Responsible AI." The document presents the results of a survey on responsible AI conducted with the help of business intelligence firm Corinium. The two organizations sought to understand what enables a company to adopt AI that is more responsible, ethical, transparent, and secure. As part of an initiative led by Corinium and FICO, the survey polled companies that use artificial intelligence on a daily basis. The objective was to better understand how companies are using AI and whether questions of ethics, responsibility, and respect for customers' interests have been taken on board by these organizations.


65% of execs can't explain how their AI models make decisions, survey finds

#artificialintelligence

Despite increasing demand for and use of AI tools, 65% of companies can't explain how their AI models make decisions or predictions. That's according to the results of a new survey from global analytics firm FICO and Corinium, which surveyed 100 C-level analytic and data executives to understand how organizations are deploying AI and whether they're ensuring AI is used ethically. "Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level," FICO chief analytics officer Scott Zoldi said in a press release. "Organizations are increasingly leveraging AI to automate key processes that -- in some cases -- are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and production model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible."
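Zoldi's call for "auditable, immutable" model governance can be made concrete with a small sketch. The code below is purely illustrative (the class, model version, and feature names are this article's assumptions, not anything from FICO's report): it hash-chains every prediction record, so altering any past decision breaks the chain and is detectable on verification.

```python
import hashlib
import json
import time

class PredictionAuditLog:
    """Append-only, hash-chained log of model decisions.

    Each record stores the inputs, output, and model version, plus a
    hash linking it to the previous record, so later tampering with
    any entry is detectable when the chain is verified.
    """

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value before any record exists

    def log(self, model_version, features, prediction):
        record = {
            "ts": time.time(),
            "model_version": model_version,
            "features": features,
            "prediction": prediction,
            "prev_hash": self._last_hash,
        }
        # Canonical serialization (sorted keys) so the hash is reproducible.
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)
        return record["hash"]

    def verify(self):
        """Recompute every hash; True only if no record was altered."""
        prev = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

In a real deployment the log would be written to durable, access-controlled storage; the point of the sketch is only that auditability is a data-structure choice, not an afterthought.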


An update on Responsible AI at LinkedIn

#artificialintelligence

At LinkedIn, our guiding principle is "Members First." It ensures we honor our responsibility to protect our members and maintain their trust in every decision we make, and puts their interests first. A key area where we apply this value in engineering is within our design process. We call this "responsible design," which means that everything we build is intended to work as part of a unified system that delivers the best member experience, provides the right protections for our members and customers, and mitigates any unintended consequences in our products. One of the core pillars of "responsible design" is "responsible AI," which follows Microsoft's Responsible AI Principles.


How Does Your AI Work? Nearly Two-Thirds Can't Say, Survey Finds

#artificialintelligence

Nearly two-thirds of C-level AI leaders can't explain how specific AI decisions or predictions are made, according to a new survey on AI ethics by FICO, which says there is room for improvement. FICO hired Corinium to query 100 AI leaders for its new study, called "The State of Responsible AI: 2021," which the credit-scoring company released today. While there are some bright spots in terms of how companies are approaching ethics in AI, the potential for abuse remains high. For example, only 22% of respondents have an AI ethics board, according to the survey, suggesting the bulk of companies are ill-prepared to deal with questions about bias and fairness. Similarly, 78% of survey-takers say it's hard to secure support from executives to prioritize ethical and responsible use of AI.


Who Is Responsible for Ethical AI?

#artificialintelligence

Five years ago, Microsoft released its infamous bot, Tay, into the world of Twitter. Tay used a machine learning algorithm to learn from interactions on the platform and then echo novel responses based on that learning. Within a short time it became obvious that Twitter is not ideal ground for unsupervised learning. It turns out that the people who scream the loudest aren't always the best teachers. Without a filtering mechanism, Tay started parroting back all kinds of racist, bigoted, and misogynistic tweets.
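The "filtering mechanism" Tay lacked can be sketched in a few lines. This is a hypothetical illustration, not Microsoft's code: a gate that scores each incoming message with any toxicity classifier (here stubbed out) and only lets messages below a threshold into the bot's training corpus.

```python
from typing import Callable, List

def make_learning_gate(toxicity_score: Callable[[str], float],
                       threshold: float = 0.5) -> Callable[[str], bool]:
    """Return a predicate deciding whether a message is safe to learn from.

    `toxicity_score` is any classifier mapping text to [0, 1]; in practice
    this would be a trained moderation model, but a word-list stub works
    for demonstration.
    """
    def safe_to_learn(message: str) -> bool:
        return toxicity_score(message) < threshold
    return safe_to_learn

def update_corpus(corpus: List[str], messages: List[str],
                  gate: Callable[[str], bool]) -> List[str]:
    """Append only gated-through messages to the bot's training corpus."""
    corpus.extend(m for m in messages if gate(m))
    return corpus
```

The design point is that the gate sits between the platform and the learner, so hostile users can flood the bot with abuse without any of it reaching the training data.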


AI Weekly: How to implement AI responsibly

#artificialintelligence

Implementing AI responsibly implies adopting AI in a manner that's ethical, transparent, and accountable as well as consistent with laws, regulations, norms, customer expectations, and organizational values. "Responsible AI" promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable. According to Boston Consulting Group (BCG), less than half of enterprises that achieve AI at scale have fully mature, responsible AI deployments. Organizations' AI programs commonly neglect the dimensions of fairness and equity, social and environmental impact, and human-AI cooperation, BCG analysts found. The Responsible AI Institute (RAI) is among the consultancies aiming to help companies realize the benefits of AI implemented thoughtfully.