Responsible Use


Back to Basics: Revisiting the Responsible AI Framework

#artificialintelligence

In the last few months, we have seen promising developments in establishing safeguards for AI. These include a landmark EU regulation proposal on AI that prohibits unacceptable AI uses and imposes mandatory disclosures and evaluations for high-risk systems, an algorithmic transparency standard launched by the UK government, mandatory audits for AI hiring tech in New York City, and a draft AI Risk Management Framework developed by NIST at the request of the US Congress, to name a few. That being said, we are still in the early days of AI regulation, and there is a long road ahead to minimize the harms that algorithmic systems can cause. In this article series, I explore different topics related to the responsible use of AI and its societal implications.


Monitaur launches GovernML to manage AI data lifecycle

#artificialintelligence

Artificial intelligence (AI) governance software provider Monitaur has launched GovernML, the latest addition to its ML Assurance Platform, for general availability. Designed for enterprises committed to the responsible use of AI, GovernML is offered as a web-based software-as-a-service (SaaS) application that enables enterprises to establish and maintain a system of record of model governance policies, ethical practices, and model risk across their entire AI portfolio, CEO and founder Anthony Habayeb told VentureBeat. As AI deployment accelerates across industries, so have efforts to establish regulations and internal standards that ensure the fair, safe, transparent, and responsible use of AI and the often-personal data behind it, Habayeb said. "Good AI needs great governance," Habayeb said.
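To make the idea of a governance "system of record" concrete, here is a minimal sketch of what one entry in such a record might contain; the GovernanceRecord structure, its field names, and the staleness check are illustrative assumptions, not GovernML's actual data model or API.

```python
# Hypothetical sketch of a model-governance record; the fields below are
# illustrative assumptions, not GovernML's actual schema or API.
from dataclasses import dataclass
from datetime import date

@dataclass
class GovernanceRecord:
    model_name: str
    owner: str              # accountable person or team
    policies: list[str]     # governance policies the model is subject to
    risk_rating: str        # e.g. "low", "medium", "high"
    last_reviewed: date     # most recent governance review

portfolio = [
    GovernanceRecord(
        model_name="credit-scoring-v3",
        owner="model-risk-team",
        policies=["fair-lending-review", "explainability-report"],
        risk_rating="high",
        last_reviewed=date(2022, 3, 15),
    ),
]

# Flag high-risk models whose last governance review is over a year old.
stale = [r for r in portfolio
         if r.risk_rating == "high"
         and (date.today() - r.last_reviewed).days > 365]
for record in stale:
    print(f"{record.model_name}: governance review overdue")
```

The point of a structure like this is that governance status becomes queryable across the whole portfolio rather than living in scattered documents.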


AI Regulation in Finance: Where Next?

#artificialintelligence

In the last three years, financial regulators worldwide have been actively highlighting the need for responsible use of Artificial Intelligence/Machine Learning (AI/ML). What have they been saying? What common underlying concerns and regulatory themes are emerging? What can the industry expect in the coming years, and how can it start responding now? To date, no major financial regulator has introduced explicit regulations dedicated to the use of AI/ML.


Why Are We Failing at the Ethics of AI?

#artificialintelligence

As you read this, AI systems and algorithmic technologies are being embedded and scaled far more quickly than existing governance frameworks (i.e., the rules of the road) are evolving. While it is clear that AI systems offer opportunities across many areas of life, a responsible approach to their ethics and governance has yet to be realized. This should be setting off alarm bells across society. The current inability of actors to meaningfully address AI ethics has created a perfect storm: one in which AI is exacerbating existing inequalities while simultaneously creating new systemic issues at a rapid pace. But why hasn't this issue been effectively addressed?


Artificial Intelligence toolkit helps organisations overcome implementation challenges - Workplace Insight

#artificialintelligence

The World Economic Forum has published "Human-Centred AI for Human Resources: A Toolkit for Human Resources Professionals" to scale the responsible use of artificial intelligence in Human Resources (HR). The toolkit includes a guide covering key topics and steps in the responsible use of AI-based HR tools, and two checklists: one focused on strategic planning and the other on the adoption of a specific tool. There are now 250 HR tools that use AI, according to the paper. These tools aim to manage talent in ways that are more effective, fair, and efficient. However, the use of AI in HR raises concerns, given AI's potential to create problems in areas such as data privacy and bias.


NATO ups the ante on disruptive tech, artificial intelligence

#artificialintelligence

NATO has officially kicked off two new efforts meant to help the alliance invest in critical next-generation technologies and avoid capability gaps between its member nations. For months, officials have laid the groundwork to launch a new Defence Innovation Accelerator for the North Atlantic -- nicknamed DIANA -- and establish an innovation fund to support private companies developing dual-use technologies. Both measures were formally agreed upon during NATO's meeting of defense ministers last month in Brussels, said Secretary-General Jens Stoltenberg. Allies signed the agreement to establish the NATO Innovation Fund and launch DIANA on Oct. 22, the final day of the two-day conference, Stoltenberg said in a media briefing that day. He expects the fund to invest €1 billion (U.S. $1.16 billion) in companies and academic partners working on emerging and disruptive technologies.


Societe Generale accelerates data and artificial intelligence strategy

#artificialintelligence

Societe Generale (SocGen) is increasing its focus on becoming more data-driven, with greater use of artificial intelligence (AI) for the benefit of customers, regulators and staff. By using AI, the French investment bank wants to make existing business models more efficient and effective while also creating new ones. It already has substantial human resources in place, with 1,000 data experts and 65 chief data officers managing 330 AI and data use cases across the business. Of these use cases, 170 involve AI, such as facial and biometric recognition, automatic credit ratings and analysis tools for market activities. "SocGen has been engaged in a cultural and technological transformation for many years and has laid the necessary key technological and cultural foundation to reinforce its digital maturity in all the group's businesses, functions and geographies," said Frédéric Oudéa, CEO of the bank.


Summary of the NATO Artificial Intelligence Strategy

#artificialintelligence

A. Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.

B. Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability.

C. Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level.

D. Reliability: AI applications will have explicit, well-defined use cases.


How to Build Accountability into Your AI

#artificialintelligence

When it comes to managing artificial intelligence, there is no shortage of principles and concepts aiming to support fair and responsible use. But organizations and their leaders are often left scratching their heads when facing hard questions about how to responsibly manage and deploy AI systems today. That's why, at the U.S. Government Accountability Office, we've recently developed the federal government's first framework to help assure accountability and responsible use of AI systems. The framework defines the basic conditions for accountability throughout the entire AI life cycle -- from design and development to deployment and monitoring. It also lays out specific questions to ask, and audit procedures to use, when assessing AI systems along the following four dimensions: 1) governance, 2) data, 3) performance, and 4) monitoring.
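To illustrate how a framework like this could be operationalized, here is a minimal sketch of an audit checklist keyed by the four dimensions and the lifecycle stages named above; the sample questions, the AuditItem structure, and the field names are illustrative assumptions, not the GAO's actual audit procedures.

```python
# Illustrative sketch only: the dimensions and lifecycle stages follow the
# GAO framework as summarized above; the sample questions and this data
# model are hypothetical, not the GAO's actual audit procedures.
from dataclasses import dataclass, field

LIFECYCLE_STAGES = ["design", "development", "deployment", "monitoring"]
DIMENSIONS = ["governance", "data", "performance", "monitoring"]

@dataclass
class AuditItem:
    dimension: str       # one of DIMENSIONS
    stage: str           # one of LIFECYCLE_STAGES
    question: str        # question the assessor asks
    evidence: list[str] = field(default_factory=list)  # artifacts collected

def open_questions(items: list[AuditItem]) -> list[AuditItem]:
    """Return audit items that still lack supporting evidence."""
    return [item for item in items if not item.evidence]

checklist = [
    AuditItem("governance", "design",
              "Are roles and responsibilities for the system documented?"),
    AuditItem("data", "development",
              "Is the provenance of training data recorded and assessed?"),
    AuditItem("performance", "deployment",
              "Are accuracy metrics defined and tracked against targets?"),
    AuditItem("monitoring", "monitoring",
              "Is model drift reviewed on a defined schedule?"),
]

print(f"{len(open_questions(checklist))} items still need evidence")
```

Representing the checklist as data rather than prose makes it straightforward to track which questions remain unanswered at each stage of the life cycle.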


Report finds startling disinterest in ethical, responsible use of AI among business leaders

#artificialintelligence

A new report from FICO and Corinium has found that many companies are deploying various forms of AI throughout their businesses with little consideration for the ethical implications of potential problems. The increasing scale of AI is raising the stakes for major ethical questions. The last decade has produced hundreds of examples of AI being used by companies in disastrous ways, from facial recognition systems unable to discern darker-skinned faces, to healthcare apps that discriminate against African American patients, to recidivism calculators used by courts that skew against certain races. Despite these examples, FICO's State of Responsible AI report shows business leaders are putting little effort into ensuring that the AI systems they use are both fair and safe for widespread use. The survey, conducted in February and March, features the insights of 100 AI-focused leaders from the financial services sector, with 20 executives each hailing from the US, Latin America, Europe, the Middle East and Africa, and the Asia Pacific region.