Accountable AI
What Stanford's recent AI conference reveals about the state of AI accountability
As AI adoption continues to ramp up exponentially, so too does the discussion around -- and concern for -- accountable AI. While tech leaders and field researchers understand the importance of developing AI that is ethical, safe and inclusive, they still grapple with regulatory frameworks and with concepts such as "ethics washing" or "ethics shirking" that diminish accountability. Perhaps most importantly, the concept is not yet clearly defined. Many sets of suggested guidelines and tools exist -- from the U.S. National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework to the European Commission's High-Level Expert Group on AI, for example -- but they are not cohesive and are often vague and overly complex.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
Council Post: How To Build Responsible AI, Step 1: Accountability
The development, deployment and operation of irresponsible AI has done, and will continue to do, significant damage to individuals, businesses, markets, societies and economies of every scale. Now is the time to be explicit about the processes and systems that we create. In a series of articles, I will explore each of these elements and its crucial role in building the responsible AI of the future. The first component of responsible AI that I will address, in this second article in the series, is accountability, which is especially important in areas such as supply chain, finance, national security and intelligence, cyberbalkanization, data protection, data destruction and data/algorithm aggregation. Rather than assume we all mean the same thing when we use the term "accountability," I will suggest three critical features that distill the term and help us understand it beyond its etymology.
#SXSW20 #PanelPicker session WHEN IS #AI NOT A GOOD IDEA? Karl Smith
The concept of open and accountable AI is starting to become the voice of reason in #society, and #regulators, #policy makers and #rights organisations are gaining ground towards #transparency. The panel includes both technology- and humanity-focused professionals who will open the discussion on the ethics and transparency of AI algorithms, their value points in society and the risks they both resolve and create. Will this enable or disable this machine capability, or cause a new direction in algorithm design engaged with principles around controlling and limiting cultural bias and recording point-in-time contextual outputs to validate all decisions? For the last 13 years, Paul Kiernan has worked at C-level with global professional services firms in Sydney & London, providing market intelligence and managing Mergers & Acquisitions.
- North America > United States > New York (0.07)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.07)
- Information Technology (0.75)
- Government (0.58)
- Banking & Finance (0.52)
Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure
Nushi, Besmira (Microsoft Research) | Kamar, Ece (Microsoft Research) | Horvitz, Eric (Microsoft Research)
As machine learning systems move from computer-science laboratories into the open world, their accountability becomes a high priority problem. Accountability requires deep understanding of system behavior and its failures. Current evaluation methods such as single-score error metrics and confusion matrices provide aggregate views of system performance that hide important shortcomings. Understanding details about failures is important for identifying pathways for refinement, communicating the reliability of systems in different settings, and for specifying appropriate human oversight and engagement. Characterization of failures and shortcomings is particularly complex for systems composed of multiple machine learned components. For such systems, existing evaluation methods have limited expressiveness in describing and explaining the relationship among input content, the internal states of system components, and final output quality. We present Pandora, a set of hybrid human-machine methods and tools for describing and explaining system failures. Pandora leverages both human and system-generated observations to summarize conditions of system malfunction with respect to the input content and system architecture. We share results of a case study with a machine learning pipeline for image captioning that show how detailed performance views can be beneficial for analysis and debugging.
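The abstract's central point -- that single-score metrics hide failure modes which only surface when performance is broken down by input conditions -- can be illustrated with a minimal sketch. This is not the Pandora tool itself; the data and the `slice_accuracy` helper are hypothetical, showing only the general idea of slice-based failure characterization for, say, an image-captioning pipeline:

```python
# Sketch of slice-based failure analysis (hypothetical data, NOT the
# Pandora tool): aggregate accuracy can look acceptable while one
# input condition fails badly.
from collections import defaultdict

# (input_condition, correct?) outcomes from a hypothetical captioning run
predictions = [
    ("indoor", True), ("indoor", True), ("indoor", True), ("indoor", False),
    ("outdoor", True), ("outdoor", True), ("outdoor", True), ("outdoor", True),
    ("low_light", False), ("low_light", False), ("low_light", True), ("low_light", False),
]

def slice_accuracy(results):
    """Group outcomes by input condition and report per-slice accuracy."""
    buckets = defaultdict(list)
    for condition, correct in results:
        buckets[condition].append(correct)
    return {cond: sum(vals) / len(vals) for cond, vals in buckets.items()}

overall = sum(correct for _, correct in predictions) / len(predictions)
per_slice = slice_accuracy(predictions)

print(f"overall accuracy: {overall:.2f}")  # a single score hides the problem
for condition, acc in sorted(per_slice.items()):
    print(f"  {condition}: {acc:.2f}")     # the low_light slice exposes a failure mode
```

Here the aggregate score of 0.67 masks a low-light slice at 0.25 accuracy; characterizing failures against input conditions (and, in real systems, against the internal states of individual pipeline components) is what methods like Pandora aim to support.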
Towards accountable AI in Europe? - The Alan Turing Institute
We are living in the age of Big Data, but data is useless if we do not have algorithms that help us to interpret it. Algorithms are increasingly used in both the public and the private sectors, and across industries: for financial trading, recruiting decisions (hiring, firing, and promotions), and for setting insurance premiums. Algorithms help decide whether individuals are desirable candidates for insurance, eligible for a loan or a mortgage, or should be admitted to university. The criminal justice system uses algorithms for sentencing, to decide whether someone should be granted parole, and to estimate the probability that someone will commit a crime. Algorithms can – if well-designed and fed unbiased data – make more accurate, efficient, and fairer decisions than humans.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance (1.00)
You better explain yourself, mister: DARPA's mission to make an accountable AI
The US government's mighty DARPA last year kicked off a research project designed to make systems controlled by artificial intelligence more accountable to their human users. The Defense Advanced Research Projects Agency, to give this $2.97bn agency its full name, is the Department of Defense's body responsible for emerging technology for use by the US armed forces. Significantly, it was DARPA's early funding of the packet-switching Advanced Research Projects Agency Network (ARPANET) more than 40 years ago that helped bring about the internet. Coming bang up to date, the issue at the heart of the Explainable Artificial Intelligence (XAI) programme is that AI is starting to extend into many areas of everyday life, yet the internal workings of such systems are often opaque and could be concealing flaws in their decision-making processes. The field of AI has made great strides in the last several years, thanks to developments in machine learning algorithms and deep learning systems based on artificial neural networks (ANNs).
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.05)
- North America > United States > Tennessee (0.05)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)