AI Fairness 360
Applications and Challenges of Fairness APIs in Machine Learning Software
Das, Ajoy, Uddin, Gias, Chowdhury, Shaiful, Akhond, Mostafijur Rahman, Hemmati, Hadi
Machine learning software systems are frequently used in our day-to-day lives. Some of these systems are deployed in sensitive settings to make life-changing decisions. It is therefore crucial to ensure that these AI/ML systems do not make discriminatory decisions against any specific group or population. In that vein, various open-source bias detection and mitigation software libraries (aka API libraries) are being developed and used. In this paper, we conduct a qualitative study to understand in what scenarios these open-source fairness APIs are used in the wild, how they are used, and what challenges the developers of these APIs face while developing and adopting these libraries. We analyzed 204 GitHub repositories (from a list of 1,885 candidate repositories) that use 13 APIs developed to address bias in ML software. We found that these APIs are used for two primary purposes (i.e., learning and solving real-world problems), targeting 17 unique use cases. Our study suggests that developers are not well-versed in bias detection and mitigation; they face many troubleshooting issues and frequently ask for opinions and resources. Our findings can be instrumental for future bias-related software engineering research and for guiding educators in developing more up-to-date curricula.
India's Top Ethical AI Advocate: The Journey Of Saishruthi Swaminathan
"I was amazed at how my data can answer all my questions like a magic box that gives whatever you want." For this week's ML practitioner's series, Analytics India Magazine (AIM) got in touch with Saishruthi Swaminathan, Technical Lead and Advisory Data Scientist at IBM. Saishruthi is an active ethical AI practitioner and advocate based out of California and has been a consistent contributor to the field of Ethical AI through open-source contributions. As of today, her work has reached more than 25,000 people around the world and got them exposed to Ethical AI concepts. Saishruthi: I did my undergraduate in Electronics and Instrumentation Engineering, and Master's in Electrical Engineering, specializing in Data Science. Throughout my academic phase, I was figuring out what I liked and wanted to become.
Save AI from Human Prejudice -- Retrain Your Mind to Counter Unconscious Bias
AI, the buzzword known as Artificial Intelligence, can in practice be described as Augmented Intelligence: a tool, in development since the 1950s, that extends human capabilities to complete tasks no human or machine could accomplish alone. The world has already shifted towards building an AI-integrated future, and it is our responsibility to ensure it is heading in the right direction. "People are overlooked for a variety of biased reasons and perceived flaws; mathematics cuts straight through them" (Moneyball, 2011). Despite its immense potential, some major barriers still hinder AI's progress. Biased behavior uncovered in current AI models has made us question whether AI is the right way forward. To avoid such unfavourable outcomes and consequences, it is imperative to regulate the implementation of AI with an ethical framework that assures the key attributes: transparency, accountability, privacy, and lack of bias.
More AI Developers Focused on Engineering the Bias Out of AI - AI Trends
With AI systems today determining whether someone can get a job or a loan, it's in the interest of the company running the AI system to make sure the underlying dataset is not so biased that it leads to errors in its conclusions. Cases of biased data leading to biased results have been documented, such as in the research of Joy Buolamwini and Timnit Gebru, authors of a 2018 study that showed facial-recognition algorithms were very good at identifying white males, but recognized Black females only two thirds of the time. If law enforcement is using such a system to identify suspects, that can lead to some serious problems. The stage is set for serious effort to go into reducing biased datasets on which AI systems rely. "It's an opportunity," stated Alexandra Ebert, chief trust officer at Mostly AI, a startup focused on synthetic data based in Vienna, quoted in a recent account in IEEE Spectrum.
5 Tools to Detect and Eliminate Bias in Your Machine Learning Models
If you have ever developed or worked on any type of machine learning algorithm, then at some point you have needed to check whether your model is biased and to ensure that any bias is removed. A biased system will produce inaccurate results that could jeopardize your entire project. Machine learning algorithms have proven their value in various application fields, from medical applications to self-driving cars and weather prediction. But despite its many advantages, if your machine learning model contains any type of bias, you will not be able to harness its full potential. Many different sources can introduce bias into a machine learning model.
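Checking a model for bias typically starts with comparing outcomes across groups. A minimal sketch of two standard group-fairness metrics, statistical parity difference and disparate impact, on made-up binary predictions (the data, groups, and threshold here are illustrative assumptions, not any particular tool's API):

```python
# Hypothetical toy data: binary hiring predictions (1 = hired) split by a
# protected attribute (0 = unprivileged group, 1 = privileged group).
preds = [1, 0, 0, 1, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

def selection_rate(preds, groups, group):
    """Fraction of positive predictions within one group."""
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

unpriv_rate = selection_rate(preds, groups, 0)   # 2/4 = 0.50
priv_rate = selection_rate(preds, groups, 1)     # 3/4 = 0.75

# Statistical parity difference: 0 is perfectly fair; negative values
# mean the unprivileged group is selected less often.
spd = unpriv_rate - priv_rate

# Disparate impact: ratio of selection rates; the common "four-fifths
# rule" flags values below 0.8 as potentially discriminatory.
di = unpriv_rate / priv_rate

print(f"statistical parity difference: {spd:+.2f}")
print(f"disparate impact: {di:.2f}")
```

On this toy data the disparate impact of about 0.67 falls below the 0.8 threshold, which is exactly the kind of signal the bias-detection tools discussed here surface automatically.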
Global Big Data Conference
IBM on Monday announced it's donating a series of open-source toolkits designed to help build trusted AI to a Linux Foundation project, the LF AI Foundation. As real-world AI deployments increase, IBM says the contributions can help ensure they're fair, secure and trustworthy. "Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation," IBM said in a blog post penned by Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic. Specifically, IBM is contributing the AI Fairness 360 Toolkit, the Adversarial Robustness 360 Toolbox and the AI Explainability 360 Toolkit.
IBM continues momentum in AI and trust leadership - DevOps.com
IBM continues to serve as an industry leader in advancing what we call Trusted AI, focused on developing diverse approaches that implement elements of fairness, explainability, and accountability across the entire lifecycle of an AI application. Under our Trusted AI efforts, IBM released the AI Fairness 360 toolkit (AIF360) in 2018: an extensible, open-source toolkit that can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. It contains over 70 fairness metrics and 11 state-of-the-art bias mitigation algorithms developed by the research community, and it is designed to translate algorithmic research from the lab into actual practice in domains as wide-ranging as finance, human capital management, healthcare, and education. Now, IBM is making AIF360 accessible to a wider range of developers with two new additions: compatibility with scikit-learn and with R. AI fairness is an important topic as machine learning models are increasingly used for high-stakes decisions. Machine learning discovers and generalizes patterns in data and could therefore replicate the systematic advantages of privileged groups.
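One of AIF360's preprocessing mitigation algorithms is reweighing, which assigns each (group, label) combination a weight so that protected attribute and outcome become statistically independent in the weighted data. The weighting scheme (due to Kamiran and Calders) can be sketched in plain Python; the data below is made up for illustration and this is not the library's own implementation:

```python
from collections import Counter

# Toy dataset of (protected_group, label) pairs: group 0 rarely receives
# the favorable label 1, while group 1 usually does.
rows = [(0, 1), (0, 0), (0, 0), (0, 0), (1, 1), (1, 1), (1, 1), (1, 0)]

n = len(rows)
group_counts = Counter(g for g, _ in rows)   # N_g
label_counts = Counter(y for _, y in rows)   # N_y
pair_counts = Counter(rows)                  # N_{g,y}

# w(g, y) = (N_g * N_y) / (N * N_{g,y}): the expected count of a pair
# under independence, divided by its observed count. Underrepresented
# pairs (e.g. unprivileged group with the favorable label) get w > 1,
# overrepresented pairs get w < 1.
weights = {
    (g, y): (group_counts[g] * label_counts[y]) / (n * pair_counts[(g, y)])
    for (g, y) in pair_counts
}

for pair, w in sorted(weights.items()):
    print(pair, round(w, 3))
```

Training a classifier with these instance weights (most learners accept a `sample_weight` argument) then counteracts the dataset's group-label imbalance without changing any labels or features.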
Google's New ML Fairness Gym To Track Down Bias In AI
Human societies are extremely complex. Cultural, racial, and geographical differences around the globe, together with the lack of curated data, make 'fairness' in technology a huge challenge. Now, in an attempt to track the long-term societal impacts of artificial intelligence, Google researchers recently released ML-fairness-gym, a machine learning fairness simulation toolkit. They built it on top of OpenAI's Gym, a toolkit for developing and comparing reinforcement learning algorithms that is compatible with any numerical computation library, such as TensorFlow or Theano.
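The Gym-style interface such simulation toolkits build on is a simple reset/step loop. A self-contained sketch with a toy lending environment (the environment, scores, dynamics, and policy below are invented for illustration and are not ML-fairness-gym's actual API):

```python
class ToyLendingEnv:
    """A hypothetical Gym-style environment: reset() returns an initial
    observation; step(action) returns (observation, reward, done, info)."""

    # Synthetic applicant credit scores, visited in order.
    SCORES = [0.9, 0.3, 0.7, 0.2, 0.8, 0.55, 0.4, 0.95]

    def reset(self):
        self.t = 0
        return self.SCORES[0]

    def step(self, action):
        score = self.SCORES[self.t]
        # Deterministic toy dynamics: action 1 approves the loan, and an
        # approved applicant repays iff their score is at least 0.6.
        reward = (1.0 if score >= 0.6 else -1.0) if action == 1 else 0.0
        self.t += 1
        done = self.t >= len(self.SCORES)
        next_score = self.SCORES[self.t % len(self.SCORES)]
        return next_score, reward, done, {}

env = ToyLendingEnv()
state, done, total = env.reset(), False, 0.0
while not done:
    action = 1 if state > 0.5 else 0          # naive threshold policy
    state, reward, done, _ = env.step(action)
    total += reward
print(f"total reward: {total}")               # 3.0 for this toy run
```

Running a lending or hiring policy inside such a loop for many steps is what lets a fairness gym expose long-term effects, e.g. how today's approval decisions reshape tomorrow's applicant pool, that single-shot fairness metrics cannot capture.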
Machine learning and bias
Bias is a prejudice for or against a person, group, or thing that is considered to be unfair. As machine learning becomes a more integral part of our lives, the question becomes: will it include bias? In this article, I'll dig into this question and its impact, and look at ways of eliminating bias from machine learning models. Machine learning has shown great promise in powering self-driving cars, accurately recognizing cancer in radiographs, and predicting our interests based on past behavior (to name just a few). But along with the benefits of machine learning come challenges.
Fair and Equitable: How IBM Is Removing Bias from AI - DZone AI
As more apps come to market that rely on Artificial Intelligence, software developers and data scientists can unwittingly (or perhaps even knowingly) inject their personal biases into these solutions. This can cause a variety of problems ranging from a poor user experience to major errors in critical decision-making. We at IBM have created a solution specifically to address AI bias. Because flaws and biases may not be easy to detect without the right tool, IBM is deeply committed to delivering services that are unbiased, explainable, value-aligned and transparent. Thus, we are pleased to back up that commitment with the launch of AI Fairness 360, an open-source library to help detect and remove bias in Machine Learning models and data sets.