Towards Equitable Agile Research and Development of AI and Robotics
Hundt, Andrew, Schuller, Julia, Kacianka, Severin
Machine Learning (ML) and 'Artificial Intelligence' ('AI') methods tend to replicate and amplify existing biases and prejudices, as do robots with AI. For example, robots with facial recognition have failed to identify Black women as human, while others have categorized people, such as Black men, as criminals based on appearance alone. A 'culture of modularity' means harms are perceived as 'out of scope', or someone else's responsibility, throughout employment positions in the 'AI supply chain'. Incidents are routine enough (incidentdatabase.ai lists over 2000 examples) to indicate that few organizations are capable of completely respecting people's rights; meeting claimed equity, diversity, and inclusion (EDI or DEI) goals; or recognizing and then addressing such failures in their organizations and artifacts. We propose a framework for adapting widely practiced Research and Development (R&D) project management methodologies to build organizational equity capabilities and better integrate known evidence-based best practices. We describe how project teams can organize and operationalize the most promising practices, skill sets, organizational cultures, and methods to detect and address rights-based fairness, equity, accountability, and ethical problems as early as possible, when they are often less harmful and easier to mitigate, and then monitor for unforeseen incidents so they can be addressed adaptively and constructively. Our primary example adapts an Agile development process based on Scrum, one of the most widely adopted approaches to organizing R&D teams. We also discuss limitations of our proposed framework and future research directions.
How AI Can Make Cancer Treatment More Equitable
Many are aware of the Cancer Moonshot--an ambitious and hopeful initiative of the U.S. government to reduce cancer-related death rates by 50% by the year 2047. It will take an army to achieve this goal, composed of the brightest minds and biggest hearts in healthcare, science, and technology. Many parties will be involved--the federal government, healthcare providers, researchers, patients, caregivers, and advocates, among others in both the public and private sectors. One of the most pivotal tools that can help propel us toward this lofty goal is artificial intelligence (AI), which is poised to revolutionize cancer treatment. The moonshot plan identifies five priority areas, all of which AI has the potential to enhance. Two areas in particular lend themselves to AI: the call to "deliver the latest cancer innovations to patients and communities" and the aim of enhancing "the oncology model to place cancer patients at the center of decision-making."
How Small Businesses Can Help Build More Equitable A.I.
Failing to include a diverse group of people when developing A.I. has serious risks, and leaders of tech companies can play an outsize role in solving the issue. Angle Bush, the founder of Black Women in A.I., which provides Black women in tech with mentorship and educational programs, recently explained the severity of the situation during an interview with NPR. In the segment, Bush pointed to a case in which a 43-year-old Black man in Detroit was wrongfully arrested on a shoplifting charge in 2020 because of inaccurate facial recognition software.
Three Steps to Make Tech Companies More Equitable
No technology comes into being by accident, and inequalities are often baked in from the outset. Algorithmic bias, for example, reflects the biases and prejudices of a system's creators. Embedded inequity is rearing its ugly head in everything from video doorbells to driverless cars to smart devices to tech in health care and criminal justice. The long-term implications are far-reaching and scary. Big Tech is belatedly waking up to this reality, as seen when the likes of IBM and Amazon announced that they were pausing facial recognition deployment because it can amplify bias in police surveillance.