
ethics


Focus on the Process: Formulating AI Ethics Principles More Responsibly

#artificialintelligence

Artificial Intelligence (AI) systems have been involved in numerous scandals in recent years. Take, for instance, the COMPAS recidivism algorithm, which estimated the likelihood that a defendant would commit another crime in the future. It was widely used in the US criminal justice system to inform decisions about who could be set free at every stage of the process. In 2016, ProPublica exposed that COMPAS's predictions were biased: its mistakes favored white defendants over black defendants. Black defendants were roughly twice as likely to be labeled high risk and yet not actually reoffend.
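To make the reported disparity concrete, the sketch below computes a group-wise false positive rate, i.e. the share of defendants in each group who were labeled high risk but did not reoffend. The column names and toy records are invented for illustration only and are not the actual COMPAS data or schema.

```python
# Minimal sketch of a group-wise false positive rate check.
# Data and column names are hypothetical, not the real COMPAS dataset.
import pandas as pd

def false_positive_rate(df: pd.DataFrame, group: str) -> float:
    """Share of defendants in `group` who did NOT reoffend but were labeled high risk."""
    non_reoffenders = df[(df["race"] == group) & (df["reoffended"] == 0)]
    if len(non_reoffenders) == 0:
        return float("nan")
    return (non_reoffenders["predicted_high_risk"] == 1).mean()

# Toy records standing in for real outcomes (values invented for illustration).
records = pd.DataFrame({
    "race":                ["black", "black", "black", "white", "white", "white"],
    "predicted_high_risk": [1,       1,       0,       0,       1,       0],
    "reoffended":          [0,       1,       0,       0,       0,       0],
})

for g in ["black", "white"]:
    print(g, false_positive_rate(records, g))
```

A gap between the two printed rates is exactly the kind of disparity ProPublica reported: one group's non-reoffenders are flagged as high risk more often than the other's.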


India's Top Ethical AI Advocate: The Journey Of Saishruthi Swaminathan

#artificialintelligence

"I was amazed at how my data can answer all my questions like a magic box that gives whatever you want." For this week's ML practitioner's series, Analytics India Magazine (AIM) got in touch with Saishruthi Swaminathan, Technical Lead and Advisory Data Scientist at IBM. Saishruthi is an active ethical AI practitioner and advocate based out of California and has been a consistent contributor to the field of Ethical AI through open-source contributions. As of today, her work has reached more than 25,000 people around the world and got them exposed to Ethical AI concepts. Saishruthi: I did my undergraduate in Electronics and Instrumentation Engineering, and Master's in Electrical Engineering, specializing in Data Science. Throughout my academic phase, I was figuring out what I liked and wanted to become.


Ethics in artificial intelligence

#artificialintelligence

Whether you're chatting with an online customer service bot, using your smartphone, applying for a job or loan, scrolling on social media, or riding in an autonomous car, today we find ourselves interacting with artificial intelligence (AI) in more ways than ever before. As those interactions increase, and as AI begins to play a role in more important parts of our lives, the need for responsible AI ethics and oversight becomes more pronounced. Artificial intelligence applications are making decisions that impact our privacy, finances, safety, employment, health and much more. With so much at stake, it's important that those who utilise this advanced technology understand the risks, have a plan for mitigating them, abide by responsible ethical standards, and are held accountable if they fail to do so. Research indicates the global AI software market is expected to expand from $10 billion in 2019 to $125 billion by 2025.


Singapore touts need for AI transparency in launch of test toolkit

ZDNet

Businesses in Singapore will now be able to tap a governance testing framework and toolkit to demonstrate their "objective and verifiable" use of artificial intelligence (AI). The move is part of the government's efforts to drive transparency in AI deployments through technical and process checks. Dubbed A.I. Verify, the new toolkit was developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), which administers the country's Personal Data Protection Act. The government agencies underscored the need for consumers to know that AI systems were "fair, explainable, and safe" as more products and services were embedded with AI to deliver a more personalised user experience or to make decisions without human intervention. Consumers also needed to be assured that organisations deploying such offerings were accountable and transparent.
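As a rough illustration of what a "technical check" can look like in practice (a hypothetical sketch, not the actual A.I. Verify toolkit or its API), the snippet below probes an already-trained model with permutation importance, a common explainability test that reports how much each input feature drives the model's predictions.

```python
# Hypothetical explainability check on a trained model (not A.I. Verify itself).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in data and model; a real check would use the deployed model and its data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```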


Ethics in Robotics and Artificial Intelligence

#artificialintelligence

As robots become increasingly intelligent and autonomous, from self-driving cars to assistive robots for vulnerable populations, important ethical questions inevitably emerge wherever and whenever such robots interact with humans and thereby affect human well-being. Questions that must be answered include whether such robots should be deployed in human societies in largely unconstrained environments, and what provisions are needed in robotic control systems to ensure that autonomous machines do not cause harm to humans, or at least minimize harm when it cannot be avoided. The goal of this specialty is to provide the first interdisciplinary forum for philosophers, psychologists, legal experts, AI researchers and roboticists to disseminate work specifically targeting the ethical aspects of autonomous intelligent robots. Note that the conjunction of "AI and robotics" here indicates the journal's intended focus is on the ethics of intelligent autonomous robots, not the ethics of AI in general or the ethics of non-intelligent, non-autonomous machines. Examples of questions that we seek to address in this journal are:
-- computational architectures for moral machines
-- algorithms for moral reasoning, planning, and decision-making
-- formal representations of moral principles in robots
-- computational frameworks for robot ethics
-- human perceptions and the social impact of moral machines
-- legal aspects of developing and disseminating moral machines
-- algorithms for learning and applying moral principles
-- implications of robotic embodiment/physical presence in social space
-- variance of ethical challenges across different contexts of human-robot interaction


Why are we failing at the ethics of AI? A critical review

#artificialintelligence

Anja Kaspersen and Wendell Wallach are senior fellows at the Carnegie Council for Ethics in International Affairs. In November 2021, they published an article that changed the AI ethics conversation: Why Are We Failing at the Ethics of AI? Six months later, the questions the article raised are no closer to resolution. The article was a no-punches-pulled review of the state of AI ethics, with which I am in almost complete agreement. If we want to advance the AI conversation, this is still a good place to start. I've quoted a portion of their article below, with my comments interspersed: While it is clear that AI systems offer opportunities across various areas of life, what amounts to a responsible perspective on their ethics and governance is yet to be realized.


Apocalypse now? What quantum computing can learn from AI

#artificialintelligence

A few years ago, many people imagined a world run by robots. The promises and challenges associated with artificial intelligence (AI) were widely discussed as the technology moved out of the labs and into the mainstream. Many of these predictions seemed contradictory: robots were predicted to steal our jobs, but also to create millions of new ones. As more applications were rolled out, AI hit the headlines for all the right (and wrong) reasons, promising everything from revolutionizing the healthcare sector to making light work of the weight of data now created in our digitized world.


AI Ethics in Action: Making the Black Box Transparent - DATAVERSITY

#artificialintelligence

In my third article about the ethics of artificial intelligence (AI), I look at operationalizing AI ethics. Human intelligence remains a key factor in keeping a watchful eye on potential biases. Amazon caused a stir in late 2018 with media reports that it had abandoned an AI-powered recruitment tool because it was biased against women. Conceived as in-house software that could sift through hundreds of CVs at lightning speed and accurately identify the best candidates for any open position, the application had acquired one bad habit: it had come to favor men over women for software developer jobs and other technical roles. It had learned from past data that more men had applied for and held these positions, and it misread male dominance in tech as a reflection of superiority rather than of social imbalance.
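The mechanism is easy to reproduce on synthetic data. The sketch below is a hypothetical illustration, not Amazon's actual system or data: a simple classifier is trained on historical hiring decisions that favored men, and it ends up assigning a negative weight to a feature that merely correlates with being female, such as membership in a women's organization.

```python
# Hypothetical illustration of how historical bias leaks into a learned model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)                     # true qualification signal
is_female = rng.integers(0, 2, size=n)         # 0 = male, 1 = female
womens_group = (is_female == 1) & (rng.random(n) < 0.7)  # correlates with gender

# Historical hiring label: past decisions favored men regardless of skill.
hired = (skill + 1.0 * (1 - is_female) + rng.normal(scale=0.5, size=n)) > 0.5

# The model never sees gender directly, only skill and the correlated feature.
X = np.column_stack([skill, womens_group.astype(float)])
model = LogisticRegression().fit(X, hired)

print("weight on skill:        ", model.coef_[0][0])
print("weight on women's group:", model.coef_[0][1])  # typically negative
```

Because the label encodes past discrimination, the classifier treats the gender-correlated feature as a liability, which is the same pattern the reporting described.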


The Development of AI: Balancing Convenience and Ethics

#artificialintelligence

Technology has improved our lives in countless ways. Today, thanks to automation, we have more time than ever (even if it doesn't feel that way!) to pursue activities we enjoy. Throughout history, technology has made essential work easier, freeing up more and more time for people to create, socialize, and relax. Artificial intelligence (AI) has played a pivotal role in pushing automation forward in recent years. As the technology has advanced, it has made its way into nearly every industry, from marketing to healthcare.


Establishing AI governance in a business

#artificialintelligence

Getting data to power AI models is easy. Using that data responsibly is a lot harder. That's why enterprises need to implement a framework for AI governance. "With great data comes great responsibility," said Monark Vyas, managing director of applied intelligence strategy at Accenture, alluding to the adage popularized by the comic book hero Spider-Man. Speaking during a panel at the AI Summit Silicon Valley conference, Vyas noted just how easy it is for companies to mishandle data.