If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode, Jess and Dylan chat with Su Lin Blodgett about defining bias. How do we define bias? Is all bias the same? Is it possible to eliminate bias completely in our AI systems?
The Army of the future will involve humans and autonomous machines working together to accomplish the mission. According to Army researchers, this vision will only succeed if artificial intelligence is perceived to be ethical. Researchers based at the U.S. Army Combat Capabilities Development Command (now known as DEVCOM) Army Research Laboratory, Northeastern University, and the University of Southern California expanded existing research to cover moral dilemmas and decision making that have not been pursued elsewhere. This research, featured in Frontiers in Robotics and AI, tackles the fundamental challenge of developing ethical artificial intelligence, which, according to the researchers, is still largely understudied. "Autonomous machines, such as automated vehicles and robots, are poised to become pervasive in the Army," said DEVCOM ARL researcher Dr. Celso de Melo, who is located at the laboratory's ARL West regional site in Playa Vista, California.
Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy; others relate to business metrics. But each requires careful consideration, as each has consequences for the system's final outcome. To be clear, not everything has to translate into a tradeoff. There are often smart reformulations of a problem that let you meet the needs of your users and customers while also satisfying internal business considerations.
Conversations around AI and ethics may have started as a preoccupation of activists and academics, but now -- prompted by the increasing frequency of headlines about biased algorithms, black-box models, and privacy violations -- boards, C-suites, and data and AI leaders have realized it's an issue for which they need a strategic approach. A solution is hiding in plain sight. Other industries have already found ways to deal with complex ethical quandaries quickly, effectively, and in a way that can be easily replicated. Instead of trying to reinvent this process, companies need to adopt and customize one of health care's greatest inventions: the Institutional Review Board, or IRB. Most discussions of AI ethics follow the same flawed formula, consisting of three moves, each of which is problematic from the perspective of an organization that wants to mitigate the ethical risks associated with AI. Here's how these conversations tend to go. First, companies move to identify AI ethics with "fairness" in AI, or sometimes more generally, "fairness, equity, and inclusion."
Professor Shannon Vallor, an expert in the challenging relationship between ethics and technology, reminds us that artificial intelligence is "human all the way down" - and therefore reflects the positives and negatives of human nature. Prof Vallor, Baillie Gifford Chair in the Ethics of Data and AI at the Edinburgh Futures Institute, insists self-aware machines are not about to take over the world. She says: "We have gone through a period where people like Stephen Hawking and Elon Musk have perhaps unwittingly misled the public about machines becoming self-aware or hyper-intelligent and enslaving humanity - and from a scientific perspective, that's just a complete fantasy at this point. There is nothing mysterious or magical about AI - it's something that is transforming our world but completely reflective of our own human strengths and weaknesses." Professor Vallor is joined on the podcast by Nick Thomas and Kyle McEnery of Baillie Gifford. Nick Thomas highlights how "access to data is going to be a key competitive advantage for business in the future," while Kyle McEnery describes his work on harnessing data and AI to make better decisions about where Baillie Gifford invests its clients' money - and the potential for greater targeting of ethical investment.
Father Paolo Benanti is an expert in ethics, digital ethics, and technology. He is a Franciscan monk and Professor of Moral Theology, Bioethics, and Neuroethics at the Pontifical Gregorian University in Rome. I discuss with Father Benanti the controversial aspects of AI in healthcare and how the digital transformation changes us – human beings. Father Benanti, two years ago, there was a morally ambiguous case in the USA – a doctor used a virtual presence system to tell a patient he would die. With the broad adoption of telemedicine and medical workforce shortages, this practice may become an everyday reality. From the beginning of human history, we have understood medicine as a scientific discipline. There was a time when a priest and a doctor were the same person. We've always picked someone special from the human community to hold the position of a doctor.
As more companies adopt AI, the risks posed by AI are becoming clearer to business leaders. That is driving many companies to hire AI ethicists to help guide them through an ethical minefield. But just as data scientists proved to be as elusive as unicorns, qualified AI ethicists are also in very short supply, says Beena Ammanath, executive director of Deloitte's AI Institute. "We've seen different models evolving. It's still very nascent," Ammanath tells Datanami.
Global executives fear the increased utilization of AI, as the industry is still uncertain about the intelligent technology's full capabilities and potential. With the accelerated pace of digital marketing, enterprises have already begun to make decisions about adopting AI-driven solutions. A recent US policy report from the National Security Commission on Artificial Intelligence states that Americans have not yet understood the exact consequences of AI adoption for their national security, welfare, and economy. The 756-page report underlines the fact that while AI should benefit the country, it should also defend it against AI's destructive capabilities. True, there is undiscovered knowledge that might reveal overwhelming possibilities. Experts also reflect on OpenAI's CLIP and Facebook's new AI model SEER.
Joy Buolamwini is a researcher at the MIT Media Lab who pioneered research into bias that's built into artificial intelligence and facial recognition. And the way she came to this work is almost a little too on the nose. As a graduate student at MIT, she created a mirror that would project aspirational images onto her face, like a lion or tennis star Serena Williams. But the facial-recognition software she installed wouldn't work on her Black face, until she literally put on a white mask. Buolamwini is featured in a documentary called "Coded Bias," airing tonight on PBS.
My first loves were international relations, economics, and politics. I grew up around cigar-smoking patriarchs who talked at length about the US, Russia, China, and all the innovations, wars, conflicts, and progress in between. I marveled at our global world, and I was fascinated by where influence originated and what it could do. I was frightened by the scale of the world's superpowers and the myriad ways they affect the lives of innocent people all over the world. Nothing much has changed since those early days.