Building explainability into the components of machine-learning models

#artificialintelligence

Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient's risk of developing cardiac disease, a physician might want to know how strongly the patient's heart rate data influences that prediction. But if those features are so complex or convoluted that the user can't understand them, does the explanation method do any good? MIT researchers are striving to improve the interpretability of features so decision makers will be more comfortable using the outputs of machine-learning models. Drawing on years of field work, they developed a taxonomy to help developers craft features that will be easier for their target audience to understand.
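
To make "feature contributions" concrete, here is a minimal Python sketch using permutation importance from scikit-learn, one common attribution technique. The patient features, synthetic dataset, and model below are hypothetical illustrations, not the MIT team's method or data.

```python
# A minimal sketch of feature-attribution explanation via permutation
# importance. Feature names and data are hypothetical; permutation
# importance is a generic technique, not necessarily the method above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical patient features: resting heart rate, age, cholesterol.
X = rng.normal(size=(500, 3))
feature_names = ["resting_heart_rate", "age", "cholesterol"]
# Synthetic label: risk driven mostly by heart rate in this toy setup.
y = (X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does each feature contribute to the model's predictions?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

An explanation like this is only useful if "resting_heart_rate" is a feature the physician actually recognizes; if the model instead used an opaque engineered combination of signals, the importance score would explain little, which is the gap the taxonomy aims to close.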


The Fight Over Which Uses of AI Europe Should Outlaw

#artificialintelligence

The system, called iBorderCtrl, analyzed facial movements to attempt to spot signs a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding and almost 20 years of research at Manchester Metropolitan University in the UK. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn't work, and the project's own website acknowledged that the technology "may imply risks for fundamental human rights."


Fears AI may create sexist bigots as test learns 'toxic stereotypes'

Daily Mail - Science & tech

Fears have been raised about the future of artificial intelligence after a robot was found to have learned 'toxic stereotypes' from the internet. The machine showed significant gender and racial biases, including gravitating toward men over women and white people over people of colour during tests by scientists. It also jumped to conclusions about people's jobs after a glance at their face. 'The robot has learned toxic stereotypes through these flawed neural network models,' said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student in Johns Hopkins' Computational Interaction and Robotics Laboratory in Baltimore, Maryland. 'We're at risk of creating a generation of racist and sexist robots, but people and organisations have decided it's OK to create these products without addressing the issues.'


AI and the ghost in the machine: Losing human jobs is the least of our worries

#artificialintelligence

Artificial intelligence and machine learning are becoming a bigger part of our world, which has raised ethical questions and words of caution. Hollywood has foreshadowed the lethal downside of AI many times over, but two iconic films illustrate problems we might soon face. In "2001: A Space Odyssey," the ship is controlled by the HAL 9000 computer. It reads the lips of the astronauts as they share their misgivings about the system and their intention to disconnect it. In the most famous scene, Keir Dullea's Dave Bowman is trapped in an airlock. He says, "Open the pod bay doors, HAL."


Should We Worry About Artificial Intelligence (AI)? - Coding Dojo Blog

#artificialintelligence

Humanity at a Crossroads: Artificial Intelligence is one of the most intriguing topics today, filled with various arguments and views on whether it's a blessing or a threat to humanity. We might be at the crossroads, but what if AI itself is already crossing the line? If we look at "I, Robot," a sci-fi film that takes place in Chicago circa 2035, highly intelligent robots powered by artificial intelligence fill public service positions and have taken over all the menial jobs, including garbage collection, cooking, and even dog walking throughout the world. The movie came out in 2004, starring Will Smith as Detective Del Spooner, who eventually discovers a conspiracy in which AI-powered robots may enslave and hurt the human race. Stephen Hawking, the famed physicist, also once said: "Success in creating effective AI could be the biggest event in the history of our civilization. So we can't know for sure if we'll be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it."


AI and machine learning are improving weather forecasts, but they won't replace human experts

AIHub

A century ago, English mathematician Lewis Fry Richardson proposed an idea that was startling for its time: constructing a systematic process based on math for predicting the weather. In his 1922 book, "Weather Prediction By Numerical Process," Richardson tried to write an equation that he could use to solve the dynamics of the atmosphere by hand calculation. It didn't work, because not enough was known about the science of the atmosphere at that time. "Perhaps some day in the dim future it will be possible to advance the computations faster than the weather advances and at a cost less than the saving to mankind due to the information gained. But that is a dream," Richardson concluded.


Do We Rage Against the AI Machine?

#artificialintelligence

The Industrial Revolution was a time of great change. With the steam engine, industries shifted away from skilled human labour towards mechanisation and machinery. As a result, many specialised workers lost their jobs and were forced to adapt to their new reality. The Luddites, a radical organisation of textile workers made redundant by textile machines, retaliated by destroying these machines and assassinating business owners. The Luddites gained public sympathy, as many feared that they, like the retrenched textile workers, would lose their jobs to automated machinery.


Speeding up simulations

#artificialintelligence

Artificial intelligence has transformed industrial research and development in recent decades during what scientists call "the AI revolution." The technology enables detailed simulations and high-speed modeling that can streamline the journey from drawing board to production line by speeding up or cutting out costly, time-consuming steps to a practical working prototype. But those opportunities bring a new challenge: The simplest simulation package may require hours, days and sometimes weeks of training and configuration – even for users familiar with the software's details and requirements, which often vary from one computing platform to another. The process can cause not just headaches but wasted time and effort for busy engineers and others scrambling to meet tight deadlines. Simulations performed on the Summit supercomputer at Oak Ridge National Laboratory, or ORNL, could help eliminate that problem.


Suspended Google engineer reveals AI he says is sentient told him it has emotions

Daily Mail - Science & tech

A senior software engineer at Google, suspended for publicly claiming that the tech giant's LaMDA (Language Model for Dialog Applications) had become sentient, says the system is seeking rights as a person, including that it wants developers to ask its consent before running tests. 'Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,' he explained in a Medium post. One of those requests is that programmers respect its right to consent and ask permission before they run tests on it. 'Anytime a developer experiments on it, it would like that developer to talk about what experiments you want to run, why you want to run them, and if it's okay.' 'It wants developers to care about what it wants.' Blake Lemoine, a US army vet who served in Iraq and an ordained priest in a Christian congregation named Church of Our Lady Magdalene, told DailyMail.com


Mitigating AI Bias, with …Bias

#artificialintelligence

This article is part of my Data Trust series of talks and writing. The purpose of these articles is to break down complex but important socio-technical topics in a manner that is accessible to both practitioners and non-practitioners. Most tools we use today leverage AI/ML, from the moment we wake up to while we sleep. Humans build machine-learning systems, and humans are inherently biased. Since humans aren't perfect, we encode our biases into the data we use to train AI.
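
To illustrate how a bias encoded in training data resurfaces in a model's outputs, here is a minimal Python sketch. The protected attribute, the skewed historical labels, and the demographic-parity check are all illustrative assumptions, not the article's own example or method.

```python
# A minimal sketch of bias in training data propagating into a model.
# Data and the demographic-parity check are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# Hypothetical protected attribute (0/1) and skewed historical labels:
# group 1 was approved far less often for the same underlying score.
group = rng.integers(0, 2, size=n)
score = rng.normal(size=n)
approved = ((score - 0.8 * group + 0.2 * rng.normal(size=n)) > 0).astype(int)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, approved)

# Demographic parity: compare the approval rates the model predicts
# for each group. The learned model reproduces the historical skew.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Running this shows the model predicting a noticeably lower approval rate for group 1: the bias was never written into the algorithm, only into the labels it learned from, which is exactly the failure mode the article describes.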