Racist artificial intelligence? Maybe not, if computers explain their 'thinking'

#artificialintelligence

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their "thinking." "Computers are going to become increasingly important parts of our lives, if they aren't already, and the automation is just going to improve over time, so it's increasingly important to know why these complicated systems are making the decisions that they are," Sameer Singh, assistant professor of computer science at the University of California, Irvine, told CTV's Your Morning on Tuesday. Singh explained that, in almost every application of machine learning and AI, there are cases where the computers do something completely unexpected. "Sometimes it's a good thing, it's doing something much smarter than we realize," he said. Other times it is not, as was the case with Microsoft's AI chatbot, Tay, which turned racist in less than a day.


US Air Force funds Explainable-AI for UAV tech

#artificialintelligence

Z Advanced Computing, Inc. (ZAC) of Potomac, MD, announced on August 27 that it has been funded by the US Air Force to use ZAC's detailed 3D image recognition technology, based on Explainable-AI, for aerial image and object recognition on drones (unmanned aerial vehicles, or UAVs). ZAC is the first to demonstrate Explainable-AI in which various attributes and details of 3D (three-dimensional) objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "For complex tasks, such as drone vision, you need ZAC's superior technology to handle detailed 3D image recognition." "You cannot do this with the other techniques, such as Deep Convolutional Neural Networks, even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," continued Dr. Bijan Tadayon, CEO of ZAC.


AI 101 - Separating Science Fact From Science Fiction

@machinelearnbot

Movies like Blade Runner and Her have popularised the idea of fully conscious computers, and with AI (Artificial Intelligence) technology like Apple's Siri or Amazon's Alexa increasingly present in our lives, it'd be easy to believe that what you see on the silver screen is just around the corner. Whilst I enjoy a sci-fi epic as much as the next person, in my dual role as Professor of Computer Science at the University of San Francisco and Chief Scientist at data integration software provider SnapLogic, I investigate the practical applications of AI and am tasked with explaining and teaching the realities of what can be achieved. In other words, I separate the fact from the fiction, which is what I aim to do today. Today's AI is not self-aware or able to generate original thoughts. What many people call AI is actually a subfield called machine learning (ML).


'Explainable Artificial Intelligence': Cracking open the black box of AI

#artificialintelligence

At a demonstration of Amazon Web Services' new artificial intelligence image recognition tool last week, the deep learning analysis calculated with near certainty that a photo of speaker Glenn Gore depicted a potted plant. "It is very clever, it can do some amazing things but it needs a lot of hand holding still. AI is almost like a toddler. They can do some pretty cool things, sometimes they can cause a fair bit of trouble," said Gore, AWS' chief architect, in his day-two keynote at the company's summit in Sydney. Where the toddler analogy falls short, however, is that a parent can make a reasonable guess as to, say, what led to their child drawing all over the walls, and ask them why.
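For readers curious what "near certainty" looks like in practice, here is a minimal sketch of calling an image-labelling API of the kind described above. The article does not name the specific service, so the choice of AWS Rekognition via boto3, the file name, and the confidence threshold are all illustrative assumptions.

```python
# Minimal sketch: label detection on a single image, assuming AWS Rekognition
# via boto3. The article does not name the exact service, so this is illustrative.
import boto3


def detect_labels(image_path: str, min_confidence: float = 50.0):
    """Return (label, confidence) pairs for one image, highest confidence first."""
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        image_bytes = f.read()

    response = client.detect_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    labels = [(label["Name"], label["Confidence"]) for label in response["Labels"]]
    return sorted(labels, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # A misclassification like the one in the demo would surface here as, say,
    # ("Potted Plant", 97.3) at the top of the list, with no explanation of why.
    for name, confidence in detect_labels("speaker_photo.jpg"):
        print(f"{name}: {confidence:.1f}%")
```

The point of the excerpt stands either way: the response contains labels and confidence scores, but nothing that explains which features drove the prediction, which is exactly the gap Explainable AI aims to close.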

