U.S. Air Force invests in Explainable-AI for unmanned aircraft

#artificialintelligence

Software start-up Z Advanced Computing, Inc. (ZAC) has received funding from the U.S. Air Force to incorporate the company's 3D image recognition technology into unmanned aerial vehicles (UAVs) and drones for aerial image and object recognition. ZAC's in-house image recognition software is based on Explainable-AI (XAI), whose computer-generated image results can be understood by human experts. ZAC – based in Potomac, Maryland – is the first to demonstrate XAI in which various attributes and details of 3D objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," says Dr. Saied Tadayon, CTO of ZAC. "You cannot do this with the other techniques, such as deep Convolutional Neural Networks (CNNs), even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," adds Dr. Bijan Tadayon, CEO of ZAC.


Explainable-AI (Artificial Intelligence) Image Recognition Startup Pilots Smart Appliance with Bosch

#artificialintelligence

Z Advanced Computing, Inc. (ZAC), an AI (Artificial Intelligence) software startup, is developing its Smart Home product line through a paid pilot for smart appliances for BSH Home Appliances, the largest manufacturer of home appliances in Europe and one of the largest in the world. BSH Home Appliances Corporation is a subsidiary of the Bosch Group, originally a joint venture between Robert Bosch GmbH and Siemens AG. ZAC's Smart Home product line uses ZAC's Explainable-AI Image Recognition. ZAC is the first to apply Explainable-AI in Machine Learning. "You cannot do this with other techniques, such as Deep Convolutional Neural Networks," said Dr. Saied Tadayon, CTO of ZAC.


The quest for artificial intelligence that can outsmart hackers

#artificialintelligence

In the future, will artificial intelligence be so sophisticated that it will be able to tell when someone is trying to deceive it? A Carnegie Mellon University professor and his team are working on technology that could move this idea from the realm of science fiction to reality. Their work -- rooted in game theory and machine learning -- is part of a larger push for more advanced AI. As AI becomes more commonplace in the technology we use every day, detractors and supporters are becoming more vocal about its potential risks and benefits. For some, smarter AI sets a dangerous precedent for a future too reliant on machines to make decisions about everything from medical diagnoses to the operation of self-driving cars.


Following Drones Controversy, Google Publishes Ethical AI Principles

#artificialintelligence

MOUNTAIN VIEW, CA – Following significant internal backlash at Google against the firm's participation in a U.S. military drone surveillance program, CEO Sundar Pichai has published a list of seven key ethical principles to guide the company's use of AI. Back in April, over 3,000 Google employees – including senior figures – signed an open letter in protest of the search giant's participation in the Pentagon-run Project Maven. Project Maven saw Google machine vision technology being leveraged to 'improve' the targeting of U.S. drone strikes, in what the open letter referred to as a 'biased and weaponized' use of AI. "This plan will irreparably damage Google's brand and its ability to compete for talent," the letter said. "Google is already struggling to keep the public's trust." Less than two months later, Pichai has responded publicly by setting out core ethical principles for the company's applications of AI and machine learning going forward.