U.S. Air Force invests in Explainable-AI for unmanned aircraft

#artificialintelligence

Software startup Z Advanced Computing, Inc. (ZAC) has received funding from the U.S. Air Force to incorporate the company's 3D image recognition technology into unmanned aerial vehicles (UAVs) and drones for aerial image and object recognition. ZAC's in-house image recognition software is based on Explainable AI (XAI), in which computer-generated image results can be understood by human experts. ZAC, based in Potomac, Maryland, says it is the first to demonstrate XAI in which the various attributes and details of 3D objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," says Dr. Saied Tadayon, CTO of ZAC. "You cannot do this with the other techniques, such as deep Convolutional Neural Networks (CNNs), even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," adds Dr. Bijan Tadayon, CEO of ZAC.


This software startup just made an artificial intelligence breakthrough using General-AI for 3D object recognition from any direction

#artificialintelligence

The recognition of objects is one of the main goals of computer vision research. Applications include automation on the assembly line; inspection of integrated circuit chips for defects; security via face and fingerprint recognition; medical diagnosis, such as detection of abnormal cells that may indicate cancer; remote sensing for automated recognition of possibly hostile terrain to generate maps; and aids for the visually impaired, such as mechanical guide dogs. However, 3D object recognition remains one of the most challenging problems facing computer vision systems. One Maryland-based startup may finally have an answer. The startup, Z Advanced Computing, announced today that it has made a technical and scientific breakthrough in Machine Learning and Artificial Intelligence (AI): the various attributes and details of 3D (three-dimensional) objects can be recognized from any view or angle, using its novel General-AI techniques.


What do we need to build explainable AI systems for the medical domain?

arXiv.org Machine Learning

Artificial intelligence (AI) generally, and machine learning (ML) specifically, have demonstrated impressive practical success in many application domains, e.g. autonomous driving, speech recognition, and recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods, have even exceeded human performance in visual tasks, particularly in playing Atari games or mastering the game of Go. Even in the medical domain there are remarkable results. The central problem with such models is that they are regarded as black boxes: even if we understand the underlying mathematical principles, they lack an explicit declarative knowledge representation and therefore have difficulty generating the underlying explanatory structures. This calls for systems that make decisions transparent, understandable, and explainable. A major motivation for our approach is rising legal and privacy concerns: the new European General Data Protection Regulation, entering into force on May 25, 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time; however, it must be possible to make the results re-traceable on demand. In this paper we outline some of our research topics in the context of the relatively new area of explainable AI, with a focus on applications in medicine, a very particular domain because medical professionals work mostly with distributed, heterogeneous, and complex sources of data. In this paper we concentrate on three sources: images, *omics data, and text. We argue that research in explainable AI would generally help facilitate the implementation of AI/ML in the medical domain, and specifically help facilitate transparency and trust.
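The kind of post-hoc, re-traceable explanation the abstract calls for can be illustrated with a simple, generic technique such as permutation feature importance: shuffle one input feature at a time and measure how much a model's accuracy drops. This is a minimal sketch on toy data, not the paper's method; the "black-box" model and the data here are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def model(X):
    """Stand-in 'black box': thresholds a fixed linear score."""
    return (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def permutation_importance(model, X, y, n_repeats=10):
    """Mean accuracy drop when each feature column is shuffled."""
    base = np.mean(model(X) == y)
    drops = []
    for j in range(X.shape[1]):
        accs = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j
            accs.append(np.mean(model(Xp) == y))
        drops.append(base - np.mean(accs))
    return np.array(drops)

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates; feature 2 contributes ~0
```

The output is a per-feature score a domain expert can inspect: a large drop means the model relied on that feature, which is one way to make an otherwise opaque decision partially re-traceable.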


SETHspeak on AI Episode 2: "TRUST": Where's the Elevator Music?

#artificialintelligence

Episode 2 looks at why #TRUST is important in #technologyadoption, why it is normal for humans to be wary of new technology, and what is being done to develop trust in #artificialintelligence. #explainableai Keep the feedback coming. If you want to delve deeper into explainable AI, check out my LinkedIn article: https://lnkd.in/eF76sAa Want a better understanding of the difference between #AI, #machinelearning, #deeplearning, and #datasciences? https://lnkd.in/egeinvs


New AI tool claims to 'change the landscape of online ads' by connecting shoppers to goods using images - TechRepublic

#artificialintelligence

Imagine that you are searching for a brown leather sandal online. You know what it should look like but don't know how to describe it. You search "brown sandal" on Google, which serves up many results, but none of them is the one you want.
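Image-based product search of the kind the article describes is commonly built on embedding similarity: each image is mapped to a vector, and catalog items are ranked by how close their vectors are to the query's. This is a minimal sketch under that assumption; the catalog names are invented, and random vectors stand in for a real vision model's embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical catalog: item name -> embedding vector.
catalog = {name: rng.normal(size=64)
           for name in ["brown_leather_sandal", "black_boot", "white_sneaker"]}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, catalog, top_k=2):
    """Return catalog item names ranked by similarity to the query."""
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# Simulate a shopper's photo of a similar sandal: its embedding is
# the sandal's vector plus a little noise.
query = catalog["brown_leather_sandal"] + 0.1 * rng.normal(size=64)
print(search(query, catalog))
```

Because ranking happens in embedding space, the shopper never has to put "brown leather sandal" into words; the photo itself is the query.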