Government Deep Tech 2022 Top Funding Focus Explainable AI, Photonics, Quantum

#artificialintelligence

DARPA, In-Q-Tel, and the US National Laboratories (for example, Argonne and Oak Ridge) are renowned government funding agencies for deep tech at the frontier: bets on the near impossible that yield globally transformative solutions. The Internet is a prime example: in 2022, more than 70% of the world's 7.8 billion people are online, daily mobile usage is approaching 7 hours, and some $500 trillion in global wealth is powered by the Internet. The early bets led by government funding agencies converge with the investments of the largest corporations. One example is from 2015, when I was invited to help the top 100 CEOs, representing nearly $100 trillion in assets under management, look ten years into the future for their investments. The resulting working groups and private summits led the member companies to invest in all the areas identified: quantum computing, blockchain, cybersecurity, big data, privacy and data, AI/ML, the future of fintech, financial inclusion, ...


DARPA's explainable AI (XAI) program: A retrospective

#artificialintelligence

Dramatic success in machine learning has created an explosion of new AI capabilities. Continued advances promise to produce autonomous systems that perceive, learn, decide, and act on their own. These systems offer tremendous benefits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to human users. This issue is especially important for the United States Department of Defense (DoD), which faces challenges that require the development of more intelligent, autonomous, and reliable systems. XAI will be essential for users to understand, appropriately trust, and effectively manage this emerging generation of artificially intelligent partners.
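
The retrospective surveys explanation techniques rather than prescribing a single one; as a hedged illustration of the kind of post-hoc explanation the program studied, the sketch below computes permutation feature importance for a toy black-box classifier. The model, data, and feature count are invented stand-ins, not anything from the DARPA program.

```python
# Illustrative sketch only: permutation feature importance, one of the
# simplest post-hoc explanation techniques in the XAI literature. It is
# NOT the DARPA XAI program's method; model and data are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": a fixed linear scorer standing in for any trained model.
weights = np.array([2.0, 0.0, -1.0])           # feature 1 is irrelevant
def black_box(X):
    return (X @ weights > 0).astype(int)

# Toy evaluation data, labeled by the model itself for simplicity.
X = rng.normal(size=(500, 3))
y = black_box(X)

def accuracy(model, X, y):
    return (model(X) == y).mean()

baseline = accuracy(black_box, X, y)

# Permute one feature at a time; the accuracy drop estimates how much the
# model's decisions depend on that feature.
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drop = baseline - accuracy(black_box, Xp, y)
    print(f"feature {j}: importance ~ {drop:.3f}")
```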


Explainable AI for B5G/6G: Technical Aspects, Use Cases, and Research Challenges

arXiv.org Artificial Intelligence

When 5G began its commercialisation journey around 2020, the discussion on the vision of 6G also surfaced. Researchers expect 6G to offer higher bandwidth, coverage, reliability, and energy efficiency, lower latency, and, more importantly, an integrated "human-centric" network system powered by artificial intelligence (AI). Such a 6G network will lead to an excessive number of automated decisions made every second. These decisions can range widely, from network resource allocation to collision avoidance for self-driving cars. However, the risk of losing control over decision-making may increase due to high-speed data-intensive AI decision-making beyond designers and users' comprehension. The promising explainable AI (XAI) methods can mitigate such risks by enhancing the transparency of the black box AI decision-making process. This survey paper highlights the need for XAI towards the upcoming 6G age in every aspect, including 6G technologies (e.g., intelligent radio, zero-touch network management) and 6G use cases (e.g., industry 5.0). Moreover, we summarised the lessons learned from the recent attempts and outlined important research challenges in applying XAI for building 6G systems. This research aligns with goals 9, 11, 16, and 17 of the United Nations Sustainable Development Goals (UN-SDG), promoting innovation and building infrastructure, sustainable and inclusive human settlement, advancing justice and strong institutions, and fostering partnership at the global level.
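
The survey argues for transparency in high-speed network decisions without fixing a method; below is a minimal LIME-style sketch in which a hypothetical black-box admission-control policy (invented here as a stand-in for a 6G resource-allocation model, with made-up inputs load, latency_ms, and snr_db) is explained locally by fitting a proximity-weighted linear surrogate to perturbed inputs.

```python
# Hedged illustration, not from the surveyed paper: a LIME-style local
# surrogate explaining one decision of a hypothetical black-box
# resource-allocation policy. All names and thresholds are invented.
import numpy as np

rng = np.random.default_rng(1)

def allocate(x):
    """Toy black-box policy: admit a slice request (1) or reject it (0)."""
    load, latency_ms, snr_db = x[..., 0], x[..., 1], x[..., 2]
    score = (0.8 - load) + 0.02 * (snr_db - 10) - 0.01 * (latency_ms - 20)
    return (score > 0).astype(float)

x0 = np.array([0.7, 30.0, 12.0])               # the single decision to explain

# Sample perturbations around x0 and weight them by proximity to x0.
scales = np.array([0.1, 5.0, 2.0])
Z = x0 + rng.normal(scale=scales, size=(2000, 3))
yz = allocate(Z)
w = np.exp(-np.sum(((Z - x0) / scales) ** 2, axis=1))

# Weighted least-squares fit of an interpretable linear surrogate.
A = np.hstack([Z, np.ones((len(Z), 1))])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * yz, rcond=None)

for name, c in zip(["load", "latency_ms", "snr_db"], coef[:3]):
    print(f"{name}: local effect on admit decision ~ {c:+.4f}")
```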


Cognitive Explainable Artificial Intelligence (AI) breakthroughs in Machine Learning (ML) for US Air Force: 3D Image Recognition using few training samples on CPU (without GPU)

#artificialintelligence

Z Advanced Computing, Inc. (ZAC), the pioneer Cognitive Explainable-AI (Cognitive XAI) software startup, has made breakthroughs in AI and Machine Learning (ML): ZAC has achieved 3D image recognition using only a few training samples and only an average laptop with a low-power CPU, for both training and recognition, for the US Air Force (USAF). This is in sharp contrast to other algorithms in the industry, which require thousands to billions of samples and are trained on large GPU servers. "ZAC requires much less computing power and much less electrical power to run, which is great for mobile and edge computing, as well as the environment, with a smaller carbon footprint," emphasized Dr. Saied Tadayon, CTO of ZAC. ZAC is the first to demonstrate its novel Cognition-based Explainable-AI (XAI) algorithms, in which various attributes and details of 3D (three-dimensional) objects are recognized from any view or angle. "You cannot do this task with the other algorithms, such as Deep Convolutional Neural Networks (CNNs) or ResNets, even with an extremely large number of training samples, on GPU servers. That's basically hitting the limitations of CNNs or Neural Nets, which all other companies are using now," said Dr. Bijan Tadayon, CEO of ZAC.
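
ZAC's Cognitive XAI algorithm is proprietary and the announcement gives no technical detail; purely to make the "few training samples, CPU only" regime concrete, here is a generic few-shot sketch (a nearest-class-mean classifier over toy feature vectors). It is explicitly not ZAC's method, and every class name and number in it is invented.

```python
# Hedged sketch of generic few-shot recognition (nearest class mean), shown
# only to illustrate the "few training samples, CPU only" idea; it is NOT
# ZAC's proprietary Cognitive XAI algorithm, which is not public.
import numpy as np

rng = np.random.default_rng(2)

def make_class(center, n):
    """Toy feature vectors (stand-ins for embeddings of 3D object views)."""
    return center + 0.3 * rng.normal(size=(n, center.size))

centers = {c: rng.normal(size=16) for c in ["tank", "truck", "drone"]}

# Only 5 "training" samples per class -- the few-shot regime.
prototypes = {c: make_class(mu, 5).mean(axis=0) for c, mu in centers.items()}

def classify(x):
    # Assign to the nearest class prototype (Euclidean distance).
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# Evaluate on fresh samples drawn from the same toy classes.
correct = total = 0
for c, mu in centers.items():
    for x in make_class(mu, 50):
        correct += (classify(x) == c)
        total += 1
print(f"few-shot accuracy on toy data: {correct / total:.2%}")
```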


Explainable AI for Intelligence Augmentation in Multi-Domain Operations

arXiv.org Artificial Intelligence

Central to the concept of multi-domain operations (MDO) is the utilization of an intelligence, surveillance, and reconnaissance (ISR) network consisting of overlapping systems of remote and autonomous sensors, and human intelligence, distributed among multiple partners. Realising this concept requires advancement in both artificial intelligence (AI) for improved distributed data analytics and intelligence augmentation (IA) for improved human-machine cognition. The contribution of this paper is threefold: (1) we map the coalition situational understanding (CSU) concept to MDO ISR requirements, paying particular attention to the need for assured and explainable AI to allow robust human-machine decision-making where assets are distributed among multiple partners; (2) we present illustrative vignettes for AI and IA in MDO ISR, including human-machine teaming, dense urban terrain analysis, and enhanced asset interoperability; (3) we appraise the state-of-the-art in explainable AI in relation to the vignettes with a focus on human-machine collaboration to achieve more rapid and agile coalition decision-making. The union of these three elements is intended to show the potential value of a CSU approach in the context of MDO ISR, grounded in three distinct use cases, highlighting how the need for explainability in the multi-partner coalition setting is key. Multi-domain operations require the capacity, capability, and endurance to operate across multiple domains -- from dense urban terrain to space and cyberspace -- in contested environments against near-peer adversaries (U.S. Army 2018).


US Air Force funds Explainable-AI for UAV tech

#artificialintelligence

Z Advanced Computing, Inc. (ZAC) of Potomac, MD announced on August 27 that it is funded by the US Air Force to use ZAC's detailed 3D image recognition technology, based on Explainable-AI, in drones (unmanned aerial vehicles, or UAVs) for aerial image/object recognition. ZAC is the first to demonstrate Explainable-AI, where various attributes and details of 3D (three-dimensional) objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "For complex tasks, such as drone vision, you need ZAC's superior technology to handle detailed 3D image recognition." "You cannot do this with the other techniques, such as Deep Convolutional Neural Networks, even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," continued Dr. Bijan Tadayon, CEO of ZAC.


U.S. Air Force invests in Explainable-AI for unmanned aircraft

#artificialintelligence

Software startup Z Advanced Computing, Inc. (ZAC) has received funding from the U.S. Air Force to incorporate the company's 3D image recognition technology into unmanned aerial vehicles (UAVs) and drones for aerial image and object recognition. ZAC's in-house image recognition software is based on Explainable-AI (XAI), where computer-generated image results can be understood by human experts. ZAC – based in Potomac, Maryland – is the first to demonstrate XAI, where various attributes and details of 3D objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," says Dr. Saied Tadayon, CTO of ZAC. "You cannot do this with the other techniques, such as deep Convolutional Neural Networks (CNNs), even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," adds Dr. Bijan Tadayon, CEO of ZAC.


A 20-Year Community Roadmap for Artificial Intelligence Research in the US

arXiv.org Artificial Intelligence

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.


Understanding artificial intelligence ethics and safety

arXiv.org Artificial Intelligence

A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms. Innovations in AI are already leaving a mark on government by improving the provision of essential social goods and services from healthcare, education, and transportation to food supply, energy, and environmental management. These bounties are likely just the start. The prospect that progress in AI will help government to confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur. This guide, written for department and delivery leads in the UK public sector and adopted by the British Government in its publication, 'Using AI in the Public Sector,' identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. It stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. It also highlights the need for algorithmically supported outcomes to be interpretable by their users and made understandable to decision subjects in clear, non-technical, and accessible ways. Finally, it builds out a vision of human-centred and context-sensitive implementation that gives a central role to communication, evidence-based reasoning, situational awareness, and moral justifiability.