Engineering the Future of Artificial Intelligence

#artificialintelligence

To connect our people with the latest ideas, a vibrant firmwide network links Booz Allen with outside research leaders from across the global academic community. We actively collaborate with computer science labs and math departments at Harvard University, Syracuse University, the Montreal-based Mila institute, and other organizations. And our university-wide master collaboration agreement with the University of Maryland Baltimore County lets Booz Allen practitioners work on interdisciplinary projects with any professor from any department. Open access to academic environments enables our people to hone their expertise, often at the Ph.D. level, while continuing to build careers in industry. We support emerging researchers by providing them with mentoring and ongoing opportunities to explore transformational AI concepts.


American University: Using Statistics to Aid in the Fight Against Misinformation

#artificialintelligence

An American University math professor and his team created a statistical model that can detect misinformation in social media posts. The model also avoids the "black box" problem that arises in machine learning. With the use of algorithms and computer models, machine learning is increasingly playing a role in helping to stop the spread of misinformation, but a main challenge for scientists is the black box of unknowability: researchers don't understand how the machine arrives at the same decisions as its human trainers. Using a Twitter dataset of misinformation tweets about COVID-19, Zois Boukouvalas, assistant professor in AU's Department of Mathematics and Statistics in the College of Arts and Sciences, shows how statistical models can detect misinformation on social media during events like a pandemic or a natural disaster. In newly published research, Boukouvalas and his colleagues, including AU student Caitlin Moroney and computer science professor Nathalie Japkowicz, also show how the model's decisions align with those made by humans.
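
The article doesn't give the model's specification, but the general approach it describes, an interpretable statistical classifier for short social media posts, can be sketched in a few lines. The pipeline, features, and tweets below are hypothetical illustrations, not the authors' method:

```python
# Minimal sketch of a transparent statistical text classifier.
# This is NOT the AU team's model; it only illustrates pairing
# interpretable features with a linear model. Data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "5G towers spread the virus, share before it's deleted!",   # hypothetical
    "Health agency releases updated guidance on boosters.",     # hypothetical
    "Miracle cure suppressed by doctors, buy now!",              # hypothetical
    "New peer-reviewed study on vaccine efficacy published.",    # hypothetical
]
labels = [1, 0, 1, 0]  # 1 = misinformation, 0 = reliable (hypothetical)

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(tweets)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Unlike a black-box network, the learned weights can be read off
# directly: each coefficient says how strongly a term pushes the
# prediction toward the misinformation class.
ranked = sorted(
    zip(vectorizer.get_feature_names_out(), clf.coef_[0]),
    key=lambda tw: -abs(tw[1]),
)
for term, weight in ranked[:5]:
    print(f"{term!r}: {weight:+.3f}")
```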


HPRN: Holistic Prior-embedded Relation Network for Spectral Super-Resolution

arXiv.org Artificial Intelligence

Spectral super-resolution (SSR) refers to recovering a hyperspectral image (HSI) from its RGB counterpart. Because the SSR problem is one-to-many, a single RGB image can be reprojected to many HSIs. The key to tackling this ill-posed problem is to incorporate multi-source prior information, such as the natural RGB spatial context prior, a deep feature prior, or the inherent HSI statistical prior, so as to improve the confidence and fidelity of the reconstructed spectra. However, most current approaches consider only general and limited priors when designing their customized convolutional neural networks (CNNs), and therefore cannot effectively alleviate the degree of ill-posedness. To address these issues, we propose a novel holistic prior-embedded relation network (HPRN) for SSR. The core framework is assembled from several multi-residual relation blocks (MRBs) that fully facilitate the transmission and utilization of the low-frequency content prior of RGB signals. A semantic prior of the RGB input is introduced to identify category attributes, and a semantic-driven spatial relation module (SSRM) is put forward to aggregate features among clustered similar characteristics using a semantic-embedded relation matrix. Additionally, we develop a transformer-based channel relation module (TCRM), which breaks the habit of employing scalars as descriptors of channel-wise relations in previous deep feature priors and replaces them with vectors, together with Transformer-style feature interactions, making the representations more discriminative. Finally, to maintain the mathematical correlation and spectral consistency between hyperspectral bands, second-order prior constraints (SOPC) are incorporated into the loss function to guide the HSI reconstruction process.
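
The abstract doesn't spell out the SOPC formulation, so the following is only a plausible sketch of a second-order consistency term: it matches the band-to-band correlation structure of the predicted HSI to that of the ground truth, on top of an ordinary reconstruction loss. The shapes, the weighting, and the L1 choices are assumptions:

```python
# Hedged sketch of a second-order spectral-consistency loss of the
# kind the HPRN abstract describes (exact formulation not given).
import torch
import torch.nn.functional as F

def band_correlation(hsi: torch.Tensor) -> torch.Tensor:
    """Band-by-band correlation (Gram) matrix of an HSI tensor
    shaped (batch, bands, height, width)."""
    b, c, h, w = hsi.shape
    flat = hsi.reshape(b, c, h * w)
    flat = F.normalize(flat, dim=2)        # unit-norm each band
    return flat @ flat.transpose(1, 2)     # (batch, bands, bands)

def sopc_loss(pred, target, weight=0.1):
    recon = F.l1_loss(pred, target)        # first-order fidelity
    second_order = F.l1_loss(band_correlation(pred),
                             band_correlation(target))
    return recon + weight * second_order   # 'weight' is a guess

# Hypothetical shapes: 31-band HSI recovered from an RGB input.
pred = torch.rand(2, 31, 64, 64, requires_grad=True)
target = torch.rand(2, 31, 64, 64)
loss = sopc_loss(pred, target)
loss.backward()
```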


Bridging the Gap: Using Deep Acoustic Representations to Learn Grounded Language from Percepts and Raw Speech

arXiv.org Artificial Intelligence

Learning to understand grounded language, which connects natural language to percepts, is a critical research area. Prior work in grounded language acquisition has focused primarily on textual inputs. In this work we demonstrate the feasibility of performing grounded language acquisition on paired visual percepts and raw speech inputs. This will allow interactions in which language about novel tasks and environments is learned from end users, reducing dependence on textual inputs and potentially mitigating the effects of demographic bias found in widely available speech recognition systems. We leverage recent work in self-supervised speech representation models and show that learned representations of speech can make language grounding systems more inclusive towards specific groups while maintaining or even increasing general performance.
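
As a rough illustration of the recipe described (not the authors' code), the sketch below embeds raw speech with a pretrained self-supervised model from torchaudio and scores it against a paired visual feature vector in a shared grounding space; the projection sizes and the similarity setup are assumptions:

```python
# Sketch: ground raw speech against visual percepts via learned
# projections into a shared space. Sizes and setup are assumptions.
import torch
import torch.nn.functional as F
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
speech_encoder = bundle.get_model().eval()

def embed_speech(waveform: torch.Tensor) -> torch.Tensor:
    """Mean-pool wav2vec 2.0 frame features into one utterance vector."""
    with torch.no_grad():
        features, _ = speech_encoder(waveform)  # (1, frames, 768)
    return features.mean(dim=1)                 # (1, 768)

# Learned projections into a shared grounding space (sizes assumed).
speech_proj = torch.nn.Linear(768, 256)
visual_proj = torch.nn.Linear(2048, 256)  # e.g. pooled CNN features

def grounding_score(waveform, visual_feats):
    s = F.normalize(speech_proj(embed_speech(waveform)), dim=-1)
    v = F.normalize(visual_proj(visual_feats), dim=-1)
    return (s * v).sum(dim=-1)  # cosine similarity of the pair

# Hypothetical inputs: 1 second of 16 kHz audio, one visual vector.
score = grounding_score(torch.randn(1, bundle.sample_rate),
                        torch.randn(1, 2048))
```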


AI mathematician and a planetary diet -- the week in infographics

#artificialintelligence

An unprecedented number of first-time investigators have secured viewing time on NASA's Hubble Space Telescope in the years since the agency overhauled the application process to reduce systemic biases. In 2018, NASA changed the way it evaluates requests for observing time on Hubble by introducing a 'double-blind' system, in which neither the applicants nor the reviewers assessing their proposals know each other's identities. All the agency's other telescopes followed suit the next year. The move was intended to cut discrimination on the basis of gender and other factors, including bias against scientists who are at small research institutions, or who haven't received NASA grants before. Data from the Space Telescope Science Institute (STScI) in Baltimore, Maryland, which manages Hubble, show that since the change was introduced, more first-time principal investigators have been securing viewing time on Hubble. How do mathematicians come up with new theories?


How statistics can aid in the fight against misinformation: Machine learning model detects misinformation, is inexpensive, and is transparent

#artificialintelligence

With the use of algorithms and computer models, machine learning is increasingly playing a role in helping to stop the spread of misinformation, but a main challenge for scientists is the black box of unknowability: researchers don't understand how the machine arrives at the same decisions as its human trainers. Using a Twitter dataset of misinformation tweets about COVID-19, Zois Boukouvalas, assistant professor in AU's Department of Mathematics and Statistics in the College of Arts and Sciences, shows how statistical models can detect misinformation on social media during events like a pandemic or a natural disaster. In newly published research, Boukouvalas and his colleagues, including AU student Caitlin Moroney and computer science professor Nathalie Japkowicz, also show how the model's decisions align with those made by humans. "We would like to know what a machine is thinking when it makes decisions, and how and why it agrees with the humans that trained it," Boukouvalas said. "We don't want to block someone's social media account because the model makes a biased decision."
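
One standard way to quantify the human-model agreement the researchers describe is Cohen's kappa, which corrects raw agreement for chance. The labels below are hypothetical, and the metric choice is ours for illustration, not necessarily the paper's:

```python
# Quantify how well model decisions align with human judgments.
# Labels are hypothetical; this illustrates the general check only.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

human_labels = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = misinformation
model_labels = [1, 0, 1, 0, 0, 0, 1, 0]  # model's predictions

# Kappa corrects raw agreement for agreement expected by chance:
# 1.0 is perfect alignment, 0.0 is chance-level.
print("Cohen's kappa:", cohen_kappa_score(human_labels, model_labels))

# The confusion matrix shows *where* model and humans disagree,
# e.g. accounts that would be wrongly flagged (false positives).
print(confusion_matrix(human_labels, model_labels))
```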


Booz Allen opens 5G R&D lab in Maryland

ZDNet

Booz Allen Hamilton revealed a major expansion of its 5G research capabilities with the launch of a new development lab in Annapolis Junction, Maryland. The facility will be tasked with furthering Booz Allen's exploration of 5G integration and deployment for both public and private customers. On-site capabilities include a 5G Standalone (SA) carrier-grade network, an SA mobile core, and Radio Access Network (RAN) hardware, as well as support for edge computing and multi-band testing. Booz Allen plans to combine the lab's assets with its existing cloud, network, and security offerings to provide a "testbed for cyber resiliency, artificial intelligence and machine learning (AI/ML) modeling, and integrated internet-of-things (IoT), immersive, and emerging applications development." The strategy at the Maryland facility includes helping customers plan for 5G adoption, focusing on the creation of specific use cases for real-world implementation while also accounting for technological readiness and developing network policies.


Fed2: Feature-Aligned Federated Learning

arXiv.org Artificial Intelligence

Federated learning learns from scattered data by fusing collaborative models from local nodes. However, conventional coordinate-based model averaging by FedAvg ignores the random information encoded per parameter and may suffer from structural feature misalignment. In this work, we propose Fed2, a feature-aligned federated learning framework that resolves this issue by establishing firm structure-feature alignment across the collaborative models. Fed2 is composed of two major designs: first, a feature-oriented model structure adaptation method that ensures explicit feature allocation in different neural network structures. By applying this structure adaptation to the collaborative models, matchable structures with similar feature information can be initialized at the very early training stage. During the federated learning process, we then propose a feature-paired averaging scheme that guarantees aligned feature distributions and avoids feature fusion conflicts under both IID and non-IID scenarios. As a result, Fed2 effectively enhances federated learning convergence under extensive homogeneous and heterogeneous settings, providing excellent convergence speed, accuracy, and computation/communication efficiency.
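
For context, the coordinate-wise FedAvg baseline the abstract contrasts against can be sketched as a weighted parameter-by-position average; Fed2's feature-paired averaging (matching parameters by the features they encode rather than by their positions) is not reproduced here. The client counts and toy model are hypothetical:

```python
# Minimal FedAvg sketch: weighted coordinate-wise averaging of
# client models. Fed2 replaces this position-based fusion with
# feature-paired averaging, which is not shown here.
import copy
import torch

def fedavg(client_states, client_sizes):
    """Weighted coordinate-wise average of client state_dicts."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg

# Hypothetical: three clients fine-tune copies of the same tiny model.
model = torch.nn.Linear(4, 2)
clients = [copy.deepcopy(model) for _ in range(3)]
for c in clients:                       # stand-in for local training
    with torch.no_grad():
        for p in c.parameters():
            p.add_(0.01 * torch.randn_like(p))

global_state = fedavg([c.state_dict() for c in clients], [100, 50, 50])
model.load_state_dict(global_state)
```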


How Deep Are the Fakes? Focusing on Audio Deepfake: A Survey

arXiv.org Artificial Intelligence

A deepfake is content that is synthetically generated or manipulated using artificial intelligence (AI) methods in order to be passed off as real; it can include audio, video, image, and text synthesis. This survey takes a different perspective from existing survey papers, which mostly focus on video and image deepfakes. It not only evaluates generation and detection methods across the different deepfake categories, but focuses primarily on audio deepfakes, which are overlooked in most existing surveys. The paper critically analyzes and provides a unique source of audio deepfake research, mostly ranging from 2016 to 2020. To the best of our knowledge, this is the first survey focusing on audio deepfakes in English. The survey provides readers with a summary of 1) the different deepfake categories; 2) how they can be created and detected; 3) the most recent trends in this domain and shortcomings in detection methods; and 4) audio deepfakes and how they are created and detected in more detail, which is the main focus of this paper. We found that generative adversarial networks (GANs), convolutional neural networks (CNNs), and deep neural networks (DNNs) are common means of creating and detecting deepfakes. In our evaluation of over 140 methods, we found that the majority of the focus is on video deepfakes, and in particular on their generation. For text deepfakes there are more generation methods but very few robust detection methods, including fake news detection, which has become a controversial research area because of its potential for heavy overlap with human-generated fake content. This paper is an abbreviated version of the full survey and reveals a clear need for research on audio deepfakes, particularly their detection.
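
As a minimal illustration of the CNN-based detection approach the survey identifies as common (the architecture and sizes are assumptions, not drawn from any surveyed method), one can classify mel-spectrograms of clips as bona fide versus spoofed:

```python
# Illustrative audio deepfake detector: a small CNN over
# mel-spectrograms. Architecture and sizes are assumptions.
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_mels=64)

detector = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # pool to one vector per clip
    nn.Flatten(),
    nn.Linear(32, 2),         # logits: [bona fide, deepfake]
)

# Hypothetical batch: four 1-second clips at 16 kHz.
waveforms = torch.randn(4, 16000)
spectrograms = mel(waveforms).unsqueeze(1)  # (4, 1, 64, frames)
logits = detector(torch.log1p(spectrograms))
print(logits.shape)  # torch.Size([4, 2])
```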