AI reveals link between family history and type 1 diabetes risks - Futurity

#artificialintelligence

A new data-driven approach is offering insight into people with type 1 diabetes, who account for about 5-10% of all diabetes diagnoses. The researchers gathered information through health informatics and applied artificial intelligence (AI) to better understand the disease. In the study, they analyzed publicly available, real-world data from about 16,000 participants enrolled in the T1D Exchange Clinic Registry. By applying a contrast pattern mining algorithm, researchers were able to identify major differences in health outcomes among people living with type 1 diabetes who do or do not have an immediate family history of the disease.


Mizzou team uses AI to advance knowledge of Type 1 diabetes

#artificialintelligence

An interdisciplinary team of researchers from the University of Missouri, Children's Mercy Kansas City, and Texas Children's Hospital has used a new data-driven approach to learn more about persons with Type 1 diabetes, who account for about 5-10% of all diabetes diagnoses. The team gathered its information through health informatics and applied artificial intelligence (AI) to better understand the disease. In the study, the team analyzed publicly available, real-world data from about 16,000 participants enrolled in the T1D Exchange Clinic Registry. By applying a contrast pattern mining algorithm developed at the MU College of Engineering, the team was able to identify major differences in health outcomes among people living with Type 1 diabetes who do or do not have an immediate family history of the disease. Chi-Ren Shyu, the director of the MU Institute for Data Science and Informatics (MUIDSI), led the AI approach used in the study and said the technique is exploratory.
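
To make the general idea concrete, here is a minimal, hypothetical sketch of what contrast pattern mining does: enumerate small combinations of categorical attributes and keep those whose support differs sharply between two cohorts. This is not the MU team's algorithm, and the attribute names (cgm_user, dka_history, early_onset) are invented purely for illustration.

```python
from itertools import combinations

def support(pattern, records):
    """Fraction of records containing every item in the pattern."""
    return sum(1 for r in records if pattern <= r) / len(records)

def contrast_patterns(group_a, group_b, min_diff=0.2, max_len=2):
    """Enumerate small itemsets whose support differs between two cohorts.

    group_a, group_b: lists of sets of categorical items. Returns
    (pattern, supp_a, supp_b) tuples where the absolute support
    difference is at least min_diff, strongest contrasts first.
    """
    items = sorted(set().union(*group_a, *group_b))
    results = []
    for k in range(1, max_len + 1):
        for combo in combinations(items, k):
            p = frozenset(combo)
            sa, sb = support(p, group_a), support(p, group_b)
            if abs(sa - sb) >= min_diff:
                results.append((p, sa, sb))
    return sorted(results, key=lambda t: -abs(t[1] - t[2]))

# Toy cohorts: patients with vs. without a family history of the disease.
with_fh = [{"cgm_user", "early_onset"}, {"early_onset"}, {"cgm_user"}]
without_fh = [{"dka_history"}, {"dka_history", "cgm_user"}, set()]
for pattern, sa, sb in contrast_patterns(with_fh, without_fh):
    print(sorted(pattern), f"support: {sa:.2f} vs {sb:.2f}")
```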


Weakly Correlated Knowledge Integration for Few-shot Image Classification - Machine Intelligence Research

#artificialintelligence

Colored figures are available in the online version at https://link.springer.com/journal/11633. He is currently a faculty member with the School of Computer and Communication Engineering, University of Science and Technology Beijing, China. His research interests include pattern recognition, classifier ensemble, and document analysis and recognition. Chang Liu received the B.Sc. degree in computer science from the University of Science and Technology Beijing, China in 2016, where he is a Ph.D. candidate. His research interests include text detection, few-shot learning, and text recognition.


Mental Stress Detection using Data from Wearable and Non-wearable Sensors: A Review

arXiv.org Artificial Intelligence

This paper presents a comprehensive review of significant subjective and objective human stress detection techniques available in the literature. Methods for measuring human stress responses include subjective questionnaires (developed by psychologists) and objective markers observed in data from wearable and non-wearable sensors. In particular, wearable sensor-based methods commonly use data from electroencephalography, electrocardiogram, galvanic skin response, electromyography, electrodermal activity, heart rate, heart rate variability, and photoplethysmography, both individually and in multimodal fusion strategies. Methods based on non-wearable sensors include strategies such as analyzing pupil dilation and speech, smartphone data, eye movement, body posture, and thermal imaging. Whenever an individual encounters a stressful situation, physiological, physical, or behavioral changes are induced that help in coping with the challenge at hand. A wide range of studies has attempted to establish a relationship between these stressful situations and human responses using different kinds of psychological, physiological, physical, and behavioral measures. Motivated by the absence of a definitive verdict on the relationship between human stress and these different kinds of markers, this paper conducts a detailed survey of human stress detection methods. In particular, we explore how stress detection methods can benefit from artificial intelligence applied to relevant data from various sources. This review is intended as a reference document providing guidelines for future research on the effective detection of human stress conditions.
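
As a concrete taste of the wearable-sensor side of this literature, the sketch below computes two standard time-domain heart rate variability features (SDNN and RMSSD) from RR intervals. These are among the kinds of markers such reviews discuss; a real stress detector would combine many features across modalities, so this is only an illustrative fragment.

```python
import math

def hrv_features(rr_ms):
    """Time-domain HRV features often used as acute-stress markers.

    rr_ms: consecutive RR (inter-beat) intervals in milliseconds.
    Lower SDNN/RMSSD is commonly associated with higher acute stress.
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all RR intervals.
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive RR differences.
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"hr_bpm": 60000.0 / mean_rr, "sdnn_ms": sdnn, "rmssd_ms": rmssd}

# Example: RR intervals around 800 ms (~75 bpm).
print(hrv_features([812, 790, 805, 798, 820, 786, 801]))
```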


Memory Efficient Tries for Sequential Pattern Mining

arXiv.org Artificial Intelligence

Sequential Pattern Mining (SPM) is a prominent topic in unsupervised learning that aims at finding frequent patterns of events in sequential datasets. Frequent patterns have a wide range of applications and are used, for example, to develop novel association rules, aid supervised learners in prediction tasks, and design effective recommender systems. While supervised learning algorithms have enjoyed great success in using large datasets for better prediction accuracy, unsupervised algorithms such as SPM still face challenges in scalability and memory requirements. In particular, the two dominant SPM methodologies, Apriori (Agrawal et al., 1994) and prefix-projection (Han et al., 2001), suffer from an explosion of candidate patterns or require the entire large training dataset to fit in memory. This memory bottleneck is aggravated by the steady growth of dataset sizes in recent years, as larger datasets may contain a larger and richer set of frequent patterns to be investigated. It is thus vital for the success of SPM algorithms that they adapt to their rapidly growing data environment. This paper investigates the role of dataset models in the time and memory efficiency of SPM algorithms.
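
As background for the trie-based dataset models the paper studies, here is a minimal sketch of the general idea: a prefix tree stores shared pattern prefixes once, so counting many overlapping patterns does not require materializing each one separately. For brevity this counts only contiguous subsequences, whereas full SPM also handles gapped subsequences; it illustrates the data structure, not the paper's specific design.

```python
class TrieNode:
    __slots__ = ("children", "count")  # __slots__ keeps per-node memory small

    def __init__(self):
        self.children = {}
        self.count = 0

def insert_subsequences(root, sequence, max_len=3):
    """Count all contiguous subsequences (up to max_len) of one sequence."""
    for i in range(len(sequence)):
        node = root
        for item in sequence[i:i + max_len]:
            node = node.children.setdefault(item, TrieNode())
            node.count += 1

def frequent_patterns(node, min_count, prefix=()):
    """Yield (pattern, count) pairs meeting the support threshold."""
    for item, child in node.children.items():
        pattern = prefix + (item,)
        if child.count >= min_count:
            yield pattern, child.count
        yield from frequent_patterns(child, min_count, pattern)

root = TrieNode()
for seq in [["a", "b", "c"], ["a", "b", "d"], ["b", "c"]]:
    insert_subsequences(root, seq, max_len=2)
for pattern, count in sorted(frequent_patterns(root, min_count=2)):
    print(pattern, count)
```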


Image Captioning as an Assistive Technology: Lessons Learned from VizWiz 2020 Challenge

Journal of Artificial Intelligence Research

Image captioning has recently demonstrated impressive progress, largely owing to the introduction of neural network algorithms trained on curated datasets like MS-COCO. Work in this field is often motivated by the promise of deploying captioning systems in practical applications. However, the scarcity of data and contexts in many competition datasets limits the utility of systems trained on them as assistive technology in real-world settings, such as helping visually impaired people navigate and accomplish everyday tasks. This gap motivated the introduction of the novel VizWiz dataset, which consists of images taken by visually impaired people and captions that contain useful, task-oriented information. In an attempt to help the machine learning and computer vision fields realize their promise of producing technologies with positive social impact, the curators of the VizWiz dataset host several competitions, including one for image captioning. This work details the theory and engineering behind our winning submission to the 2020 captioning competition. Our work provides a step towards improved assistive image captioning systems. This article appears in the special track on AI & Society.


Multi-Graph Fusion Networks for Urban Region Embedding

arXiv.org Artificial Intelligence

Learning embeddings for urban regions from human mobility data can reveal the functionality of regions and thereby enable correlated but distinct tasks such as crime prediction. Human mobility data contains rich and abundant information, which can yield comprehensive region embeddings for cross-domain tasks. In this paper, we propose multi-graph fusion networks (MGFN) to enable such cross-domain prediction tasks. First, we integrate graphs with spatio-temporal similarity into mobility patterns through a mobility graph fusion module. Then, in the mobility pattern joint learning module, we design a multi-level cross-attention mechanism to learn comprehensive embeddings from multiple mobility patterns based on intra-pattern and inter-pattern messages. Finally, we conduct extensive experiments on real-world urban datasets. Experimental results demonstrate that the proposed MGFN outperforms state-of-the-art methods by up to 12.35%.
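
As a rough illustration of the cross-attention ingredient (not MGFN's actual architecture), the PyTorch sketch below lets region embeddings derived from one mobility pattern attend to the same regions under another pattern. The day/night pattern split, dimensions, and module name are invented for the example.

```python
import torch
import torch.nn as nn

class PatternCrossAttention(nn.Module):
    """Generic cross-attention between region embeddings from two
    mobility patterns (a simplified stand-in, not MGFN's module)."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pattern_q, pattern_kv):
        # Each region in pattern_q attends to all regions in pattern_kv.
        fused, _ = self.attn(pattern_q, pattern_kv, pattern_kv)
        return self.norm(pattern_q + fused)  # residual + layer norm

# 32 regions, 64-dim embeddings from two spatio-temporal mobility patterns.
day_pattern = torch.randn(1, 32, 64)
night_pattern = torch.randn(1, 32, 64)
fusion = PatternCrossAttention(dim=64)
region_embeddings = fusion(day_pattern, night_pattern)
print(region_embeddings.shape)  # torch.Size([1, 32, 64])
```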


Observing how deep neural networks understand physics through the energy spectrum of one-dimensional quantum mechanics

arXiv.org Machine Learning

In recent times, deep neural networks (DNNs) have made remarkable progress in image recognition, natural language processing, voice recognition, and anomaly detection through numerous technological breakthroughs. Among these breakthroughs, the residual connection prevents gradient loss even when neural networks (NNs) have many layers, contributing to very high image recognition capability [1]. Another example is the attention mechanism, which succeeded in connecting neurons in distant locations, a shortcoming of convolutional neural networks (CNNs), and has significantly advanced fields such as translation, where relationships between distant words are essential [2]. In addition to accuracy improvements, NNs have begun generating new images or sentences by themselves, and their performance has been improving rapidly [3, 4]. In a slightly different field, the combination of DNNs and reinforcement learning has rendered humans incapable of competing with computers in board games such as Go and Shogi, where human intuition was previously superior [5, 6].
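
For readers unfamiliar with the residual connection mentioned above, here is a minimal PyTorch sketch of the idea: the identity path in output = x + F(x) gives gradients a direct route around each block, which is why very deep stacks remain trainable. The layer sizes and block count are arbitrary.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual connection: output = x + F(x).

    The identity path lets gradients bypass the block body,
    preventing them from vanishing in very deep networks.
    """

    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection

x = torch.randn(8, 16, requires_grad=True)
deep = nn.Sequential(*[ResidualBlock(16) for _ in range(50)])
deep(x).sum().backward()
print(x.grad.abs().mean())  # gradient still nonzero after 50 blocks
```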


Realizing Machine Learning's Promise in Geoscience Remote Sensing - Eos

#artificialintelligence

In recent years, machine learning and pattern recognition methods have become common in Earth and space sciences. This is especially true for remote sensing applications, which often rely on massive archives of noisy data and so are well suited to such artificial intelligence (AI) techniques. As the data science revolution matures, we can assess its impact on specific research disciplines. We focus here on imaging spectroscopy, also known as hyperspectral imaging, as a data-centric remote sensing discipline expected to benefit from machine learning. Imaging spectroscopy involves collecting spectral data from airborne and satellite sensors at hundreds of electromagnetic wavelengths for each pixel in the sensors' viewing area.
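
To make the data layout concrete: an imaging spectrometer produces a cube with two spatial axes and one spectral axis, so every pixel carries a full spectrum. The NumPy sketch below uses invented dimensions (224 bands loosely echoes instruments like AVIRIS) and arbitrary band indices, purely for illustration.

```python
import numpy as np

# Toy hyperspectral cube: 100 x 100 pixels, 224 spectral bands,
# filled with random values in place of real radiance measurements.
rng = np.random.default_rng(0)
cube = rng.random((100, 100, 224)).astype(np.float32)

# Each pixel holds a full spectrum along the wavelength axis.
pixel_spectrum = cube[42, 17, :]  # shape (224,)
print(pixel_spectrum.shape)

# A simple classical band index: normalized difference of two bands,
# analogous to NDVI when the bands sit in the red and near-infrared.
red, nir = cube[:, :, 30], cube[:, :, 120]
ndvi_like = (nir - red) / (nir + red + 1e-8)
print(ndvi_like.shape)  # (100, 100)
```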


A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges

arXiv.org Artificial Intelligence

This is Part II of a two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and vector distributed representations. Holographic Reduced Representations is an influential HDC/VSA model that is well known in the machine learning domain and often used to refer to the whole family; however, for the sake of consistency, we use HDC/VSA to refer to the area. Part I of this survey covered foundational aspects of the area, such as the historical context leading to the development of HDC/VSA, the key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, and directions for future work. Most of the applications lie within the machine learning/artificial intelligence domain; however, we also cover other applications to provide a thorough picture. The survey is written to be useful for both newcomers and practitioners.
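
To ground the "algebraic properties of key operations" mentioned above, here is a minimal sketch of one common HDC/VSA style (multiply-add-permute with bipolar vectors; Holographic Reduced Representations would instead bind via circular convolution): binding by elementwise multiplication, bundling by majority vote, and similarity by a normalized dot product.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # high dimensionality makes random vectors quasi-orthogonal

def rand_hv():
    return rng.choice([-1, 1], size=D)

def bind(a, b):     # elementwise multiply: role-filler binding
    return a * b

def bundle(*vs):    # elementwise majority vote: superposition of items
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):      # normalized dot product in [-1, 1]
    return float(a @ b) / D

color, shape = rand_hv(), rand_hv()   # role vectors
red, circle = rand_hv(), rand_hv()    # filler vectors

# Encode the record {color: red, shape: circle} as a single vector.
record = bundle(bind(color, red), bind(shape, circle))

# Unbinding with a role recovers a noisy version of its filler.
probe = bind(record, color)
print(f"sim(probe, red)    = {sim(probe, red):.2f}")     # high (~0.5)
print(f"sim(probe, circle) = {sim(probe, circle):.2f}")  # near 0
```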