A Hybrid Deep Learning Model for Predictive Flood Warning and Situation Awareness using Channel Network Sensors Data

arXiv.org Machine Learning

The objective of this study is to create and test a hybrid deep learning model, FastGRNN-FCN (Fast, Accurate, Stable and Tiny Gated Recurrent Neural Network-Fully Convolutional Network), for urban flood prediction and situation awareness using channel network sensor data. The study used Harris County, Texas as the testbed and obtained channel sensor data from three historical flood events (the 2016 Tax Day flood, the 2016 Memorial Day flood, and the 2017 Hurricane Harvey flood) for training and validating the hybrid deep learning model. The flood data are divided into multivariate time series and used as the model input. Each input comprises nine variables, including information about the studied channel sensor and its predecessor and successor sensors in the channel network. The precision-recall curve and the F-measure are used to identify the optimal set of model parameters. The optimal model, with a weight of 1 and a critical threshold of 0.59, is obtained through one hundred iterations that examine different weights and thresholds. The test accuracy and F-measure reach 97.8% and 0.792, respectively. The model is then tested on the 2019 Imelda flood in Houston, and the predictions show an excellent match with the observed flooding. The results show that the model enables accurate prediction of the spatial-temporal flood propagation and recession and provides emergency response officials with a predictive flood warning tool for prioritizing flood response and resource allocation strategies.
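As a minimal illustration of the threshold-selection step described in this abstract, the sketch below picks a decision threshold by maximizing a weighted F-measure along a precision-recall curve with scikit-learn; the function name, the synthetic labels and scores, and where this sits in the pipeline are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: choose a decision threshold by maximizing the weighted
# F-measure on a precision-recall curve (not the authors' implementation).
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_threshold(y_true, y_score, beta=1.0):
    """Return the threshold that maximizes the F_beta score.

    y_true  : binary flood / no-flood labels per sensor and time step
    y_score : predicted flood probabilities from the classifier
    beta    : weight on recall (beta=1 gives the ordinary F1 measure)
    """
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    # precision/recall have one more element than thresholds; drop the last point.
    p, r = precision[:-1], recall[:-1]
    f_beta = (1 + beta**2) * p * r / np.maximum(beta**2 * p + r, 1e-12)
    best = np.argmax(f_beta)
    return thresholds[best], f_beta[best]

# Toy usage with random scores standing in for model output.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=1000), 0, 1)
thr, f1 = best_threshold(y_true, y_score, beta=1.0)
print(f"chosen threshold = {thr:.2f}, F1 = {f1:.3f}")
```

With beta = 1 the score reduces to the ordinary F1 measure, which corresponds to the "weight of 1" reported in the abstract.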


High Temporal Resolution Rainfall Runoff Modelling Using Long-Short-Term-Memory (LSTM) Networks

arXiv.org Machine Learning

Accurate and efficient models for rainfall runoff (RR) simulations are crucial for flood risk management. Most rainfall-runoff models in use today are process-driven; i.e., they solve either simplified empirical formulas or some variation of the St. Venant (shallow water) equations. With the development of machine-learning techniques, we may now be able to emulate rainfall-runoff models using, for example, neural networks. In this study, a data-driven RR model using a sequence-to-sequence Long Short-Term Memory (LSTM) network was constructed. The model was tested for a watershed in Houston, TX, known for severe flood events. The LSTM network's capability to learn long-term dependencies between the input and output of the network allowed modeling RR with high temporal resolution (15 minutes). Using 10 years of precipitation data from 153 rainfall gages and river channel discharge data (more than 5.3 million data points), several numerical tests were designed to evaluate the developed model's performance in predicting river discharge. The model results were also compared with the output of a process-driven model, the Gridded Surface Subsurface Hydrologic Analysis (GSSHA). Moreover, the physical consistency of the LSTM model was explored. The results showed that the LSTM model was able to predict discharge efficiently and achieve good performance. Compared to GSSHA, the data-driven model was more efficient and robust in terms of prediction and calibration. Interestingly, the performance of the LSTM model improved (test Nash-Sutcliffe model efficiency from 0.666 to 0.942) when a subset of rainfall gages, selected based on model performance, was used as input instead of all rainfall gages.
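To make the modeling setup more concrete, here is a minimal sketch, assuming a Keras sequence-to-sequence LSTM that maps a window of per-gage precipitation to a window of discharge, together with the Nash-Sutcliffe efficiency used as the skill score; the layer sizes, window length, and variable names are illustrative guesses, not the configuration used in the paper.

```python
# Minimal sketch (not the paper's configuration): a sequence-to-sequence LSTM
# mapping precipitation sequences to discharge, plus the Nash-Sutcliffe efficiency.
import numpy as np
import tensorflow as tf

N_GAGES, SEQ_LEN = 153, 96  # assumed window: 96 fifteen-minute steps = 24 hours

inputs = tf.keras.Input(shape=(SEQ_LEN, N_GAGES))                 # rainfall at each gage
hidden = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)  # keep the whole output sequence
outputs = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))(hidden)  # discharge per step
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Toy usage with random arrays standing in for gage and discharge records.
x = np.random.rand(32, SEQ_LEN, N_GAGES).astype("float32")
y = np.random.rand(32, SEQ_LEN, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
print("NSE on toy data:", nash_sutcliffe(y.ravel(), model.predict(x, verbose=0).ravel()))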


Machine learning for protein folding and dynamics

arXiv.org Machine Learning

Frank Noé, Department of Mathematics and Computer Science, Freie Universität Berlin, Arnimallee 6, 14195 Berlin, Germany; Gianni De Fabritiis, Computational Science Laboratory, Universitat Pompeu Fabra, Barcelona Biomedical Research Park (PRBB), Doctor Aiguader 88, 08003 Barcelona, Spain, and Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, Barcelona 08010, Spain; Cecilia Clementi, Center for Theoretical Biological Physics and Department of Chemistry, Rice University, 6100 Main Street, Houston, Texas 77005, United States

Abstract: Many aspects of the study of protein folding and dynamics have been affected by recent advances in machine learning. Methods for predicting protein structures from their sequences are now heavily based on machine learning tools. The way simulations are performed to explore the energy landscape of protein systems is also changing, as force fields are beginning to be designed by means of machine learning methods. These methods are also used to extract the essential information from large simulation datasets and to enhance the sampling of rare events such as folding/unfolding transitions. While significant challenges still need to be tackled, we expect these methods to play an important role in the study of protein folding and dynamics in the near future. We discuss here the recent advances on all these fronts and the questions that need to be addressed for machine learning approaches to become mainstream in protein simulation.

Introduction: During the last couple of decades, advances in artificial intelligence and machine learning have revolutionized many application areas, such as image recognition and language translation. The key to this success has been the design of algorithms that can extract complex patterns and highly nontrivial relationships from large amounts of data and abstract this information in the evaluation of new data.


Deep Learning Networks Can't Generalize--But They're Learning from the Brain

#artificialintelligence

"Bias" in AI is often treated as a dirty word. But to Dr. Andreas Tolias at the Baylor College of Medicine in Houston, Texas, bias may also be the solution to smarter, more human-like AI. I'm not talking about societal biases--racial or gender, for example--that are passed onto our machine creations. Rather, it's a type of "beneficial" bias present in the structure of a neural network and how it learns. Similar to genetic rules that help initialize our brains well before birth, "inductive bias" may help narrow down the infinite ways artificial minds develop; for example, guiding them down a "developmental" path that eventually makes them more flexible.


Sr. Acoustic Modeling and Machine Learning Engineer - NeoSensory Jobs on AngelList

#artificialintelligence

TensorFlow, Theano, Torch, Kaldi, etc.)
- 3 years of commercial R&D experience relating to acoustic/speech modeling and/or audio DSP
- 2 years of graduate-level academic research experience in acoustic/speech modeling and/or audio DSP
Suggested experience:
- Statistical and Audio Digital Signal Processing (Linear Systems)
- Mathematical optimization (designing cost functions, adversarial/perceptual methods to improve audio quality, etc.)
- Familiarity with hardware/embedded systems
- Linguistics and speech modeling
Neosensory provides a competitive compensation package, stock options, benefits, and a fun work environment. We're located within a five-minute walk from the California Ave Caltrain station in a nice area of Palo Alto. We are also about to launch a second office in Houston, Texas. Team Neosensory is made up of an awesome group of intellectual individuals who value hard work and enjoy sharing a diverse set of hobbies.


Google Says Its AI Can Detect Breast Cancer With 99% Accuracy

#artificialintelligence

Houston, Texas, USA: Google on Friday claimed that its AI algorithm can assist doctors in detecting metastatic breast cancer with 99 percent accuracy, according to papers published in the Archives of Pathology and Laboratory Medicine and The American Journal of Surgical Pathology. The technology, known as Lymph Node Assistant, or LYNA, is trained to check pathology slides for abnormalities and to accurately pinpoint the location of cancers and other suspicious regions, some of which are too small to be spotted by doctors. In their latest research, Google applied LYNA to a de-identified dataset from the Camelyon Challenge and an independent dataset from the Naval Medical Center San Diego to pick out cancer cells in tissue images. Metastatic tumors -- cancerous cells that break away from their tissue of origin, travel through the body via the circulatory or lymph systems, and form new tumors in other parts of the body -- are notoriously difficult to detect. A 2009 study of 102 breast cancer patients at two Boston health centers found that one in four were affected by "process of care" failures such as inadequate physical examinations and incomplete diagnostic tests.