Water Management


Using Big Data Analytics for Transboundary Water Management

#artificialintelligence

Southern Africa has experienced drought-flood cycles for the past decade that strain any single country's ability to manage its water resources properly. This dynamic is exacerbated by human drivers such as the heavy reliance of sectors such as mining and agriculture on groundwater and surface water, as well as subsistence agriculture in rural areas along rivers. These factors have progressively depleted natural freshwater systems and contributed to an accumulation of sediment in river systems. In a region where two or more countries share many of the groundwater and surface water resources, water security cuts across the socioeconomic divide and is both a rural and an urban issue. For example, the City of Cape Town had to heavily ration all water uses in 2017 and 2018 as its dams were drying up.


Anomaly Detection from Head and Abdominal Fetal ECG -- A Case study of IOT anomaly detection using Generative Adversarial Networks

#artificialintelligence

Waterborne diseases affect more than 2 billion people worldwide, causing a substantial economic burden. For example, the treatment of waterborne diseases costs more than $2 billion annually in the United States alone, with 90 million cases recorded per year. Among waterborne pathogen-related problems, one of the most common public health concerns is the presence of total coliform bacteria and Escherichia coli (E. coli). Traditional culture-based bacteria detection methods often take 24-48 hours, followed by visual inspection and colony counting by an expert, according to the United States Environmental Protection Agency (EPA) guidelines. Alternatively, molecular detection methods based on, for example, the amplification of nucleic acids can reduce the detection time to a few hours, but they generally lack the sensitivity for detecting bacteria at very low concentrations and are not capable of differentiating between live and dead microorganisms.


Simultaneous cross-evaluation of heterogeneous E. coli datasets via mechanistic simulation

Science

Can a bacterial cell model vet large datasets from disparate sources? Macklin et al. explored whether a comprehensive mathematical model can be used to verify or find conflicts in massive amounts of data that have been reported for the bacterium Escherichia coli, produced in thousands of papers from hundreds of labs. Although most data were consistent, there were data that could not accommodate known biological results, such as insufficient output of RNA polymerases and ribosomes to produce measured cell-doubling times. Other analyses showed that for some essential proteins, no RNA may be transcribed or translated in a cell's lifetime, but viability can be maintained without certain enzymes through a pool of stable metabolites produced earlier. Science, this issue p. eaav3751 (doi: 10.1126/science.aav3751)

INTRODUCTION

The generation of biological data is presenting us with one of the most demanding analysis challenges the world has ever faced, not only in terms of storage and accessibility, but more critically in terms of its extensive heterogeneity and variability. Although issues associated with heterogeneity and variability each represent major analysis problems on their own, the challenges posed by both in combination are even more difficult but also present greater opportunities. The problems arise because assessing the data's veracity means not only determining whether the data are reproducible but also, and perhaps more deeply, whether they are cross-consistent, meaning that the interpretations of multiple heterogeneous datasets all point to the same conclusion. The opportunities emerge because seemingly discrepant results across multiple studies and measurement modalities may not be due simply to the errors associated with particular techniques, but also to the complex, nonlinear, and highly interconnected nature of biology. Therefore, what is required are analysis methods that can integrate and evaluate multiple data types simultaneously and in the context of biological mechanisms.

RATIONALE

Here, we present a large-scale, integrated modeling approach to simultaneously cross-evaluate millions of heterogeneous data against themselves, based on an extensive computer model of Escherichia coli that accounts for the function of 1214 genes (or 43% of the well-annotated genes). The model incorporates an extensive set of diverse measurements compiled from thousands of reports and accounting for many decades of research performed in laboratories around the world. Curation of these data led to the identification of >19,000 parameter values, which we integrated by creating a computational model that brings molecular signaling and regulation of RNA and protein expression together with carbon and energy metabolism in the context of balanced growth. A major advantage of this modeling approach is that heterogeneous data are linked mechanistically through the simulated interaction of cellular processes, providing the most natural, intuitive interpretation of an integrated dataset. Thus, this model enabled us to assess the cross-consistency of all of these datasets as an integrated whole.

RESULTS

We assessed the cross-consistency of the parameter set and identified areas of inconsistency by populating our model with the literature-derived parameters and by running detailed simulations of cellular life cycles. Although analysis of these simulations showed that most of the data were in fact cross-consistent, we also identified critical areas in which the data incorporated in our model were not. These inconsistencies led to readily observable consequences, including that the total output of the ribosomes and RNA polymerases described by the data are not sufficient for a cell to reproduce measured doubling times, that measured metabolic parameters are neither fully compatible with each other nor with overall growth, and that essential proteins are absent during the cell cycle—and the cell is robust to this absence. After correcting for these inconsistencies, the model is capable of validatable predictions compared with previously withheld data. Finally, considering these data as a whole led to successful predictions in vitro, in this case protein half-lives.

CONCLUSION

Construction of a highly integrative and mechanistic mathematical model provided us with an opportunity to integrate and cross-validate a vast, heterogeneous dataset in E. coli, a process we now call "deep curation" to reflect the multiple layers of curation that we perform (analogous to "deep learning" and "deep sequencing"). By highlighting areas in which studies in E. coli contradict each other, our work suggests lines of fruitful experimental inquiry that may help to resolve discrepancies, leading to both new biological insights and a more coherent understanding of this critical model organism. We hope that this work, by demonstrating the value of a large-scale integrative approach to understanding, interpreting, and cross-validating large datasets, will inspire further efforts to comprehensively characterize other organisms of interest.

[Figure: Integrating experimental and computational components, scientists constructed a model of E. coli. The model described here resides as software (freely available on GitHub); the model depicted in the photo is composed of Corning plasticware and filter tips, network cables, and Mac accessories. Art: Erik Jacobsen; Photo: Bernard Andre]

The extensive heterogeneity of biological data poses challenges to analysis and interpretation. Construction of a large-scale mechanistic model of Escherichia coli enabled us to integrate and cross-evaluate a massive, heterogeneous dataset based on measurements reported by various groups over decades. We identified inconsistencies with functional consequences across the data, including that the total output of the ribosomes and RNA polymerases described by data are not sufficient for a cell to reproduce measured doubling times, that measured metabolic parameters are neither fully compatible with each other nor with overall growth, and that essential proteins are absent during the cell cycle—and the cell is robust to this absence. Finally, considering these data as a whole leads to successful predictions of new experimental outcomes, in this case protein half-lives.
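
To make the kind of cross-consistency check described above concrete, here is a minimal, self-contained sketch: it asks whether a reported ribosome count and elongation rate could double the proteome within a measured doubling time. All parameter values and the active-ribosome fraction are illustrative assumptions, not the curated numbers used in the actual model.

```python
# Minimal sketch of a cross-consistency check in the spirit of the paper:
# can the reported ribosome count and elongation rate double the proteome
# within the measured doubling time? All numbers below are illustrative
# placeholders, not the curated values used in the actual E. coli model.

def min_doubling_time_min(n_ribosomes, elongation_rate_aa_per_s,
                          proteome_aa, active_fraction=0.85):
    """Shortest doubling time (minutes) the given translation capacity allows."""
    aa_per_second = n_ribosomes * active_fraction * elongation_rate_aa_per_s
    return proteome_aa / aa_per_second / 60.0

# Hypothetical literature-derived parameters for a fast-growing cell.
params = {
    "n_ribosomes": 20_000,              # ribosomes per cell
    "elongation_rate_aa_per_s": 17.0,   # amino acids per ribosome per second
    "proteome_aa": 1.0e9,               # total amino acids in the proteome
}
measured_doubling_time_min = 40.0

required = min_doubling_time_min(**params)
consistent = required <= measured_doubling_time_min
print(f"Translation-limited doubling time: {required:.1f} min "
      f"(measured {measured_doubling_time_min:.1f} min) -> "
      f"{'consistent' if consistent else 'INCONSISTENT'}")
```

With these placeholder numbers the check reports an inconsistency, which is the same flavor of conflict (insufficient ribosome output for the measured doubling time) that the model surfaced from the real curated data.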


AI-powered smart imaging system for early detection of bacteria in water samples

#artificialintelligence

Early identification of pathogenic bacteria in food, water, and bodily fluids is essential yet challenging, owing to sample complexities and the large sample volumes that need to be rapidly screened. Existing screening methods based on plate counting or molecular analysis involve trade-offs among detection time, accuracy/sensitivity, cost, and sample-preparation complexity. A team of scientists led by Professor Aydogan Ozcan of the Electrical and Computer Engineering Department at the University of California, Los Angeles (UCLA), U.S., and co-workers has presented an AI-powered smart imaging system for early detection and classification of live bacteria in water samples. This computational live-bacteria detection system is based on holography and is highly sensitive: it continuously captures microscopic images of an entire culture plate, on which bacteria grow, and rapidly detects colony growth by analyzing these time-lapse images with a deep neural network.
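
As a rough illustration of the time-lapse idea (not the published holographic pipeline), the sketch below accumulates frame-to-frame intensity growth over a registered image stack and flags regions that keep growing; the thresholds and the `candidate_colonies` helper are assumptions for illustration, and the real system reconstructs holograms and uses a trained deep neural network instead of this simple heuristic.

```python
# Minimal sketch of time-lapse colony-growth detection, assuming a stack of
# registered plate images with shape (T, H, W) as a NumPy array.
import numpy as np
from scipy import ndimage

def candidate_colonies(frames, diff_threshold=6.0, min_area=20):
    """Return labeled regions whose intensity keeps increasing over time."""
    diffs = np.diff(frames.astype(np.float32), axis=0)   # frame-to-frame change
    growth = np.clip(diffs, 0, None).sum(axis=0)         # accumulated growth signal
    mask = growth > diff_threshold                        # candidate growing pixels
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    return labels, keep

# Usage with synthetic data: a noisy background plus one region whose
# intensity grows steadily over 12 frames.
frames = np.random.normal(0.0, 0.5, size=(12, 256, 256)).astype(np.float32)
yy, xx = np.ogrid[:256, :256]
spot = ((yy - 128) ** 2 + (xx - 128) ** 2) < 10 ** 2
frames += spot * np.arange(12, dtype=np.float32)[:, None, None]  # growing "colony"
labels, keep = candidate_colonies(frames)
print(f"{len(keep)} candidate growing region(s) detected")
```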


New AI Enables Rapid Detection of Harmful Bacteria

#artificialintelligence

Testing for pathogens is a critical component of maintaining public health and safety. Having a method to rapidly and reliably test for harmful germs is essential for diagnosing diseases, maintaining clean drinking water, regulating food safety, conducting scientific research, and other important functions of modern society. In recent research, scientists from the University of California, Los Angeles (UCLA) have demonstrated that artificial intelligence (AI) can detect harmful bacteria from a water sample up to 12 hours faster than the current gold-standard Environmental Protection Agency (EPA) methods. In a new study published in Light: Science & Applications, the researchers created a time-lapse imaging platform that uses two separate deep neural networks (DNNs) for the detection and classification of bacteria. The team tested the high-throughput bacterial colony growth detection and classification system on water suspensions spiked with the coliform bacteria E. coli (including chlorine-stressed E. coli), K. pneumoniae, and K. aerogenes, grown on chromogenic agar as the culture medium.
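
The two-network structure described above could be sketched roughly as follows; the tiny architectures, patch shapes, and class counts are assumptions for illustration and are not the published models.

```python
# Minimal PyTorch sketch of a two-DNN pipeline: one network flags growing
# colonies in time-lapse patches, a second classifies the flagged colonies
# by species (E. coli, K. pneumoniae, K. aerogenes).
import torch
import torch.nn as nn

def small_cnn(in_channels, n_outputs):
    """Tiny convolutional network over a stack of time-lapse frames."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, n_outputs),
    )

detector = small_cnn(in_channels=8, n_outputs=2)    # growth vs. no growth
classifier = small_cnn(in_channels=8, n_outputs=3)  # 3 coliform species

patches = torch.randn(4, 8, 64, 64)                 # batch of 8-frame patches
is_growing = detector(patches).argmax(dim=1)
species = classifier(patches[is_growing == 1]).argmax(dim=1)
print(is_growing.tolist(), species.tolist())
```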


CC7640 Research Associate in Microbial genomics and Bioinformatics (fixed-term post) - Jobs at Bath

#artificialintelligence

We seek to recruit a full-time postdoctoral Research Associate in Microbial Genomics and Bioinformatics to work in the laboratory of Dr. Lauren Cowley on a grant funded by the Academy of Medical Sciences Springboard scheme, in collaboration with the Gastrointestinal Bacterial Reference Services at Public Health England (PHE). You will work on novel machine learning models to predict geographical source attribution from sequencing data of Shiga-toxigenic Escherichia coli and Salmonella. You will be responsible for training, testing, and developing prediction models on PHE-provided sequencing data, to help research whether sequencing data can automatically predict where a foodborne disease originated: from a returning traveller, imported food, or a domestic case. The position is funded at £39,152, and we expect to appoint at this starting salary for a fixed-term period of 15 months. You should hold, or be close to completing, a PhD in microbiology, genomics, bioinformatics, computer science, applied mathematics, or computational biology, with some experience in developing machine learning prediction models and processing large microbial sequencing datasets.
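
One common way such source-attribution models are built (a hedged sketch, not PHE's pipeline) is to represent each genome by its k-mer counts and train a supervised classifier on isolates with known origins; the k-mer size, classifier choice, and toy data below are assumptions for illustration.

```python
# Minimal sketch of geographic source attribution from sequencing data:
# k-mer count features per genome, fed to a supervised classifier.
from collections import Counter
from itertools import product
from sklearn.ensemble import RandomForestClassifier

def kmer_counts(sequence, k=4):
    """Vector of counts for all 4**k DNA k-mers in a genome sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    return [counts.get(km, 0) for km in kmers]

# Hypothetical training data: (genome sequence, country of origin) pairs.
genomes = ["ACGTACGTGGCA" * 50, "TTGACCAGTACG" * 50, "GGCATCAGTTAC" * 50]
origins = ["UK", "Spain", "Egypt"]

X = [kmer_counts(g) for g in genomes]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, origins)
print(model.predict([kmer_counts("ACGTACGTGGCA" * 50)]))  # likely ['UK'] on toy data
```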


Hybrid Attention Networks for Flow and Pressure Forecasting in Water Distribution Systems

arXiv.org Machine Learning

Multivariate geo-sensory time series prediction is challenging because of complex spatial and temporal correlations. In urban water distribution systems (WDS), numerous spatially correlated sensors have been deployed to continuously collect hydraulic data. Forecasts of the monitored flow and pressure time series are of vital importance for operational decision making, alerting, and anomaly detection. To address this challenge, we propose a hybrid dual-stage spatial-temporal attention-based recurrent neural network (hDS-RNN). Our model consists of two stages: a spatial attention-based encoder and a temporal attention-based decoder. Specifically, we propose a hybrid spatial attention mechanism that employs inputs along both the temporal and spatial axes. Experiments on a real-world dataset demonstrate that our model outperforms nine baseline models in flow and pressure series prediction in WDS.
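
A simplified sketch of the dual-stage idea (not the authors' exact hDS-RNN architecture) is shown below: attention over sensors weights the encoder inputs, and attention over the encoder's hidden states summarizes the history for a one-step forecast. Layer sizes and the single-layer attention forms are assumptions for illustration.

```python
# Simplified dual-stage (spatial + temporal) attention forecaster in PyTorch.
import torch
import torch.nn as nn

class DualStageAttentionForecaster(nn.Module):
    def __init__(self, n_sensors, hidden=32):
        super().__init__()
        self.spatial_attn = nn.Linear(n_sensors, n_sensors)  # scores per sensor
        self.encoder = nn.GRU(n_sensors, hidden, batch_first=True)
        self.temporal_attn = nn.Linear(hidden, 1)             # scores per timestep
        self.head = nn.Linear(hidden, 1)                      # one-step forecast

    def forward(self, x):                                      # x: (batch, time, sensors)
        weights = torch.softmax(self.spatial_attn(x), dim=-1)      # spatial attention
        states, _ = self.encoder(weights * x)                       # (batch, time, hidden)
        scores = torch.softmax(self.temporal_attn(states), dim=1)   # temporal attention
        context = (scores * states).sum(dim=1)                      # weighted summary
        return self.head(context)

# Usage: forecast the next flow value from 24 timesteps of 10 sensor readings.
model = DualStageAttentionForecaster(n_sensors=10)
history = torch.randn(8, 24, 10)
print(model(history).shape)  # torch.Size([8, 1])
```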


myDevices and éolane Join Forces to Help Wastewater Treatment Facilities Around the World Prevent Sewage Backups Caused by the COVID-19 Crisis

#artificialintelligence

LOS ANGELES--(BUSINESS WIRE)--Sewage systems are being impacted as consumers flush disinfectant wipes, paper towels, and napkins, an unintended consequence of the COVID-19 pandemic. Wipes get caught on misaligned pipe joints, choking sewer lines and wrapping around pump motors, causing clogs, excessive strain, and infrastructure damage. These clogs can result in overflows of raw sewage into local rivers and lakes and create backups into people's homes. Significant blockages often require municipal staff to clear them, at a time when efforts and tax dollars need to be focused on critical services. "Wastewater treatment facilities around the state already are reporting issues with their sewer management collection systems," the California State Water Board said in a statement.


Why next year could be a turning point for project management and AI

#artificialintelligence

Artificial Intelligence hasn't quite arrived in the project management sphere yet, but it's on its way. Gartner forecasts that 80 per cent of the work of today's project management discipline will be eliminated by 2030 as AI takes on traditional project management functions such as data collection, tracking and reporting. The same report highlights that programme and portfolio management (PPM) software is behind the times, and AI-enabled PPM is only just beginning to surface in the market. However, while some tasks will inevitably be automated, automation also opens up other opportunities for project managers. It's important to understand the difference between how AI-enabled automation can change project management and how AI-enabled insights drawn from massive databases can make a difference.


Microrobots made from pollen help remove toxic mercury from wastewater

New Scientist

Tiny robots made using pollen could one day be used to clean contaminated water. Wastewater from some factories contains mercury, a metal that can cause illness if consumed. There are techniques to remove mercury at water treatment plants, but they are time-consuming and expensive. Martin Pumera at the University of Chemistry and Technology, Prague, in the Czech Republic, and his colleagues are working on a low-cost alternative.