The rise of AI in medicine

#artificialintelligence

By now, it's almost old news that artificial intelligence (AI) will have a transformative role in medicine. Algorithms can work tirelessly, at faster rates, and now with potentially greater accuracy than clinicians. In 2016, it was predicted that 'machine learning will displace much of the work of radiologists and anatomical pathologists'. In the same year, a University of Toronto professor controversially announced that 'we should stop training radiologists now'. But is it really the beginning of the end for some medical specialties?


Children with autism saw their learning and social skills boosted after playing with this AI robot

#artificialintelligence

Scientists who designed an artificially intelligent robot that helped children with autism improve their learning and social skills hope such technology could one day aid others with the developmental disorder. The study saw seven children with mild to moderate autism take home a socially assistive robot, named Kiwi, for a month. According to a statement by the University of Southern California, where the team is based, the participants from the Los Angeles area were aged between three and seven years old and played space-themed games with the robot almost daily. Because Kiwi was fitted with machine-learning technology, it was able to provide feedback and instructions tailored to each child's abilities. For instance, if a child got a question wrong, Kiwi would give prompts to help them solve it and would adjust the difficulty level to challenge the child appropriately.
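The article doesn't publish Kiwi's adaptation logic, but the behavior it describes, prompting after wrong answers and retuning difficulty to the child's recent performance, can be sketched as a simple feedback policy. The sketch below is purely illustrative: the DifficultyAdapter class, its thresholds, and its window size are assumptions, not the USC team's implementation.

from collections import deque

class DifficultyAdapter:
    """Toy adaptive-difficulty policy: raise the level after a streak of
    correct answers, lower it (and offer a hint) after repeated misses.
    Thresholds and window size are invented for illustration."""

    def __init__(self, level=1, min_level=1, max_level=5, window=5):
        self.level = level
        self.min_level = min_level
        self.max_level = max_level
        self.history = deque(maxlen=window)  # recent correct/incorrect flags

    def record(self, correct: bool) -> str:
        """Record one answer and return the robot's next action."""
        self.history.append(correct)
        if not correct:
            if list(self.history)[-2:] == [False, False] and self.level > self.min_level:
                self.level -= 1  # two misses in a row: ease off
            return f"hint (stay at level {self.level})"
        if sum(self.history) >= 4 and self.level < self.max_level:
            self.level += 1  # mostly correct lately: step up
            self.history.clear()
        return f"praise (next question at level {self.level})"

adapter = DifficultyAdapter()
for answer in [True, True, True, True, False, False]:
    print(adapter.record(answer))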


Deep Learning in Mining Biological Data

arXiv.org Machine Learning

Recent technological advances in data acquisition tools have allowed life scientists to acquire multimodal data from different biological application domains. Broadly categorized into three types (i.e., sequences, images, and signals), these data are vast in volume and complex in nature. Mining such an enormous amount of data for pattern recognition is a big challenge and requires sophisticated data-intensive machine learning techniques. Artificial neural network-based learning systems are well known for their pattern recognition capabilities, and lately their deep architectures, known as deep learning (DL), have been successfully applied to solve many complex pattern recognition problems. Highlighting the role of DL in recognizing patterns in biological data, this article provides: applications of DL to biological sequence, image, and signal data; an overview of open access sources of these data; a description of open source DL tools applicable to these data; and a comparison of these tools from qualitative and quantitative perspectives. Finally, it outlines some open research challenges in mining biological data and puts forward a number of possible future perspectives.
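The survey itself is tool-agnostic, but the canonical setup it describes for sequence data, a one-dimensional convolutional network over one-hot-encoded nucleotides, can be sketched in a few lines. Everything below (the shapes, layer sizes, and the binary motif-detection task) is an illustrative assumption, not an example taken from the article.

import torch
import torch.nn as nn

# Illustrative task: classify fixed-length DNA sequences (one-hot A/C/G/T)
# as containing a motif or not -- a typical DL-on-biological-sequences setup.
class SequenceCNN(nn.Module):
    def __init__(self, seq_len=100, n_filters=32, kernel=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, n_filters, kernel),   # 4 input channels: A, C, G, T
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),           # keep the strongest motif response
            nn.Flatten(),
            nn.Linear(n_filters, 1),           # binary logit
        )

    def forward(self, x):                      # x: (batch, 4, seq_len)
        return self.net(x)

model = SequenceCNN()
fake_batch = torch.randn(16, 4, 100)           # random stand-in for one-hot data
print(model(fake_batch).shape)                 # torch.Size([16, 1])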


Industry News

#artificialintelligence

Find here a listing of the latest industry news in genomics, genetics, precision medicine, and beyond. Updates are provided on a monthly basis. As 2019 came to an end, Veritas Genetics, the start-up that can sequence a human genome for less than $600, struggled to raise funding due to concerns that it had previously taken money from China; it has ceased US operations, will continue Veritas Europe and Latin America, and is in talks with potential buyers. The GenomeAsia 100K Project announced its pilot phase, with hopes of tackling the underrepresentation of non-Europeans in human genetic studies and enabling genetic discoveries across Asia. Illumina loses DNA sequencing patents: The European Patent ...


ADEPOS: A Novel Approximate Computing Framework for Anomaly Detection Systems and its Implementation in 65nm CMOS

arXiv.org Machine Learning

To overcome the energy and bandwidth limitations of traditional IoT systems, edge computing, i.e., information extraction at the sensor node, has become popular. However, it is now important to create very low-energy information extraction or pattern recognition systems. In this paper, we present an approximate computing method to reduce the computation energy of a specific type of IoT system used for anomaly detection (e.g., predictive maintenance or epileptic seizure detection). Termed Anomaly Detection Based Power Savings (ADEPOS), our proposed method uses low-precision computing and low-complexity neural networks at the beginning, when healthy data are easy to distinguish. On the detection of potential anomalies, however, the complexity of the network and the computing precision are adaptively increased for accurate predictions. We show that ensemble approaches are well suited to adaptively changing the network size. To validate the proposed scheme, a chip has been fabricated in a UMC 65nm process that includes an MSP430 microprocessor along with an on-chip switching-mode DC-DC converter for dynamic voltage and frequency scaling. Using the NASA bearing dataset for machine health monitoring, we show that ADEPOS achieves an 8.95x energy saving over the machine's lifetime without losing any detection accuracy. The energy savings are obtained by reducing the execution time of the neural network on the microprocessor.
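The core idea, start with a cheap detector and escalate to larger ensemble members only when the cheap one is unsure, can be illustrated independently of the fabricated chip. The following is a software-only sketch with invented thresholds and a stand-in anomaly scorer; it is not the authors' on-chip implementation.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in "ensemble": progressively larger tied-weight autoencoder-like
# scorers whose reconstruction error serves as an anomaly score. In ADEPOS
# the members would be neural networks run at increasing size/precision.
def make_scorer(dim_in, dim_hidden):
    W = rng.normal(size=(dim_in, dim_hidden)) / np.sqrt(dim_in)
    def score(x):
        h = np.tanh(x @ W)          # cheap encode
        x_hat = h @ W.T             # decode with tied weights
        return float(np.mean((x - x_hat) ** 2))
    return score

ensemble = [make_scorer(16, h) for h in (2, 4, 8)]   # small -> large
LOW, HIGH = 0.5, 1.5                                  # invented thresholds

def adepos_style_detect(x):
    """Evaluate members one at a time; stop early when confident."""
    for i, scorer in enumerate(ensemble):
        s = scorer(x)
        if s < LOW:                 # clearly healthy: stop early, save energy
            return False, i + 1
        if s > HIGH:                # clearly anomalous: stop early, flag it
            return True, i + 1
    return s > (LOW + HIGH) / 2, len(ensemble)        # undecided after all members

healthy = rng.normal(scale=0.1, size=16)
print(adepos_style_detect(healthy))   # likely (False, 1): one cheap member sufficed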


How AI is Changing the Way We Treat Diseases and Disabilities

#artificialintelligence

The age of artificial intelligence is allowing us to rethink the way we treat diseases and disabilities. Beyond aiding medical diagnosis, the combination of AI and big data, coupled with biological delivery systems such as gene therapy, could significantly alter the way we treat a host of conditions that are, according to modern science, incurable: cancer, autism, some mental illnesses, and rare genetic diseases. Specifically, combining AI, big data, robotics, gene therapy, and medical research has opened up a host of possibilities for treating these conditions. At the same time, these combined innovations are helping people with disabilities live better lives. Here's an overview of some of these advances as we move into the new year.


Conditional Hierarchical Bayesian Tucker Decomposition

arXiv.org Machine Learning

Our research focuses on studying and developing methods for reducing the dimensionality of large datasets, which are common in biomedical applications. A major problem when learning information about patients from genetic sequencing data is that there are often more feature variables (genetic data) than observations (patients), which makes direct supervised learning difficult. One way of reducing the feature space is to use latent Dirichlet allocation to group genetic variants in an unsupervised manner. Latent Dirichlet allocation is a common model in natural language processing which describes a document as a mixture of topics, each with a probability of generating certain words. This can be generalized as a Bayesian tensor decomposition to account for multiple feature variables. While we made some progress improving and modifying these methods, our significant contributions are in hierarchical topic modeling. We developed distinct methods of incorporating hierarchical topic modeling, based on nested Chinese restaurant processes and the Pachinko Allocation Machine, into Bayesian tensor decompositions. We apply these models to predict whether patients have autism spectrum disorder based on genetic sequencing data. We examine a dataset from the National Database for Autism Research consisting of paired siblings, one with autism and the other without, and counts of their genetic variants. Additionally, we linked the genes with their Reactome biological pathways. We combine this information into a tensor of patients, counts of their genetic variants, and the membership of these genes in pathways. Once we decompose this tensor, we use logistic regression on the reduced features to predict whether patients have autism. We also perform a similar analysis on a dataset of patients with one of four common types of cancer (breast, lung, prostate, and colorectal).
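As a rough illustration of the final pipeline, decompose the patients x variants x pathways tensor and feed the reduced patient factors to logistic regression, here is a sketch using a plain (non-Bayesian, non-hierarchical) Tucker decomposition from the tensorly library as a stand-in for the paper's model. The shapes, ranks, and labels below are invented for illustration.

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for the paper's tensor: patients x genetic variants x pathways.
n_patients, n_variants, n_pathways = 60, 40, 10
X = tl.tensor(rng.poisson(1.0, size=(n_patients, n_variants, n_pathways)).astype(float))
y = rng.integers(0, 2, size=n_patients)        # fake autism / sibling labels

# Plain Tucker decomposition (the paper uses a hierarchical Bayesian variant).
core, factors = tucker(X, rank=[5, 4, 3])
patient_factors = factors[0]                   # (n_patients, 5) reduced features

# Supervised step: logistic regression on the reduced patient representation.
clf = LogisticRegression(max_iter=1000).fit(patient_factors, y)
print("train accuracy:", clf.score(patient_factors, y))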


Epileptic Seizure Prediction becomes much easier with the new Artificial Intelligence technology

#artificialintelligence

Recently, Hisham Daoud and Magdy Bayoumi of the University of Louisiana at Lafayette introduced a new artificial intelligence (AI) system that predicts epileptic seizures. According to the World Health Organization, around 50 million people worldwide suffer from epilepsy, and 70% of those patients can control their seizures with medication. The new AI technology reports 99.6% accuracy, and, notably, it predicts attacks up to an hour before they happen. This gives patients time to prepare and to take medication that can prevent the seizure from occurring; having enough warning to control an attack is exactly what a patient needs.


Researchers develop an AI system with near-perfect seizure prediction

#artificialintelligence

While it's not a complete fix, the new AI system, developed by Hisham Daoud and Magdy Bayoumi of the University of Louisiana at Lafayette, is a major leap forward from existing prediction methods. Currently, other methods analyze brain activity with an EEG (electroencephalogram) test and apply a predictive model afterwards. The new method does both of those things at once, with the help of one deep learning algorithm that maps brain activity and another that predicts which electrical channels will light up during a seizure. It'll still be some time before this technique is available for widespread use, as the team is now working on a custom chip that can process the necessary algorithms, but it could be life-changing news for patients with epilepsy.
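The article describes the system only at a high level: one deep model extracts features from multichannel EEG while another predicts channel activity, trained jointly. As a generic stand-in, the sketch below classifies fixed-length EEG windows with a small convolutional network; the channel count, window length, and architecture are assumptions, not Daoud and Bayoumi's design.

import torch
import torch.nn as nn

# Generic stand-in: classify 5-second windows of 22-channel EEG as
# "pre-seizure" vs "normal". All shapes and hyperparameters are invented.
class EEGWindowClassifier(nn.Module):
    def __init__(self, channels=22, samples=1280):   # e.g. 5 s at 256 Hz
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=16, stride=4),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # pool over time
            nn.Flatten(),
        )
        self.head = nn.Linear(64, 2)                 # pre-seizure vs normal logits

    def forward(self, x):                            # x: (batch, channels, samples)
        return self.head(self.features(x))

model = EEGWindowClassifier()
window = torch.randn(8, 22, 1280)                    # fake EEG batch
print(model(window).shape)                           # torch.Size([8, 2])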

