Making Deep Learning Model Intelligent with Synthetic Neurons

#artificialintelligence

Deep learning, a subset of the broad field of AI, uses computational models with multiple processing layers to learn representations of data at manifold levels of abstraction, imitating how the human brain senses and understands multimodal information. Over the last few years, deep learning models have been shown to outperform conventional machine learning techniques in diverse fields. A team of researchers from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals such as threadworms. This new AI-powered system is said to be able to control a vehicle with just a few synthetic neurons. According to the researchers, the system has decisive advantages over previous deep learning models.


New deep learning models: Fewer neurons, more intelligence

#artificialintelligence

An international research team from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals, such as threadworms. This novel AI system can control a vehicle with just a few artificial neurons. The team says the system has decisive advantages over previous deep learning models: it copes much better with noisy input, and, because of its simplicity, its mode of operation can be explained in detail. It does not have to be regarded as a complex "black box"; it can be understood by humans. This new deep learning model has now been published in the journal Nature Machine Intelligence.


Ten Research Challenge Areas in Data Science · Harvard Data Science Review

#artificialintelligence

To drive progress in the field of data science, we propose 10 challenge areas for the research community to pursue. Since data science is broad, with methods drawing from computer science, statistics, and other disciplines, and with applications appearing in all sectors, these challenge areas speak to the breadth of issues spanning science, technology, and society. We preface our enumeration with meta-questions about whether data science is a discipline. We then describe each of the 10 challenge areas. The goal of this article is to start a discussion on what could constitute a basis for a research agenda in data science, while recognizing that the field of data science is still evolving. Although data science builds on knowledge from computer science, engineering, mathematics, statistics, and other disciplines, data science is a unique field with many mysteries to unlock: fundamental scientific questions and pressing problems of societal importance.


Deep learning enables identification and optimization of RNA-based tools for myriad applications

#artificialintelligence

DNA and RNA have been compared to "instruction manuals" containing the information needed for living "machines" to operate. But while electronic machines like computers and robots are designed from the ground up to serve a specific purpose, biological organisms are governed by a much messier, more complex set of functions that lack the predictability of binary code. Inventing new solutions to biological problems requires teasing apart seemingly intractable variables--a task that is daunting to even the most intrepid human brains. Two teams of scientists from the Wyss Institute at Harvard University and the Massachusetts Institute of Technology have devised pathways around this roadblock by going beyond human brains; they developed a set of machine learning algorithms that can analyze reams of RNA-based "toehold" sequences and predict which ones will be most effective at sensing and responding to a desired target sequence. As reported in two papers published concurrently today in Nature Communications, the algorithms could be generalizable to other problems in synthetic biology as well, and could accelerate the development of biotechnology tools to improve science and medicine and help save lives.


Going Beyond Human Brains: Deep Learning Takes On Synthetic Biology

#artificialintelligence

Work by Wyss Core Faculty member Peng Yin in collaboration with Collins and others has demonstrated that different toehold switches can be combined to compute the presence of multiple "triggers," similar to a computer's logic board.


Machine Learning Helps Plasma Physics Researchers Understand Turbulence Transport - XSEDE

#artificialintelligence

For more than four decades, University of California, San Diego, Professor of Physics Patrick H. Diamond and his research group have been advancing our understanding of fundamental concepts in plasma physics. Most recently, Diamond worked with graduate student Robin Heinonen on a model reduction study that used the Extreme Science and Engineering Discovery Environment (XSEDE)-allocated Comet supercomputer at the San Diego Supercomputer Center at UC San Diego to showcase how machine learning produced a new model for plasma turbulence. Plasmas have many applications, including fusion energy. When light nuclei fuse together, the mass of the products is less than that of the reactants, and the missing mass becomes energy, as described by Albert Einstein's famous equation E = mc². For this to occur, temperatures must literally reach astronomical levels, such as those found in the Sun's core.
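The mass-defect arithmetic behind that equation can be made concrete with a short calculation. The sketch below applies E = mc² to the deuterium-tritium reaction commonly cited for fusion energy; the isotope masses are standard published values, and the code is an illustrative aside, not part of the study described above.

```python
# A minimal sketch of the mass-energy bookkeeping described above,
# using the deuterium-tritium reaction (D + T -> He-4 + n) as an example.
# Atomic masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.

U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit

masses = {
    "deuterium": 2.014102,
    "tritium": 3.016049,
    "helium4": 4.002602,
    "neutron": 1.008665,
}

# Mass defect: total reactant mass minus total product mass
defect = (masses["deuterium"] + masses["tritium"]) - (
    masses["helium4"] + masses["neutron"]
)

# E = mc^2, with c^2 folded into the u -> MeV unit conversion
energy_mev = defect * U_TO_MEV
print(f"Mass defect: {defect:.6f} u")
print(f"Energy released: {energy_mev:.1f} MeV")  # ~17.6 MeV per fusion event
```

The missing ~0.019 u of mass per reaction is what emerges as the roughly 17.6 MeV carried off by the helium nucleus and the neutron.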


10 Best Machine Learning Courses in 2020 - KDnuggets

#artificialintelligence

Taught by: Rachel Thomas, an American computer scientist and founding Director of the Center for Applied Data Ethics at the University of San Francisco. Together with Jeremy Howard, she is co-founder of fast.ai. Course Outcomes: This course is a hands-on introduction to NLP in which, as the name suggests, you first code a practical NLP application and then gradually dig into the underlying theory. Applications covered include topic modeling, classification (identifying whether the sentiment of a review is positive or negative), language modeling, and translation. The course teaches a blend of traditional NLP topics (including regex, SVD, naïve Bayes, and tokenization) and recent neural network approaches (including RNNs, seq2seq, attention, and the transformer architecture), and also addresses urgent ethical issues such as bias and disinformation.
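As a taste of one of the traditional topics listed above, here is a minimal naïve Bayes sentiment classifier; the reviews and vocabulary are invented toy examples, not course materials.

```python
# A toy illustration of naive Bayes sentiment classification:
# pick the class maximizing log P(class) + sum of log P(word | class).
import math
from collections import Counter

train = [
    ("great movie loved it", "pos"),
    ("wonderful acting great plot", "pos"),
    ("terrible movie hated it", "neg"),
    ("boring plot terrible acting", "neg"),
]

# Per-class word frequencies and class priors
word_counts = {"pos": Counter(), "neg": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def predict(text):
    """Return the most probable class for a piece of text."""
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out the product
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("loved the great acting"))  # -> pos
```

The course goes well beyond this bag-of-words baseline, but the same train/predict structure recurs in the neural approaches it covers.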


The Future of AI Part 1

#artificialintelligence

It was reported that venture capital investment in AI-related startups increased significantly in 2018, jumping by 72% compared to 2017, with 466 startups funded, down from 533 in 2017. The PwC MoneyTree report stated that seed-stage deal activity in the US among AI-related companies rose to 28% in the fourth quarter of 2018, compared to 24% in the three months prior, while expansion-stage deal activity jumped to 32%, from 23%. There will be increasing international rivalry over global leadership in AI. President Putin of Russia was quoted as saying that "the nation that leads in AI will be the ruler of the world." Billionaire Mark Cuban was reported in CNBC as stating that "the world's first trillionaire would be an AI entrepreneur."


Hierarchical Pre-training for Sequence Labelling in Spoken Dialog

arXiv.org Artificial Intelligence

Sequence labelling tasks like Dialog Act and Emotion/Sentiment identification are a key component of spoken dialog systems. In this work, we propose a new approach to learn generic representations adapted to spoken dialog, which we evaluate on a new benchmark we call the Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (SILICONE). SILICONE is model-agnostic and contains 10 different datasets of various sizes. We obtain our representations with a hierarchical encoder based on transformer architectures, for which we extend two well-known pre-training objectives. Pre-training is performed on OpenSubtitles, a large corpus of spoken dialog containing over 2.3 billion tokens. We demonstrate that hierarchical encoders achieve competitive results with consistently fewer parameters compared to state-of-the-art models, and we show their importance for both pre-training and fine-tuning.
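The hierarchical-encoder idea can be sketched in a few lines: a low-level encoder summarizes each utterance into a vector, then a high-level encoder contextualizes those vectors across the dialog. Mean pooling and single-head self-attention below are simplified stand-ins for the paper's transformer encoders; the vocabulary, dimensions, and data are all illustrative assumptions.

```python
# A minimal NumPy sketch of a two-level (hierarchical) dialog encoder.
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)
vocab = {w: i for i, w in enumerate("hello how are you fine thanks bye".split())}
emb = rng.normal(size=(len(vocab), d))  # toy word embeddings

def encode_utterance(tokens):
    """Low-level encoder: pool word embeddings into one utterance vector."""
    vecs = np.stack([emb[vocab[t]] for t in tokens])
    return vecs.mean(axis=0)

def encode_dialog(utterances):
    """High-level encoder: self-attention over the utterance vectors."""
    U = np.stack([encode_utterance(u) for u in utterances])  # (n_utt, d)
    scores = U @ U.T / np.sqrt(d)                            # attention logits
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ U  # contextualized representations, one per utterance

dialog = [["hello", "how", "are", "you"], ["fine", "thanks"], ["bye"]]
reps = encode_dialog(dialog)
print(reps.shape)  # one contextual vector per utterance: (3, 8)
```

In the actual model, each pooled or attended step is a full transformer stack, and the per-utterance outputs feed a sequence-labelling head that tags every utterance with a dialog act or emotion label.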