How does coronavirus testing work and will we have a home test soon?

New Scientist

Because the symptoms of covid-19 are similar to those of other diseases, testing is the only way to know for sure whether someone is infected with the coronavirus. Mass testing is therefore crucial to halting its spread. In the UK, a home test will apparently go on sale very soon. How do you test for coronavirus infection? At present, most tests work by looking for genetic sequences specific to the virus that causes covid-19.

Can AI Find a Cure for COVID-19?


The novel coronavirus has been circulating among humans for barely three months, but several biotech firms have already created drugs that target COVID-19. One of the secret weapons behind the fast response is artificial intelligence. The Chinese government was initially criticized for downplaying the severity of the coronavirus outbreak that originated in Wuhan last December. However, researchers around the world applauded the quick work of Chinese scientists in decoding the genetic sequence of the virus, dubbed SARS-CoV-2, and posting the results in a public database on January 10. Researchers quickly went to work.

How AI Is Helping Humanity Tackle the Coronavirus Crisis


Like any tool, technology can be used for both good and bad. And sometimes, that bad is inadvertent; tech in the form of airplanes helped expedite the spread of the coronavirus around the world. But fortunately, technology will also aid in stopping this pandemic. A few weeks ago, we wrote about how the Toronto-based company BlueDot utilized artificial intelligence (AI) to warn the general public about the dangers of COVID-19 well ahead of health officials.

Top Recent Research Papers On Time Series Modelling


Over the years, time series models have predominantly focused on individual series via local models. This changed with the popularisation of deep learning techniques, supported by the increased availability of temporal data, which led to many deep learning-based time series algorithms. Because of their natural temporal ordering, time series arise in almost any setting where data are recorded sequentially. From electronic health records and human activity recognition to acoustic scene classification and cyber-security, time series are encountered in many real-world applications.
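The local-versus-global distinction can be made concrete with a toy sketch. Below, each of several related AR(1) series gets its own fitted coefficient (local models), and a single pooled coefficient is fitted across all of them (a minimal global model). All data and coefficients are synthetic illustrations; real global models, and the deep learning approaches the papers survey, share far richer structure than a single pooled parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: five related AR(1) series (all coefficients are illustrative)
n_series, T = 5, 200
true_phi = rng.uniform(0.5, 0.8, size=n_series)
series = np.zeros((n_series, T))
for i in range(n_series):
    for t in range(1, T):
        series[i, t] = true_phi[i] * series[i, t - 1] + rng.normal(scale=0.1)

def fit_ar1(x):
    """Least-squares AR(1) coefficient for one series (a local model)."""
    return float(x[:-1] @ x[1:] / (x[:-1] @ x[:-1]))

# Local modelling: one coefficient fitted per series
local_phis = [fit_ar1(s) for s in series]

# Global modelling: one coefficient shared across all series (pooled regression),
# the kind of cross-series learning that deep models take much further
xs = np.concatenate([s[:-1] for s in series])
ys = np.concatenate([s[1:] for s in series])
global_phi = float(xs @ ys / (xs @ xs))
```

The global fit lands somewhere between the individual coefficients: it borrows strength across series at the cost of per-series flexibility, which is exactly the trade-off deep global models try to resolve.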

How to generate text: using different decoding methods for language generation with Transformers


In recent years, there has been increasing interest in open-ended language generation thanks to the rise of large transformer-based language models trained on millions of webpages, such as OpenAI's famous GPT-2 model. The results on conditioned open-ended language generation are impressive. Besides the improved transformer architecture and massive unsupervised training data, better decoding methods have also played an important role. This blog post gives a brief overview of different decoding strategies and, more importantly, shows how you can implement them with very little effort using the popular transformers library! All of the following functionalities can be used for auto-regressive language generation.
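The core decoding strategies the post surveys can be sketched without the library itself. Below is a minimal numpy sketch of greedy, top-k, and nucleus (top-p) selection over one toy next-token distribution; the vocabulary size and probabilities are invented for illustration. In the transformers library, the same strategies are chosen via arguments to model.generate (e.g. do_sample, top_k, top_p).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy next-token distribution over a 10-token vocabulary
# (a stand-in for one step of a language model's softmax output)
probs = np.array([0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.05, 0.03, 0.015, 0.005])

def greedy(p):
    """Greedy decoding: always pick the most likely token."""
    return int(np.argmax(p))

def top_k_sample(p, k):
    """Top-k sampling: renormalise over the k most likely tokens, then sample."""
    top = np.argsort(p)[::-1][:k]
    q = p[top] / p[top].sum()
    return int(rng.choice(top, p=q))

def top_p_sample(p, top_p):
    """Nucleus (top-p) sampling: keep the smallest set of most likely tokens
    whose cumulative probability reaches top_p, renormalise, then sample."""
    order = np.argsort(p)[::-1]
    cum = np.cumsum(p[order])
    cutoff = int(np.searchsorted(cum, top_p)) + 1
    keep = order[:cutoff]
    q = p[keep] / p[keep].sum()
    return int(rng.choice(keep, p=q))
```

Repeating the chosen step for each position, feeding the sampled token back in, gives auto-regressive generation; greedy always returns the same token, while the sampling strategies trade determinism for diversity.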

Semantic Search: Theory And Implementation


It took me a long time to realise that search is the biggest problem in NLP. Just look at Google, Amazon and Bing: these are multi-billion-dollar businesses made possible by their powerful search engines. My initial thoughts on search centred on unsupervised ML, but participating in the Microsoft Hackathon 2018 for Bing showed me the various ways a search engine can be built with deep learning.
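The skeleton of semantic search is simple: embed the query and every document into a shared vector space, then rank by similarity. Here is a minimal sketch using an L2-normalised bag-of-words vector as a deterministic stand-in for the neural sentence embeddings a real system would use; the corpus and query are invented for illustration.

```python
import numpy as np

# Toy corpus; a real semantic search engine would use neural sentence embeddings
docs = [
    "how to train a neural network",
    "best pasta recipes from italy",
    "deep learning for search ranking",
]
query = "neural network learning"

# Shared vocabulary over corpus and query (stand-in for a learned embedding space)
vocab = sorted({w for text in docs + [query] for w in text.lower().split()})

def embed(text):
    """L2-normalised bag-of-words vector over the shared vocabulary."""
    words = text.lower().split()
    v = np.array([words.count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

def search(q, documents):
    """Rank documents by cosine similarity to the query embedding."""
    qv = embed(q)
    scores = [float(qv @ embed(d)) for d in documents]
    return sorted(range(len(documents)), key=lambda i: -scores[i])

ranking = search(query, docs)  # indices of documents, most relevant first
```

Swapping the bag-of-words embed for a neural encoder turns this lexical matcher into true semantic search: queries then retrieve documents that share meaning rather than exact words.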

Tight Dimension Independent Lower Bound on the Expected Convergence Rate for Diminishing Step Sizes in SGD

Neural Information Processing Systems

We study the convergence of Stochastic Gradient Descent (SGD) for strongly convex objective functions. We prove, for all $t$, a lower bound on the expected convergence rate after the $t$-th SGD iteration; the lower bound holds over all possible sequences of diminishing step sizes. It implies that the sequences of step sizes recently proposed at ICML 2018 and ICML 2019 are {\em universally} close to optimal, in that the expected convergence rate after {\em each} iteration is within a factor of $32$ of our lower bound. This factor is independent of the dimension $d$. We offer a framework for comparison with lower bounds in the state-of-the-art literature; when applied to SGD for strongly convex objective functions, our lower bound is a significant factor of $775\cdot d$ larger than in existing work.
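The regime the abstract analyses can be illustrated with a toy run, not the paper's analysis: SGD on a strongly convex quadratic with a diminishing $O(1/t)$ step size schedule, under which the expected squared distance to the optimum decays like $O(1/t)$. All constants (the objective, noise scale, and horizon) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Strongly convex toy objective f(w) = (mu/2) * ||w||^2, optimum w* = 0.
# mu, the noise scale, and T are illustrative assumptions.
mu, d, T = 1.0, 10, 5000
w = np.full(d, 5.0)                      # start away from the optimum

for t in range(1, T + 1):
    grad = mu * w + rng.normal(scale=0.5, size=d)   # unbiased stochastic gradient
    eta = 2.0 / (mu * (t + 1))                      # diminishing O(1/t) step size
    w -= eta * grad

dist_sq = float(w @ w)   # squared distance to the optimum after T iterations
```

After a few thousand iterations the iterate sits in a small noise-dominated ball around the optimum; lower bounds of the kind the paper proves say no sequence of diminishing step sizes can shrink this ball fundamentally faster.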

Researchers Composed New Protein Based on Sonification Using Deep Learning


Proteins are of utmost importance in the human body and are considered the building blocks of life. Scientists have long studied their properties and functionalities in order to improve proteins and design completely new ones that perform new functions and processes. Recently, researchers in the United States and Taiwan explored how to create new proteins by using machine learning to translate protein structures into musical scores, presenting an unusual way to translate physics concepts across disparate domains, noted APL Bioengineering. A deep learning model was employed to design de novo proteins, based on the interplay of elementary building blocks via hierarchical patterns.
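The basic idea of sonification, translating a protein sequence into notes, can be shown with a toy mapping. The pitch assignment below is purely illustrative and is not the scheme used in the APL Bioengineering study, which worked from protein structure, not a flat one-letter-to-pitch table.

```python
# Toy sonification: map each amino acid in a sequence to a MIDI note number.
# The mapping is an illustrative assumption, not the study's actual scheme.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids, one-letter codes
BASE_MIDI_NOTE = 48                    # C3, an arbitrary starting pitch

def sonify(sequence):
    """Translate a protein sequence into a list of MIDI note numbers."""
    pitch = {aa: BASE_MIDI_NOTE + i for i, aa in enumerate(AMINO_ACIDS)}
    return [pitch[aa] for aa in sequence if aa in pitch]

notes = sonify("MKTAYIAK")  # a short, hypothetical example sequence
```

Once sequences become scores, the reverse direction opens up: a model trained on such scores can generate new "music" that decodes back into candidate protein sequences, which is the interplay the researchers exploited.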

Online Submodular Set Cover, Ranking, and Repeated Active Learning

Neural Information Processing Systems

We propose an online prediction version of submodular set cover with connections to ranking and repeated active learning. In each round, the learning algorithm chooses a sequence of items. The algorithm then receives a monotone submodular function and suffers loss equal to the cover time of the function: the number of items needed, when items are selected in order of the chosen sequence, to achieve a coverage constraint. We develop an online learning algorithm whose loss converges to approximately that of the best sequence in hindsight. Our proposed algorithm is readily extended to a setting where multiple functions are revealed at each round and to bandit and contextual bandit settings.
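The loss the abstract defines, the cover time of a sequence, is easy to compute for a concrete function. Below is a small sketch: the set-cover instance and the convention of charging length-plus-one when the constraint is never met are illustrative assumptions, not details from the paper.

```python
def cover_time(sequence, f, threshold):
    """Number of items, taken in the sequence's order, needed for the monotone
    submodular function f to reach the coverage threshold. Returns
    len(sequence) + 1 if the threshold is never reached (an assumed convention)."""
    chosen = set()
    for i, item in enumerate(sequence, start=1):
        chosen.add(item)
        if f(chosen) >= threshold:
            return i
    return len(sequence) + 1

# Example instance: classic set cover, where f(S) counts covered elements
universe_sets = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {5},
}

def coverage(S):
    return len(set().union(*(universe_sets[s] for s in S))) if S else 0

t = cover_time(["a", "b", "c"], coverage, threshold=5)
```

An online algorithm of the kind the paper proposes would replay this computation each round against a newly revealed function, adjusting its sequence so the cumulative cover time approaches that of the best fixed sequence in hindsight.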

Memory-Efficient Backpropagation Through Time

Neural Information Processing Systems

We propose a novel approach to reducing the memory consumption of the backpropagation through time (BPTT) algorithm when training recurrent neural networks (RNNs). Our approach uses dynamic programming to balance the trade-off between caching intermediate results and recomputing them. The algorithm fits tightly within almost any user-set memory budget while finding an optimal execution policy that minimizes computational cost. Computational devices have limited memory capacity, and maximizing computational performance under a fixed memory budget is a practical use case. We provide asymptotic computational upper bounds for various regimes.
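The caching-versus-recomputation trade-off at the heart of the paper can be sketched on a toy linear RNN. The version below uses fixed-interval checkpointing rather than the paper's dynamic-programming policy: the forward pass stores only every k-th hidden state, and the backward pass recomputes each segment from its checkpoint, cutting memory from O(T) to roughly O(T/k + k) at the cost of a second forward pass.

```python
def forward(h0, xs, w):
    """Toy linear RNN h_t = w * h_{t-1} + x_t; returns all hidden states."""
    hs = [h0]
    for x in xs:
        hs.append(w * hs[-1] + x)
    return hs

def bptt_full(h0, xs, w):
    """Standard BPTT for loss L = h_T: caches every hidden state (O(T) memory)."""
    hs = forward(h0, xs, w)
    grad_w, grad_h = 0.0, 1.0          # dL/dh_T = 1
    for t in range(len(xs), 0, -1):
        grad_w += grad_h * hs[t - 1]   # dh_t/dw = h_{t-1}
        grad_h *= w                    # dh_t/dh_{t-1} = w
    return grad_w

def bptt_checkpointed(h0, xs, w, every=4):
    """Checkpointed BPTT: cache every `every`-th state on the forward pass,
    then recompute each segment from its checkpoint during the backward pass."""
    T = len(xs)
    checkpoints, h = {0: h0}, h0
    for t, x in enumerate(xs, start=1):
        h = w * h + x
        if t % every == 0:
            checkpoints[t] = h
    grad_w, grad_h = 0.0, 1.0
    for seg_start in sorted(checkpoints, reverse=True):
        seg_end = min(seg_start + every, T)
        if seg_end == seg_start:
            continue                   # checkpoint at the final step: nothing to redo
        seg_hs = forward(checkpoints[seg_start], xs[seg_start:seg_end], w)
        for t in range(seg_end, seg_start, -1):
            grad_w += grad_h * seg_hs[t - seg_start - 1]
            grad_h *= w
    return grad_w

xs = [0.5, -0.2, 0.1, 0.3, -0.4, 0.2, 0.7]
g_full = bptt_full(1.0, xs, 0.9)
g_ckpt = bptt_checkpointed(1.0, xs, 0.9, every=3)
```

Both routines produce the same gradient; the paper's contribution is to choose which states to cache optimally for a given budget, rather than at a fixed interval as in this sketch.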