Stroke currently ranks as the second most common cause of death and the second most common cause of disability worldwide. Motor deficits of the upper extremity (hemiparesis) are the most common and debilitating consequences of stroke, affecting around 80% of patients. These deficits limit the accomplishment of daily activities, restrict social participation, cause significant emotional distress, and profoundly degrade quality of life. Stroke rehabilitation aims to improve and maintain functional ability through restitution, substitution, and compensation of functions. Recovery from motor deficits and improvements in motor function typically occur during the first months following a stroke, and major efforts are therefore devoted to this acute stage.
Models for learning probability distributions, such as generative models and density estimators, behave quite differently from models for learning functions. One example is the memorization phenomenon, namely the eventual convergence to the empirical distribution, that occurs in generative adversarial networks (GANs). For this reason, the issue of generalization is more subtle than it is for supervised learning. For the bias potential model, we show that dimension-independent generalization accuracy is achievable if early stopping is adopted, even though, in the long term, the model either memorizes the samples or diverges.
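To make the early-stopping remedy concrete, here is a minimal, self-contained sketch in the spirit of a potential-based density model. It is not the paper's construction or analysis: the one-dimensional setting, the MLP potential, the quadrature grid, the Gaussian-mixture target, and all hyperparameters are illustrative assumptions. The held-out negative log-likelihood falls during an early phase; stopping when it turns upward avoids the long-run regime in which the potential concentrates on the training samples.

```python
# Illustrative sketch only: a 1D density model p(x) = exp(-V(x)) / Z with V a
# small MLP and Z computed by quadrature on a fixed grid. Early stopping on a
# held-out set halts training before the model starts to memorize.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample(n):
    # Samples from an (unknown to the model) two-component Gaussian mixture.
    comp = torch.randint(0, 2, (n,))
    return torch.where(comp == 0, torch.randn(n) * 0.5 - 2.0, torch.randn(n) * 0.5 + 2.0)

x_train, x_val = sample(200), sample(200)

V = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # potential network
grid = torch.linspace(-8.0, 8.0, 1001).unsqueeze(1)               # quadrature grid
dx = 16.0 / 1000

def neg_log_lik(x):
    # log Z approximated by a Riemann sum over the grid.
    log_z = torch.logsumexp(-V(grid).squeeze(1), dim=0) + math.log(dx)
    return (V(x.unsqueeze(1)).squeeze(1) + log_z).mean()

opt = torch.optim.Adam(V.parameters(), lr=1e-2)
best_val, patience, bad_steps = float("inf"), 50, 0
for step in range(5000):
    opt.zero_grad()
    loss = neg_log_lik(x_train)
    loss.backward()
    opt.step()
    with torch.no_grad():
        val = neg_log_lik(x_val).item()
    # Early stopping: halt when the held-out likelihood stops improving,
    # before the potential starts carving spikes at the training samples.
    if val < best_val - 1e-4:
        best_val, bad_steps = val, 0
    else:
        bad_steps += 1
        if bad_steps > patience:
            print(f"stopped at step {step}, val NLL {best_val:.3f}")
            break
```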
Doctors are often deluged with signals to keep track of: charts, test results, and other metrics. It can be difficult to integrate and monitor all of this data for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals. In a new pair of papers, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions. One team created a machine-learning approach called "ICU Intervene" that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses deep learning to make real-time predictions, learning from past ICU cases to suggest critical-care treatments while also explaining the reasoning behind those decisions.
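The article gives no implementation details, so the following is purely a shape illustration of the kind of model it describes: a recurrent network that reads a patient's recent time series and scores candidate interventions. The class name, feature count, intervention count, and sizes below are all hypothetical, not details of ICU Intervene itself.

```python
# Hypothetical illustration of a recurrent intervention predictor; nothing
# here is taken from the ICU Intervene papers.
import torch
import torch.nn as nn

class InterventionPredictor(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_interventions=4):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_interventions)  # one logit per intervention type

    def forward(self, vitals):        # vitals: (batch, time, n_features)
        _, (h, _) = self.rnn(vitals)  # summarize the patient's recent history
        return self.head(h[-1])       # logits for each candidate intervention

model = InterventionPredictor()
window = torch.randn(16, 24, 8)       # e.g. 16 patients, 24 hourly measurements
probs = torch.sigmoid(model(window))  # per-intervention probabilities
```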
How significant an impact can we reasonably expect AI to make on digital marketing? Rosy predictions have a knack for showing up late, floundering, or otherwise failing to fulfill their promise. At IBM Watson Advertising, CMO Randi Stipes looks at AI as a practical toolset that has been in place for a decade rather than as the vague promise of a rosier future. I recently asked Randi for an AI update. Paul Talbot: What's happening with AI at IBM that's different from how it's being deployed elsewhere?
IBM Watson Assistant has a new feature called Actions, which lets users develop dialogs rapidly. The approach taken with Actions is decidedly non-technical: the interface is intuitive and requires virtually no prior development knowledge or training. User input variables (entities) are picked up automatically, each with a descriptive reference.
Sébastien Bubeck, Ronen Eldan, Yin Tat Lee, Dan Mikulincer
In 1988, Eric B. Baum showed that two-layer neural networks with threshold activation function can perfectly memorize the binary labels of $n$ points in general position in $\mathbb{R}^d$ using only $\lceil n/d \rceil$ neurons. We observe that with ReLU networks, using four times as many neurons one can fit arbitrary real labels. Moreover, for approximate memorization up to error $\epsilon$, the neural tangent kernel can also memorize with only $O\left(\frac{n}{d} \cdot \log(1/\epsilon) \right)$ neurons (again assuming that the data is well dispersed). We show, however, that these constructions give rise to networks where the magnitudes of the neurons' weights are far from optimal. In contrast, we propose a new training procedure for ReLU networks, based on complex (as opposed to real) recombination of the neurons, for which we show approximate memorization with both $O\left(\frac{n}{d} \cdot \frac{\log(1/\epsilon)}{\epsilon}\right)$ neurons and nearly optimal size of the weights.
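The constructions in the abstract are analytical, but the memorization regime itself is easy to observe empirically. The sketch below is an assumption-laden illustration, not the paper's complex recombination procedure: it trains a two-layer ReLU network of width on the order of $n/d$ by plain gradient descent on arbitrary real labels, and with more parameters than constraints the training error typically drops to a small $\epsilon$. The choices of $n$, $d$, the width multiplier, and the optimizer are arbitrary.

```python
# Empirical illustration of memorization: fit n arbitrary real labels of
# points in R^d with a two-layer ReLU network. Plain gradient training for
# illustration only, not the paper's explicit constructions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 200, 20
width = 4 * n // d                        # width on the order of n/d, echoing the abstract

X = torch.randn(n, d)                     # n random points in R^d (general position w.h.p.)
y = torch.randn(n)                        # arbitrary real labels to memorize

net = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(20000):
    opt.zero_grad()
    loss = ((net(X).squeeze(1) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if loss.item() < 1e-3:                # approximate memorization up to a small error
        break
print(f"final training MSE after {step + 1} steps: {loss.item():.2e}")
```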
Sheng Liu, Jonathan Niles-Weed, Narges Razavian, Carlos Fernandez-Granda
We propose a novel framework to perform classification via deep learning in the presence of noisy annotations. When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an "early learning" phase, before eventually memorizing the examples with false labels. We prove that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and give a theoretical explanation in this setting. Motivated by these findings, we develop a new technique for noisy classification tasks, which exploits the progress of the early learning phase. In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization. There are two key elements to our approach. First, we leverage semi-supervised learning techniques to produce target probabilities based on the model outputs. Second, we design a regularization term that steers the model towards these targets, implicitly preventing memorization of the false labels. The resulting framework is shown to provide robustness to noisy annotations on several standard benchmarks and real-world datasets, where it achieves results comparable to the state of the art.
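To make the two key elements concrete, here is a minimal sketch of the idea: targets produced by a running (momentum) average of the model's own probabilities, plus a regularization term whose gradient pulls predictions toward those targets so the false labels are not memorized. The exact functional form, the momentum value, and the weight `lam` below are assumptions for illustration, not necessarily the paper's choices.

```python
# Minimal sketch of early-learning regularization: temporal-ensembled targets
# plus a term that steers predictions toward them. Hyperparameters and the
# exact regularizer form are illustrative assumptions.
import torch
import torch.nn.functional as F

class EarlyLearningRegularizer:
    def __init__(self, n_examples, n_classes, momentum=0.7, lam=3.0):
        self.targets = torch.zeros(n_examples, n_classes)  # one running soft target per example
        self.momentum, self.lam = momentum, lam

    def __call__(self, logits, noisy_labels, idx):
        probs = F.softmax(logits, dim=1)
        # Temporal ensembling: targets track the model's output during early
        # learning, when predictions on mislabeled examples still reflect the
        # true class; they are treated as constants (no gradient).
        with torch.no_grad():
            self.targets[idx] = self.momentum * self.targets[idx] + (1 - self.momentum) * probs
        ce = F.cross_entropy(logits, noisy_labels)
        agreement = (self.targets[idx] * probs).sum(dim=1)  # <t_i, p_i>
        # log(1 - <t, p>) decreases as predictions align with the targets, so
        # minimizing it pulls the model toward them and away from false labels.
        reg = torch.log(1.0 - agreement + 1e-8).mean()
        return ce + self.lam * reg

# Usage inside a training loop, where idx holds the dataset indices of the batch:
#   criterion = EarlyLearningRegularizer(n_examples=len(dataset), n_classes=10)
#   loss = criterion(model(x), y_noisy, idx)
```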
On 19 August 2020, IBM Watson Assistant launched autolearning. IBM's tagline is "Empower your skill to learn automatically with autolearning." This sounds very promising and is indeed a step in the right direction; the big question, of course, is to what extent it learns automatically. For a full and detailed report on Watson Assistant's Disambiguation Function, I suggest this article:

The ideal chatbot conversation is just that: conversation-like, in natural language, and highly unstructured.