Should You Derive, Or Let the Data Drive? An Optimization Framework for Hybrid First-Principles Data-Driven Modeling

Mathematical models are used extensively for diverse tasks, including analysis, optimization, and decision making. Frequently, these models are principled but imperfect representations of reality, either due to an incomplete physical description of the underlying phenomenon (simplified governing equations, defective boundary conditions, etc.) or due to numerical approximations (discretization, linearization, round-off error, etc.). Model misspecification can lead to erroneous model predictions and, consequently, to suboptimal decisions for the intended end-goal task. To mitigate this effect, one can amend the available model using limited data produced by experiments or higher-fidelity models. A large body of research has focused on estimating explicit model parameters. This work takes a different perspective and targets the construction of a correction model operator with implicit attributes. We investigate the case where the end goal is inversion and illustrate how appropriate choices of properties imposed upon the correction and corrected operators lead to improved end-goal insights.
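The core idea of amending an imperfect principled model with limited data can be illustrated with a minimal sketch. This is not the paper's framework; `principled_model` and `true_system` are hypothetical stand-ins, and the correction here is a simple polynomial fit to the observed residuals.

```python
import numpy as np

# Hypothetical setup: the "first-principles" model omits a linear term.
def principled_model(x):
    return x ** 2            # simplified governing equation

def true_system(x):
    return x ** 2 + 0.5 * x  # reality includes an unmodeled linear effect

# Limited "experimental" data: residuals between reality and the model.
x_data = np.linspace(-1.0, 1.0, 20)
residuals = true_system(x_data) - principled_model(x_data)

# Data-driven correction: fit the residual with a low-order polynomial.
coeffs = np.polyfit(x_data, residuals, deg=1)

def corrected_model(x):
    return principled_model(x) + np.polyval(coeffs, x)

# The corrected model should now match the true system closely.
max_err = np.max(np.abs(corrected_model(x_data) - true_system(x_data)))
```

The paper's contribution concerns corrections with *implicit* structural properties suited to inversion tasks, rather than an explicit residual fit like this one; the sketch only shows the general model-correction pattern.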

An Error Detection and Correction Framework for Connectomics

Neural Information Processing Systems

We define and study error detection and correction tasks that are useful for 3D reconstruction of neurons from electron microscopic imagery, and for image segmentation more generally. Both tasks take as input the raw image and a binary mask representing a candidate object. For the error detection task, the desired output is a map of split and merge errors in the object. For the error correction task, the desired output is the true object. We call this object mask pruning, because the candidate object mask is assumed to be a superset of the true object.
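The "object mask pruning" formulation can be made concrete with a toy example. The paper trains neural networks for this task; the sketch below merely illustrates the input/output contract with a hand-rolled connected-component prune (4-connectivity flood fill), where the candidate mask contains a merge error and the correction keeps only the component holding a seed point. All names and the seed location are illustrative assumptions.

```python
from collections import deque
import numpy as np

# Toy candidate mask: a superset of the true object (a neighbor was merged in).
candidate = np.zeros((5, 7), dtype=bool)
candidate[1:4, 1:3] = True   # the true object
candidate[1:4, 4:6] = True   # a falsely merged neighbor

def prune_mask(mask, seed):
    """Keep only the connected component of `mask` containing `seed`."""
    out = np.zeros_like(mask)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        in_bounds = 0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
        if in_bounds and mask[r, c] and not out[r, c]:
            out[r, c] = True
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return out

pruned = prune_mask(candidate, seed=(2, 1))  # seed inside the true object
```

Real split/merge errors in electron-microscopy segmentation are not separable by connectivity alone, which is why the task calls for learned error detection and correction rather than this kind of rule.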

In an April 12 Books article, Jacob Brogan misstated that Philip is the name of Mr. Slate's sloth in the Flintstones comic. Philip is Mr. Slate's turtle. Due to an editing error, an April 12 Politics article was originally published without its first paragraph. He is a senator from Massachusetts. She is the vice president of policy and research.

Machines learn to find patterns in quantum chaos

Christian Science Monitor | Science

January 17, 2017 -- The dream of useful quantum computing may have just come one step closer. Australian researchers are combining two of the hottest topics in science: quantum computing and machine learning. Specifically, they've succeeded in training an algorithm to predict the evolving state of a simple quantum computer. Such an understanding allows real-time stabilization of the system, much as a tightrope walker uses a pole for balance, according to a paper published Monday in Nature Communications. That would be a big deal for everyone, from Silicon Valley to Washington, D.C.

Statistical Context-Dependent Units Boundary Correction for Corpus-based Unit-Selection Text-to-Speech

Unlike conventional techniques for speaker adaptation, which attempt to improve the accuracy of the segmentation using acoustic models that are more robust to the speaker's characteristics, we aim to use only context-dependent characteristics extrapolated with linguistic analysis techniques. In simple terms, we use the intuitive idea that context-dependent information is tightly correlated with the related acoustic waveform. We propose a statistical model that predicts correcting values to reduce the systematic error produced by state-of-the-art Hidden Markov Model (HMM) based speech segmentation. In other words, we can predict how HMM-based Automatic Speech Recognition (ASR) systems interpret the waveform signal, determining the systematic error in different contextual scenarios. Our approach consists of two phases: (1) identifying context-dependent phonetic unit classes (for instance, the class that identifies vowels as the nucleus of monosyllabic words); and (2) building a regression model that associates with each class the mean error made by the ASR during the segmentation of a single-speaker corpus. The success of the approach is evaluated by comparing both the corrected unit boundaries and the state-of-the-art HMM segmentation against a reference alignment, which is assumed to be the optimal solution. The results of this study show that context-dependent correction of unit boundaries has a positive influence on the forced alignment, especially when the misinterpretation of the phone is driven by acoustic properties linked to the speaker's phonetic characteristics. In conclusion, our work supplies a first analysis of a model that is sensitive to speaker-dependent characteristics, robust to defective and noisy information, and very simple to implement, and that could serve as an alternative either to more expensive speaker-adaptation systems or to numerous manual correction sessions.
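The two-phase scheme (context classes, then a per-class error model) can be sketched minimally. The class names, the millisecond errors, and the use of a plain per-class mean (the simplest possible "regression model") are all illustrative assumptions, not the paper's actual data or model.

```python
import statistics
from collections import defaultdict

# Hypothetical training data: (context_class, boundary_error_ms) pairs, where
# error = HMM-predicted boundary minus the reference boundary.
train = [
    ("vowel_nucleus_monosyllabic", 12.0),
    ("vowel_nucleus_monosyllabic", 10.0),
    ("plosive_onset", -6.0),
    ("plosive_onset", -8.0),
]

# Phase 2: associate each context class with its mean systematic error.
by_class = defaultdict(list)
for cls, err in train:
    by_class[cls].append(err)
mean_error = {cls: statistics.mean(errs) for cls, errs in by_class.items()}

def correct_boundary(hmm_boundary_ms, context_class):
    # Subtract the predicted systematic error for this context class;
    # unseen classes are left uncorrected.
    return hmm_boundary_ms - mean_error.get(context_class, 0.0)
```

For example, `correct_boundary(100.0, "vowel_nucleus_monosyllabic")` shifts a boundary back by the class's mean error of 11 ms, yielding 89.0.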