A Consensual Linear Opinion Pool

AAAI Conferences

An important question when eliciting opinions from experts is how to aggregate the reported opinions. In this paper, we propose a pooling method to aggregate expert opinions. Intuitively, it works as if the experts were continuously updating their opinions in order to accommodate the expertise of others. Each updated opinion takes the form of a linear opinion pool, where the weight that an expert assigns to a peer's opinion is inversely related to the distance between their opinions. In other words, experts are assumed to prefer opinions that are close to their own opinions. We prove that such an updating process leads to consensus, i.e., the experts all converge towards the same opinion. Further, we show that if rational experts are rewarded using the quadratic scoring rule, then the assumption that they prefer opinions that are close to their own opinions follows naturally. We empirically demonstrate the efficacy of the proposed method using real-world data.
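
As a rough illustration of the updating process described above, the sketch below simulates repeated linear pooling in NumPy. It is only a schematic reading of the abstract, not the authors' construction: the weight function used here (exponential decay in the distance between opinions) is an assumed choice that is merely "inversely related" to distance, and the iteration count is arbitrary.

    import numpy as np

    def consensual_pool(opinions, n_iters=200):
        """Repeatedly replace each expert's opinion with a linear pool of all
        opinions, weighting peers more heavily the closer their opinions are.
        The weight function exp(-distance) is an illustrative assumption."""
        p = np.asarray(opinions, dtype=float)
        for _ in range(n_iters):
            new_p = np.empty_like(p)
            for i in range(len(p)):
                dists = np.linalg.norm(p - p[i], axis=1)   # distance from expert i to each peer
                weights = np.exp(-dists)                    # closer opinions get larger weight
                weights /= weights.sum()                    # normalise into a linear opinion pool
                new_p[i] = weights @ p                      # expert i's updated opinion
            p = new_p
        return p

    # Three experts' probability estimates for a binary event.
    experts = [[0.9, 0.1], [0.6, 0.4], [0.5, 0.5]]
    print(consensual_pool(experts))  # all rows converge towards a common opinion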


Learn the Concept of Linearity in Regression Models

@machinelearnbot

This tutorial covers the basics of linear regression, focusing on the concept of linearity and on which type of linearity is required. What does the term "linear" mean? A function Y = f(X) is said to be linear in X if X appears with a power or index of 1 only. Linear regression, however, always means linearity in the parameters, irrespective of linearity in the explanatory variables. The explanatory variable X can enter non-linearly, e.g. as X², and the model is still a linear regression; but if the parameters enter non-linearly, the model is no longer a linear regression.
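
A small NumPy example makes the distinction concrete (the data and coefficients below are made up for illustration): a model that is quadratic in X but linear in its parameters can still be fit by ordinary least squares.

    import numpy as np

    # y = b0 + b1*x + b2*x**2 is linear in the parameters (b0, b1, b2),
    # even though it is quadratic in x, so it is still a linear regression.
    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=200)
    y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(scale=0.3, size=x.shape)

    X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix: 1, x, x^2
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # ordinary least squares
    print(coef)                                       # close to [1.0, 2.0, -0.5]

    # By contrast, y = b0 * exp(b1 * x) is non-linear in its parameters and
    # cannot be fit directly by linear least squares.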


Optimal Sparse Linear Encoders and Sparse PCA

Neural Information Processing Systems

Principal components analysis (PCA) is the optimal linear encoder of data. Sparse linear encoders (e.g., sparse PCA) produce more interpretable features that can promote better generalization. Given a level of sparsity, what is the best approximation to PCA, and are there efficient algorithms that achieve it? We answer both questions by providing the first polynomial-time algorithms to construct optimal sparse linear auto-encoders; additionally, we demonstrate the performance of our algorithms on real data.
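
For readers unfamiliar with sparse linear encoders, the short scikit-learn sketch below contrasts dense PCA loadings with sparse ones on synthetic data. It uses the library's l1-penalised SparsePCA purely to illustrate the object of study; it is not the optimal polynomial-time construction proposed in the paper, and the data and penalty strength are arbitrary.

    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA

    # Synthetic data: two latent factors, each driving a disjoint block of features.
    rng = np.random.default_rng(0)
    n = 300
    z = rng.normal(size=(n, 2))
    X = 0.1 * rng.normal(size=(n, 10))
    X[:, :3] += z[:, [0]]          # features 0-2 follow factor 1
    X[:, 3:6] += z[:, [1]]         # features 3-5 follow factor 2
    X -= X.mean(axis=0)

    dense = PCA(n_components=2).fit(X)
    sparse = SparsePCA(n_components=2, alpha=0.5, random_state=0).fit(X)

    print(dense.components_.round(2))   # dense loadings: every feature gets some weight
    print(sparse.components_.round(2))  # sparse loadings: many entries are exactly zero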


Linear regression without correspondence

Neural Information Processing Systems

This article considers algorithmic and statistical aspects of linear regression when the correspondence between the covariates and the responses is unknown. First, a fully polynomial-time approximation scheme is given for the natural least squares optimization problem in any constant dimension. Next, in an average-case and noise-free setting where the responses exactly correspond to a linear function of i.i.d. standard normal covariates, an efficient algorithm is shown to exactly recover the unknown linear function. Finally, lower bounds on the signal-to-noise ratio are established for approximate recovery of the unknown linear function by any estimator.
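
To make the "unknown correspondence" setting concrete, the toy sketch below generates noise-free responses, shuffles them, and recovers the coefficients by exhaustively trying every permutation. This naive baseline is only meant to illustrate the problem; it is exponential in the number of samples and is not one of the algorithms analysed in the paper.

    import itertools
    import numpy as np

    # Responses are a shuffled copy of X @ w_true; the permutation is unknown.
    rng = np.random.default_rng(1)
    n, d = 7, 2
    X = rng.normal(size=(n, d))
    w_true = np.array([2.0, -1.0])
    y = rng.permutation(X @ w_true)

    best_sse, best_w = np.inf, None
    for perm in itertools.permutations(range(n)):       # brute force: n! candidate matchings
        y_perm = y[list(perm)]
        w, *_ = np.linalg.lstsq(X, y_perm, rcond=None)  # least squares for this matching
        sse = float(np.sum((X @ w - y_perm) ** 2))
        if sse < best_sse:
            best_sse, best_w = sse, w

    print("recovered:", best_w, "true:", w_true)  # noise-free case: exact recovery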


New Products

Science

Arbor Biosciences' myTXTL Linear DNA Expression Kit allows for the use of linear DNA, including PCR products, gene fragments, and synthesized DNA, as input for transcription (TX) and translation (TL) in an Escherichia coli–based cell-free platform. Removing the requirements for cloning genes into plasmids, transforming cells, and further selecting for clones will greatly accelerate the design–build–test cycle for synthetic biology research and sampling in protein-screening applications. Created for use in pilot-scale studies as well as high-throughput analysis on automated liquid-handling platforms, myTXTL Linear DNA Expression Kit delivers a complete solution for protein research and analysis. The E. coli–based system is a ready-to-use master mix containing all the critical components required for cell-free protein production. End-users simply need to add their linear DNA to the master mix and incubate for as little as 30 min before analyzing the expressed product.