Goto

Collaborating Authors

Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning

Neural Information Processing Systems

In PU learning, a binary classifier is trained from positive (P) and unlabeled (U) data without negative (N) data. Although N data are missing, PU learning sometimes outperforms PN learning (i.e., ordinary supervised learning). Hitherto, neither theoretical nor experimental analysis has been given to explain this phenomenon. In this paper, we theoretically compare PU (and NU) learning against PN learning based on upper bounds on the estimation errors. We find simple conditions under which PU and NU learning are likely to outperform PN learning, and we prove that, in terms of these upper bounds, either PU or NU learning (depending on the class-prior probability and the sizes of the P and N data) given infinite U data will improve on PN learning.
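This line of work typically builds on risk estimators that rewrite the classification risk using P and U data only, so that no N data are needed. As a hedged illustration (not taken from the paper itself), the following is a minimal NumPy sketch of the standard unbiased PU risk estimator, R_PU(g) = π E_P[ℓ(g(x))] + E_U[ℓ(−g(x))] − π E_P[ℓ(−g(x))], assuming a sigmoid surrogate loss, a known class prior π, and a toy linear scorer; the data and names are illustrative.

```python
import numpy as np

def sigmoid_loss(z):
    """Sigmoid surrogate loss l(z) = 1 / (1 + exp(z)); note l(z) + l(-z) = 1."""
    return 1.0 / (1.0 + np.exp(z))

def pu_risk(g_p, g_u, prior):
    """Unbiased PU estimate of the classification risk.

    g_p   : classifier scores g(x) on positive (P) samples
    g_u   : classifier scores g(x) on unlabeled (U) samples
    prior : class-prior probability pi = p(y = +1), assumed known
    """
    risk_p_pos = prior * np.mean(sigmoid_loss(g_p))    # pi * E_P[l( g(x))]
    risk_p_neg = prior * np.mean(sigmoid_loss(-g_p))   # pi * E_P[l(-g(x))]
    risk_u_neg = np.mean(sigmoid_loss(-g_u))           # E_U[l(-g(x))]
    # R_PU(g) = pi*E_P[l(g)] + ( E_U[l(-g)] - pi*E_P[l(-g)] )
    return risk_p_pos + risk_u_neg - risk_p_neg

# Toy usage: a fixed linear scorer g(x) = w * x on synthetic 1-D data.
rng = np.random.default_rng(0)
x_p = rng.normal(+1.0, 1.0, size=200)    # positive samples
x_u = rng.normal(0.0, 1.5, size=1000)    # unlabeled mixture of P and N
print("empirical PU risk:", pu_risk(1.0 * x_p, 1.0 * x_u, prior=0.4))
```

The sigmoid loss is convenient here because ℓ(z) + ℓ(−z) = 1 keeps the estimator simple; any other surrogate loss could be substituted in the same template.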


A Comprehensive Learning Path for Deep Learning in 2020

#artificialintelligence

What a time to be working in the deep learning space! Deep learning is ubiquitous right now. From the top research labs in the world to startups looking to design solutions, deep learning is at the heart of the current technological revolution. We are living in a deep learning wonderland! Whether it's Computer Vision applications or breakthroughs in the field of Natural Language Processing (NLP), organizations are looking for a piece of the deep learning pie.


14 Different Types of Learning in Machine Learning

#artificialintelligence

Some machine learning algorithms do not just experience a fixed dataset. For example, reinforcement learning algorithms interact with an environment, so there is a feedback loop between the learning system and its experiences. The use of an environment means that there is no fixed training dataset, but rather a goal or set of goals that an agent is required to achieve, actions it may perform, and feedback about its performance toward the goal.
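As a hedged illustration of that feedback loop (not from the article), here is a minimal tabular Q-learning sketch in Python: a toy corridor environment supplies the goal, the available actions, and the feedback, and the agent learns purely from interaction rather than from a fixed dataset. The environment, rewards, and hyperparameters are all illustrative.

```python
import random

# Toy corridor environment: states 0..4, goal at state 4; actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Environment feedback: returns (next_state, reward, done)."""
    next_state = min(max(state + (1 if action == 1 else -1), 0), N_STATES - 1)
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

def greedy(q_values):
    """Pick the highest-valued action, breaking ties randomly."""
    best = max(q_values)
    return random.choice([a for a, v in enumerate(q_values) if v == best])

# Tabular Q-learning agent: learning happens inside the interaction loop,
# driven by feedback, not by a fixed training dataset.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(300):
    state = 0
    for t in range(100):                                  # cap episode length
        action = random.randrange(2) if random.random() < epsilon else greedy(Q[state])
        next_state, reward, done = step(state, action)
        # Update the agent from the environment's feedback.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
        if done:
            break

print("greedy action per state:", [greedy(Q[s]) for s in range(N_STATES)])
```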


Top 10 deep learning frameworks everyone should know

#artificialintelligence

This is the age of artificial intelligence. Machine learning and predictive analytics are now established and integral to just about every modern business, but artificial intelligence expands the scale of what's possible within those fields. It's what makes deep learning possible. Systems with greater autonomy and complexity can solve correspondingly complex problems.


What's Next in AI? Self-supervised Learning

#artificialintelligence

Self-supervised learning is one of those recent ML methods that has caused a ripple effect in the data science community, yet it has so far been flying under the radar as far as the Entrepreneurs and Fortunes of the world go; the general public has yet to learn about the idea, but much of the AI community considers it revolutionary. The paradigm holds immense potential for enterprises too, as it can help tackle deep learning's most daunting issue: data/sample inefficiency and the resulting costly training. Yann LeCun said that if knowledge were a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake: we know how to make the icing and the cherry, but we don't know how to make the cake. He added that unsupervised learning won't progress much while there remains a massive conceptual disconnect about how exactly it should work, calling it the dark matter of ...
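To make the idea concrete (this example is not from the article), below is a minimal NumPy sketch of a self-supervised pretext task: the supervision signal is manufactured from unlabeled sequences by masking each sequence's last value and predicting it from the rest, with a least-squares predictor standing in for a neural encoder. The data and the task are purely illustrative.

```python
import numpy as np

# Unlabeled "sensor" sequences: no human-provided labels anywhere.
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(500, 8)), axis=1)

# Pretext task: mask the last element of each sequence and predict it
# from the rest -- the label is manufactured from the data itself.
inputs, targets = X[:, :-1], X[:, -1]

# Fit a linear predictor by least squares (a stand-in for a neural encoder).
w, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
pred = inputs @ w
print("pretext-task MSE:", float(np.mean((pred - targets) ** 2)))

# In practice, the representation learned on the pretext task would then be
# fine-tuned on a small labeled downstream task, which is where the claimed
# data-efficiency gains come from.
```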