In PU learning, a binary classifier is trained from positive (P) and unlabeled (U) data, without negative (N) data. Although N data are missing, PU learning sometimes outperforms PN learning (i.e., ordinary supervised learning). Hitherto, neither theoretical nor experimental analysis has been given to explain this phenomenon. In this paper, we theoretically compare PU learning (and its counterpart NU learning, from negative and unlabeled data) against PN learning based on upper bounds on the estimation errors. We derive simple conditions under which PU and NU learning are likely to outperform PN learning, and we prove that, in terms of these upper bounds, either PU or NU learning (depending on the class-prior probability and the sizes of the P and N data) improves on PN learning given infinite U data.
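As a concrete illustration (not taken from the paper itself), the key trick behind PU learning is that the usual PN risk can be rewritten using only P and U data once the class prior is known. A minimal sketch of such an unbiased PU risk estimate is below; the function names, the sigmoid surrogate loss, and the inputs are all hypothetical choices for illustration:

```python
import numpy as np

def sigmoid_loss(z):
    # A smooth surrogate loss: large positive scores z incur ~0 loss,
    # large negative scores incur ~1 loss.
    return 1.0 / (1.0 + np.exp(z))

def pu_risk(scores_p, scores_u, prior):
    """Unbiased PU estimate of the PN classification risk.

    Uses the identity  R(f) = pi * E_P[l(f(x))] + E_U[l(-f(x))] - pi * E_P[l(-f(x))],
    where pi is the class prior p(y = +1), so only positive (P) and
    unlabeled (U) classifier scores are needed -- no negative data.
    """
    risk_p_as_pos = np.mean(sigmoid_loss(scores_p))    # E_P[l(f(x))]
    risk_u_as_neg = np.mean(sigmoid_loss(-scores_u))   # E_U[l(-f(x))]
    risk_p_as_neg = np.mean(sigmoid_loss(-scores_p))   # E_P[l(-f(x))]
    return prior * risk_p_as_pos + risk_u_as_neg - prior * risk_p_as_neg
```

For example, a classifier that scores positives high and half of the unlabeled pool low achieves a PU risk near zero, exactly as the corresponding PN risk would.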
Understanding how artificial intelligence works can seem overwhelming, but it largely comes down to two concepts: machine learning and deep learning. The two terms are often used interchangeably, as if they meant the same thing, but they do not. Neither term is new, but the way they are used to describe intelligent machines has kept changing.
Machine learning is a concept as old as computers. In 1950, Alan Turing created the Turing Test, a test to see whether a machine could convince a human that it was a human and not a computer. Soon after, in 1952, Arthur Samuel designed the first computer program that could learn as it ran. It was a checkers game: the computer learned the player's patterns during a match and then used that knowledge to improve its own next moves.
Some machine learning algorithms do not just experience a fixed dataset. For example, reinforcement learning algorithms interact with an environment, so there is a feedback loop between the learning system and its experiences. The use of an environment means that there is no fixed training dataset; instead there is a goal or set of goals that an agent is required to achieve, actions it may perform, and feedback about its performance toward the goal.
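This agent-environment feedback loop can be sketched with a toy example. The environment, agent strategy, and all names below are illustrative assumptions (a two-armed Bernoulli bandit with an epsilon-greedy agent), not a specific algorithm from the text:

```python
import random

class BernoulliBandit:
    """Toy environment: each arm pays reward 1 with a fixed probability."""
    def __init__(self, probs, seed=0):
        self.probs = probs
        self.rng = random.Random(seed)

    def step(self, arm):
        # Feedback depends on the action the agent just took.
        return 1 if self.rng.random() < self.probs[arm] else 0

def run_agent(env, n_arms, steps=5000, eps=0.1, seed=1):
    """Epsilon-greedy agent: no fixed dataset, only interaction feedback."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = env.step(arm)                                 # feedback loop
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # learn from it
    return values
```

Running `run_agent(BernoulliBandit([0.2, 0.8]), 2)` lets the agent discover, purely from feedback, that the second arm pays better: its estimated value converges toward 0.8.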