

Hot papers on arXiv from the past month – July 2020

AIHub

Here are the most tweeted papers that were uploaded to arXiv during July 2020. Results are powered by Arxiv Sanity Preserver. Abstract: Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information. However, that knowledge exists only within the latent parameters of the model, inaccessible to inspection and interpretation, and even worse, factual information memorized from the training corpora is likely to become stale as the world changes. Knowledge stored as parameters will also inevitably exhibit all of the biases inherent in the source materials.


How does Artificial Intelligence Contribute to Robotic System Design?

#artificialintelligence

Artificial intelligence is en route to changing all industries, and the robotics industry is no exception. The innovative combination of AI and robotics has already created a number of futuristic possibilities across industry domains. While many of us imagine that most robots will be humanoid ten years from now, in many environments robots are designed to emulate a range of behaviors, and their physical form will reflect the best fit for those behaviors. An exception will likely be robots that provide medical or other care or companionship for humans, and perhaps service robots that are meant to establish a more personal and 'humanized' relationship. Though related, some would argue that the correct term is machine vision or robot vision rather than computer vision, because "robots seeing" involves more than just computer algorithms; engineers and roboticists also have to account for the camera hardware that allows robots to process physical data.


Mapping the world to help aid workers, with weakly, semi-supervised learning

#artificialintelligence

When disaster or disease strikes, relief agencies respond more effectively when they have detailed mapping tools to know exactly where to deliver assistance. But extremely reliable and precise maps often are not available. So, our team, composed of artificial intelligence researchers and data scientists in Facebook's Boston office, used our computer vision expertise to create and share population density maps that are more accurate and higher resolution than any of their predecessors. Building on our previous publication of similar high-resolution population maps for 22 countries, we're now releasing new maps of the majority of the African continent, and the project will eventually map nearly the whole world's population. When it is completed, humanitarian agencies will be able to determine how populations are distributed even in remote areas, so that health care workers can better reach households and relief workers can better distribute aid.


Solving fruits classification problem in Python – Sushrut Tendulkar

#artificialintelligence

In this blog post we'll try to understand how to do a simple classification on fruits data. The dataset contains fruit names as the target variable and mass, width, height, and color score as features. It is a simple dataset with fewer than 100 training examples. To understand the distribution of fruit names, let's plot the count of each category using the seaborn library. It looks like all the fruits are equally distributed except mandarin.
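The class-balance check the post does with seaborn's countplot can be sketched without plotting. This is a minimal sketch: the per-class counts below are assumed from a common version of the fruits dataset, not quoted from the post.

```python
from collections import Counter

# Hypothetical sample of the fruit_name target column (a common version of
# this dataset has 59 rows spread over four classes).
fruit_names = (
    ["apple"] * 19 + ["mandarin"] * 5 + ["orange"] * 19 + ["lemon"] * 16
)

# Count how many training examples fall in each class.
distribution = Counter(fruit_names)

def is_roughly_balanced(counts, tolerance=0.5):
    """Flag each class as balanced if its count is within `tolerance`
    (as a fraction) of the mean class count."""
    mean = sum(counts.values()) / len(counts)
    return {cls: abs(n - mean) / mean <= tolerance for cls, n in counts.items()}

balance = is_roughly_balanced(distribution)
print(distribution)
print(balance)  # mandarin stands out as under-represented
```

With these assumed counts, every class except mandarin sits near the mean, matching the post's observation from the countplot.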


Types of Machine Learning - Supervised, Unsupervised, Reinforcement - TechVidvan

#artificialintelligence

Machine Learning is a vast subject, and every individual field in ML is an area of research in itself. The subject is expanding at a rapid rate as new areas of study constantly come forward. For an overall insight into the subject, we have categorized ML into various segments. In this article, we will look at those types of Machine Learning and learn about each of them. So, let's start learning right away. Here, we will discuss the four basic types of learning that we are all familiar with.


Overcoming Small Minirhizotron Datasets Using Transfer Learning - Alina Zare - Machine Learning and Sensing Lab

#artificialintelligence

Minirhizotron technology is widely used for studying the development of roots. Such systems collect visible-wavelength color imagery of plant roots in situ by scanning an imaging system within a clear tube driven into the soil. Automated analysis of root systems could facilitate new scientific discoveries that would be critical to addressing the world's pressing food, resource, and climate issues. A key component of automated analysis of plant roots from imagery is the automated pixel-level segmentation of roots from their surrounding soil. Supervised learning techniques appear to be an appropriate tool for the challenge due to varying local soil and root conditions; however, the lack of sufficient annotated training data is a major limitation, because the manual labeling process is error-prone and time-consuming. In this paper, we investigate the use of deep neural networks based on the U-net architecture for automated, precise, pixel-wise root segmentation in minirhizotron imagery.
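Pixel-wise segmentation quality is commonly scored with an overlap metric such as the Dice coefficient. The following is a minimal sketch of that standard metric on toy masks, not the paper's own evaluation code:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks given as flat 0/1 sequences.

    1.0 means the predicted root pixels match the annotation exactly;
    0.0 means no overlap at all. `eps` avoids division by zero on empty masks.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

# Toy 2x2 masks flattened to lists: the prediction marks two root pixels,
# the annotation marks only one of them.
pred = [1, 1, 0, 0]
target = [1, 0, 0, 0]
score = dice_coefficient(pred, target)
print(round(score, 3))  # 0.667
```

In practice the masks would be the network's thresholded output and the hand-drawn annotation for an entire minirhizotron image.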


The (Recent) History of Self-Supervised Learning - Security Boulevard

#artificialintelligence

Real unsupervised AI spots security issues sooner and predicts future behavior more accurately than older first- and second-wave solutions. Self-supervised AI technology draws on an understanding of the fundamental nature of the network where it lives, an understanding that isn't possible with supervised AI.


Feature Engineering in SQL and Python: A Hybrid Approach - KDnuggets

#artificialintelligence

I knew SQL long before learning about Pandas, and I was intrigued by the way Pandas faithfully emulates SQL. Stereotypically, SQL is for analysts, who crunch data into informative reports, whereas Python is for data scientists, who use data to build (and overfit) models. Although the two are almost functionally equivalent, I'd argue both tools are essential for a data scientist to work efficiently. From my experience with Pandas, I noticed several recurring problems, and they were naturally solved when I began doing feature engineering directly in SQL. If you know a little bit of SQL, it's time to put it to good use.
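As an illustration of feature engineering directly in SQL, here is a minimal sketch using Python's built-in sqlite3 module. The `orders` table and its columns are invented for the example, not taken from the article:

```python
import sqlite3

# Toy transactions table; in practice this data would live in a warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, 10.0), (1, 30.0), (2, 5.0)],
)

# Feature engineering in SQL: per-user order count and mean spend --
# the same features a Pandas groupby/agg would produce.
rows = conn.execute(
    """
    SELECT user_id,
           COUNT(*)    AS n_orders,
           AVG(amount) AS avg_amount
    FROM orders
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()
print(rows)  # [(1, 2, 20.0), (2, 1, 5.0)]
```

The equivalent Pandas call would be roughly `df.groupby("user_id")["amount"].agg(["count", "mean"])`; pushing the aggregation into the database means only the finished features cross the wire.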


Google Brain's SimCLRv2 Achieves New SOTA in Semi-Supervised Learning

#artificialintelligence

Following the February release of its contrastive learning framework SimCLR, the same team of Google Brain researchers, guided by Turing Award honouree Dr. Geoffrey Hinton, has presented SimCLRv2, an upgraded approach that boosts SOTA results by 21.6 percent. The updated framework takes the "unsupervised pretrain, supervised fine-tune" paradigm popular in natural language processing and applies it to image recognition. Unlabelled data is learned in a task-agnostic way in the pretraining phase, which means the model has no prior classification knowledge. The researchers find that using a deep and wide neural network can be more label-efficient and greatly improve accuracy. Unlike SimCLR, whose largest model is a ResNet-50, SimCLRv2's largest model is a 152-layer ResNet that is three times wider in its channels and uses selective kernels.
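At the heart of SimCLR's contrastive pretraining is the NT-Xent loss, which pulls the two augmented views of an image together and pushes all other images in the batch apart. Below is a minimal NumPy sketch of that loss, assuming embeddings are arranged so that rows 2k and 2k+1 form a positive pair; it is an illustration, not the authors' implementation:

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """NT-Xent loss over a batch of 2N embeddings, where rows 2k and 2k+1
    are the two augmented views of the same image."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / tau                               # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    pos = np.arange(len(z)) ^ 1                       # partner index: (0,1),(2,3),...
    log_prob = sim[np.arange(len(z)), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

# Views of the same image that embed identically give a low loss ...
aligned = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
# ... while mismatched pairs give a higher one.
shuffled = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
```

Minimizing this loss is what lets the pretraining phase proceed with no classification labels at all.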


Self Supervised Representation Learning in NLP

#artificialintelligence

While Computer Vision has made amazing progress on self-supervised learning only in the last few years, self-supervised learning has been a first-class citizen in NLP research for quite a while. Language models have existed since the 90s, even before the phrase "self-supervised learning" was coined. The Word2Vec paper from 2013 popularized this paradigm, and the field has rapidly progressed, applying these self-supervised methods to many problems. At the core of these self-supervised methods lies a framing called the "pretext task", which allows us to use the data itself to generate labels and then use supervised methods to solve unsupervised problems. These are also referred to as "auxiliary tasks" or "pre-training tasks". The representations learned by performing this task can be used as a starting point for our downstream supervised tasks.
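The pretext-task idea can be made concrete with Word2Vec's skip-gram setup: labels are generated from raw text itself by pairing each word with its neighbours. A minimal sketch (the sentence is an invented example):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs from raw tokens.

    No human labels are needed: the pretext task turns unlabelled text
    into supervised (input, target) examples.
    """
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

tokens = "self supervised learning uses data itself".split()
pairs = skipgram_pairs(tokens, window=1)
print(pairs[:3])
```

A model trained to predict the context word from the center word ends up with word representations that transfer to downstream supervised tasks.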