
Simultaneous clustering and representation learning

AIHub

The success of deep learning over the last decade, particularly in computer vision, has depended greatly on large training data sets. Even though progress in this area has boosted performance on many tasks such as object detection, recognition, and segmentation, the main bottleneck to further improvement is the need for more labeled data. Self-supervised learning is among the most promising alternatives for learning useful representations from unlabeled data. In this article, we will briefly review the self-supervised learning methods in the literature and discuss the findings of a recent self-supervised learning paper from ICLR 2020 [14]. It is reasonable to assume that most learning problems can be tackled given clean labels and more data obtained in an unsupervised way.


Global Big Data Conference

#artificialintelligence

Autoencoders are neural networks that serve a range of machine learning tasks, from denoising to dimensionality reduction. Seven use cases explore the practical application of autoencoder technology. Developers frequently turn to autoencoders to organize data for machine learning algorithms, improving the efficiency and accuracy of those algorithms with less effort from data scientists. Data scientists can add autoencoders as additional tools for applications that require data denoising, nonlinear dimensionality reduction, sequence-to-sequence prediction, and feature extraction. Autoencoders have a particular advantage over classic machine learning techniques like principal component analysis for dimensionality reduction in that they can learn nonlinear representations of the data -- and so work particularly well for feature extraction. A minimal denoising-autoencoder sketch follows; the architecture, noise level, and placeholder data are our own illustrative assumptions, not taken from the article.
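```python
# Minimal denoising-autoencoder sketch (one of the use cases above).
# Layer sizes, noise level, and data are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784                                  # e.g. flattened 28x28 images
x_clean = np.random.rand(1000, input_dim).astype("float32")   # placeholder data
x_noisy = x_clean + 0.1 * np.random.randn(1000, input_dim).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),         # compressed representation
    layers.Dense(128, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

# Key idea: the input is the corrupted signal, the target is the clean one,
# so the network must learn structure rather than copy its input.
model.fit(x_noisy, x_clean, epochs=10, batch_size=64, verbose=0)
denoised = model.predict(x_noisy)
```

The design choice that makes this a denoiser rather than a plain autoencoder is training on (noisy, clean) pairs instead of (x, x) pairs.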


DeText: A deep NLP framework for intelligent text understanding

#artificialintelligence

Natural language processing (NLP) technologies are widely deployed to process rich natural language text data for search and recommender systems. Achieving high-quality search and recommendation results requires that information, such as user queries and documents, be processed and understood in an efficient and effective manner. In recent years, the rapid development of deep learning models has proven successful for improving various NLP tasks, indicating vast potential for further improving the accuracy of search and recommender systems. Deep learning-based NLP technologies like BERT (Bidirectional Encoder Representations from Transformers) have recently made headlines for showing significant improvements in areas such as semantic understanding compared with prior NLP techniques. However, exploiting the power of BERT in search and recommender systems is a non-trivial task, due to the heavy computational cost of BERT models. In this blog post, we will introduce DeText, a state-of-the-art open source NLP framework for text understanding.


Proposed voice analysis framework preserves both accuracy and privacy

#artificialintelligence

Imperial College London researchers claim they've developed a voice analysis method that supports applications like speech recognition and identification while removing sensitive attributes such as emotion, gender, and health status. Their framework receives voice data and privacy preferences as auxiliary information and uses the preferences to filter out sensitive attributes that could otherwise be extracted from recorded speech. Voice signals are a rich source of data, containing linguistic and paralinguistic information including age, likely gender, health status, personality, mood, and emotional state. This raises concerns in cases where raw data is transmitted to servers; attacks like attribute inference can reveal attributes not intended to be shared. In fact, the researchers assert attackers could use a speech recognition model to learn further attributes from users, leveraging the model's outputs to train attribute-inferring classifiers. They posit such attackers could achieve attribute inference accuracy ranging from 40% to 99.4% -- three or four times better than guessing at random -- depending on the acoustic conditions of the inputs.


Hot papers on arXiv from the past month – July 2020

AIHub

Here are the most tweeted papers that were uploaded to arXiv during July 2020. Results are powered by Arxiv Sanity Preserver. Abstract: Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information. However, that knowledge exists only within the latent parameters of the model, inaccessible to inspection and interpretation, and, even worse, factual information memorized from the training corpora is likely to become stale as the world changes. Knowledge stored as parameters will also inevitably exhibit all of the biases inherent in the source materials.


Analyzing the Performance of the Classification Models in Machine Learning

#artificialintelligence

A confusion matrix (also called an error matrix) is used to analyze how well classification models (like Logistic Regression, Decision Tree Classifier, etc.) perform. Why do we analyze the performance of the models? Analyzing the performance of a model helps us find and eliminate bias and variance problems, if they exist, and also helps us fine-tune the model so that it produces more accurate results. The confusion matrix is usually applied to binary classification problems but can be extended to multi-class classification problems as well. Concepts are comprehended better when illustrated with examples, so let us consider one: the sketch below computes a confusion matrix with scikit-learn, using synthetic data and a logistic regression model as illustrative assumptions.
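```python
# Minimal sketch: a confusion matrix for a binary classifier with
# scikit-learn. The synthetic data set is an illustrative assumption.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# scikit-learn's convention: rows are true classes, columns are predicted
# classes, so for labels {0, 1} the matrix reads
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_test, y_pred)
print(cm)
```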


"What's that? Reinforcement Learning in the Real-world?"

#artificialintelligence

Reinforcement Learning offers a distinctive way of solving the machine learning puzzle. Its sequential decision-making ability and its suitability for tasks requiring a trade-off between immediate and long-term returns are some of the qualities that make it desirable in settings where supervised or unsupervised learning approaches would, in comparison, not fit as well. Because agents start with zero knowledge and then learn qualitatively good behaviour through interaction with the environment, it's almost fair to say Reinforcement Learning (RL) is the closest thing we have yet to Artificial General Intelligence. We can see RL being used in robotics control and treatment design in healthcare, among other areas; so why aren't we boasting of many RL agents being scaled up to real-world production systems? There's a reason why games, like Atari, are such nice RL benchmarks -- they let us care only about maximizing the score without worrying about designing a reward function.


On Learning Invariant Representations for Domain Adaptation

#artificialintelligence

In domain adaptation, the source (training) domain is related to but different from the target (testing) domain. During training, the algorithm has access only to labeled samples from the source domain and unlabeled samples from the target domain, and the goal is to generalize to the target domain. One of the backbone assumptions underpinning the generalization theory of supervised learning algorithms is that the test distribution is the same as the training distribution. However, in many real-world applications it is usually time-consuming or even infeasible to collect labeled data from all the possible scenarios where our learning system is going to be deployed. Stated in symbols (our notation, not the post's), the setup is formalized below.
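```latex
% Unsupervised domain adaptation, stated formally (our notation).
% Given a labeled source sample S = \{(x_i, y_i)\}_{i=1}^{n} drawn from D_S
% and an unlabeled target sample T = \{x_j\}_{j=1}^{m} drawn from D_T,
% choose a hypothesis h that minimizes the target risk
\varepsilon_T(h) \;=\; \Pr_{(x,y)\sim \mathcal{D}_T}\bigl[\, h(x) \neq y \,\bigr],
% even though no labels from D_T are ever observed during training.
```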


This AI Model Can Predict If You Are A Job Hopper Or Not

#artificialintelligence

Voluntary employee turnover can have a direct financial impact on organisations. And at a time when the pandemic outbreak has the majority of organisations looking to cut down their employee costs, voluntary employee turnover is a big concern for companies. Thus, the ability to predict the turnover rate of employees can not only help in making informed hiring decisions but can also help avoid substantial financial losses in this uncertain time. Acknowledging that, researchers and data scientists from PredictiveHire, an AI recruiting startup, built a language model that can analyse a candidate's answers to open-ended interview questions to infer the likelihood of the candidate's job-hopping. The study -- led by Madhura Jayaratne and Buddhi Jayatilleke -- was conducted on the responses of 45,000 job applicants, who were interviewed via a chatbot and also self-rated their likelihood of hopping jobs.


Autoencoders' example uses augment data for machine learning

#artificialintelligence

Until recently, the study of autoencoders had primarily been an academic pursuit, said Nathan White, lead consultant at AIM Consulting. However, there are now many applications where machine learning practitioners should look to autoencoders as their tool of choice. An autoencoder consists of a pair of deep learning networks, an encoder and a decoder. The encoder learns an efficient way of compressing the input into a smaller, dense representation, called the bottleneck layer. After training, the decoder converts this representation back into an approximation of the original input. A minimal sketch of this encoder/decoder pairing follows, using the Keras functional API; the layer sizes are illustrative assumptions.
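```python
# Sketch of the encoder/decoder pair described above. Sizes are
# illustrative assumptions; swap in real data to train.
from tensorflow import keras
from tensorflow.keras import layers

input_dim, bottleneck_dim = 784, 32              # e.g. flattened 28x28 images

inputs = keras.Input(shape=(input_dim,))
h = layers.Dense(128, activation="relu")(inputs)
bottleneck = layers.Dense(bottleneck_dim, activation="relu",
                          name="bottleneck")(h)  # the dense representation

h = layers.Dense(128, activation="relu")(bottleneck)
outputs = layers.Dense(input_dim, activation="sigmoid")(h)

autoencoder = keras.Model(inputs, outputs)       # encoder + decoder, end to end
encoder = keras.Model(inputs, bottleneck)        # just the compression half
autoencoder.compile(optimizer="adam", loss="mse")
```

Training the autoencoder on (x, x) pairs fits both halves at once; afterwards the standalone encoder model yields the bottleneck representation directly, while the decoder half reconstructs an approximation of the input from it.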