

What is the transformer machine learning model?

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In recent years, the transformer model has become one of the main highlights of advances in deep learning and deep neural networks. It is mainly used for advanced applications in natural language processing. Google is using it to enhance its search engine results. OpenAI has used transformers to create its famous GPT-2 and GPT-3 models.


Development and Comparison of Scoring Functions in Curriculum Learning

arXiv.org Artificial Intelligence

Curriculum Learning is the presentation of samples to the machine learning model in a meaningful order instead of a random order. The main challenge of Curriculum Learning is determining how to rank these samples. The ranking of the samples is expressed by the scoring function. In this study, scoring functions were compared: functions based on dataset features, on the model to be trained, on another model, and on ensembles of these. Experiments were performed on four image and four text datasets. No significant differences were found between scoring functions for the text datasets, but for the image datasets the scoring functions built with transfer learning achieved significant improvements over classical model training and over the other scoring functions. This suggests that new scoring functions remain to be discovered for text classification tasks.
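
A minimal sketch of the general idea (not the paper's code): a scoring function ranks samples by difficulty, here approximated by the prediction confidence of a simple "teacher" model standing in for a transfer-learned scorer, and the student model is then trained on progressively harder portions of the data.

```python
# Curriculum-learning sketch: rank samples with a scoring function,
# then feed them to the model easiest-first in growing chunks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Scoring function: confidence of a teacher model acts as a difficulty
# estimate (high confidence on the true label = easy sample).
teacher = LogisticRegression(max_iter=1000).fit(X, y)
confidence = teacher.predict_proba(X)[np.arange(len(y)), y]
order = np.argsort(-confidence)          # easiest samples first

# Present samples to the student in curriculum order.
student = LogisticRegression(max_iter=1000, warm_start=True)
for frac in (0.25, 0.5, 0.75, 1.0):
    subset = order[: int(frac * len(order))]
    student.fit(X[subset], y[subset])    # retrain on progressively harder data

print("final training accuracy:", student.score(X, y))
```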


Combining Deep Learning and Reasoning for Address Detection in Unstructured Text Documents

arXiv.org Artificial Intelligence

Extracting information from unstructured text documents is a demanding task, since these documents can have a broad variety of layouts and a non-trivial reading order, as is the case for multi-column documents or nested tables. Additionally, many business documents are received in paper form, meaning that the textual contents need to be digitized before further analysis. Nonetheless, automatic detection and capture of crucial document information such as the sender address would boost many companies' processing efficiency. In this work we propose a hybrid approach that combines deep learning with reasoning for finding and extracting addresses from unstructured text documents. We use a visual deep learning model to detect the boundaries of possible address regions on the scanned document images and validate these results by analyzing the contained text using domain knowledge represented as a rule-based system.
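
An illustrative sketch of the second, rule-based half of such a hybrid pipeline, under the assumption that a visual detector and an OCR engine (both omitted here) already produce candidate regions with their text; the specific rules are invented for illustration, not taken from the paper.

```python
# Rule-based validation of candidate address regions produced by a
# (hypothetical) visual detector + OCR step.
import re

def validate_address(text: str) -> bool:
    """Does the candidate region look like a postal address?"""
    rules = [
        re.search(r"\b\d{5}\b", text) is not None,                        # postal code
        re.search(r"\b(str\.?|street|road|ave|avenue)\b", text, re.I) is not None,
        re.search(r"\b\d{1,4}[a-z]?\b", text) is not None,                # house number
    ]
    return sum(rules) >= 2   # accept if at least two rules fire

def extract_addresses(candidate_regions):
    """candidate_regions: list of (bounding_box, ocr_text) pairs."""
    return [(box, text) for box, text in candidate_regions if validate_address(text)]

regions = [((10, 10, 200, 60), "ACME GmbH, Main Street 42, 10115 Berlin"),
           ((10, 80, 200, 120), "Invoice number 2021-044")]
print(extract_addresses(regions))
```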


NLP Problem Solving Overview

#artificialintelligence

Several abstract and other pure sciences have applications that intersect with NLP. Examples include the Whale Optimization Algorithm, Particle Swarm Optimization, and Genetic Algorithms. All of these fall under optimization techniques, but the key science behind them comes from biology or the physical sciences. These techniques are used, for instance, to optimise the parameters of a deep learning model in, say, a recommender system or a fake-news-detection NLP task.
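
A toy sketch of that idea: a tiny genetic algorithm searching for a single model hyperparameter (here a ridge penalty). The setup and numbers are illustrative assumptions, not drawn from the article.

```python
# Genetic-algorithm-style hyperparameter search (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)
rng = np.random.default_rng(0)

def fitness(alpha: float) -> float:
    return cross_val_score(Ridge(alpha=alpha), X, y, cv=3).mean()

population = rng.uniform(0.01, 10.0, size=8)           # candidate alphas
for _ in range(10):                                     # generations
    scores = np.array([fitness(a) for a in population])
    parents = population[np.argsort(scores)[-4:]]       # keep the fittest half
    children = parents + rng.normal(0, 0.3, size=4)     # mutate
    population = np.concatenate([parents, np.clip(children, 1e-3, None)])

best = population[np.argmax([fitness(a) for a in population])]
print("best alpha found:", best)
```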


BERT for Individual: Tutorial+Baseline

#artificialintelligence

So if you're like me, just starting out in NLP after spending a few months building computer vision models as a beginner, then this story surely has something in store for you. BERT is a deep learning model that has delivered state-of-the-art results on a wide variety of natural language processing tasks. It stands for Bidirectional Encoder Representations from Transformers. It has been pre-trained on Wikipedia and BooksCorpus and requires (only) task-specific fine-tuning. It has caused a stir in the machine learning community by presenting state-of-the-art results in a wide variety of NLP tasks, including Question Answering (SQuAD v1.1), Natural Language Inference (MNLI), and others.
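
As a minimal fine-tuning sketch, here is one illustrative gradient step on a toy pair of labelled sentences using the Hugging Face transformers library; the texts, labels, and learning rate are placeholder assumptions, not from the tutorial itself.

```python
# One fine-tuning step of BERT for sentence classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)

texts = ["the movie was wonderful", "the movie was terrible"]   # toy data
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)   # loss is computed internally
outputs.loss.backward()
optimizer.step()
print("loss after one step:", outputs.loss.item())
```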


ICLEA: Interactive Contrastive Learning for Self-supervised Entity Alignment

arXiv.org Artificial Intelligence

Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments. The current SOTA self-supervised EA method draws inspiration from contrastive learning, originally designed in computer vision around instance discrimination and a contrastive loss, and suffers from two shortcomings. Firstly, it puts unidirectional emphasis on pushing sampled negative entities far away rather than pulling positively aligned pairs close, as is done in well-established supervised EA. Secondly, KGs contain rich side information (e.g., entity descriptions), and how to effectively leverage that information has not been adequately investigated in self-supervised EA. In this paper, we propose an interactive contrastive learning model for self-supervised EA. The model not only encodes the structures and semantics of entities (including entity name, entity description, and entity neighborhood), but also conducts cross-KG contrastive learning by building pseudo-aligned entity pairs. Experimental results show that our approach outperforms previous best self-supervised results by a large margin (over 9% average improvement) and performs on par with previous SOTA supervised counterparts, demonstrating the effectiveness of interactive contrastive learning for self-supervised EA.
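
A sketch of a bidirectional contrastive objective in the spirit described above, pulling pseudo-aligned entity pairs together while pushing in-batch negatives apart; this is a generic InfoNCE-style illustration, not the authors' implementation.

```python
# Symmetric (bidirectional) contrastive loss over paired entity embeddings.
import torch
import torch.nn.functional as F

def bidirectional_info_nce(emb_a: torch.Tensor, emb_b: torch.Tensor, tau: float = 0.1):
    """emb_a[i] and emb_b[i] are embeddings of a (pseudo-)aligned entity pair
    from two KGs; all other rows in the batch serve as negatives."""
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.t() / tau                       # pairwise similarities
    targets = torch.arange(a.size(0))
    # Align in both directions: KG1 -> KG2 and KG2 -> KG1.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

emb_a, emb_b = torch.randn(16, 128), torch.randn(16, 128)   # toy embeddings
print(bidirectional_info_nce(emb_a, emb_b))
```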


Machine Learning Model Development and Model Operations: Principles and Practices - KDnuggets

#artificialintelligence

The use of Machine Learning (ML) has increased substantially in enterprise data analytics scenarios to extract valuable insights from business data. Hence, it is very important to have an ecosystem to build, test, deploy, and maintain enterprise-grade machine learning models in production environments. ML model development involves acquiring data from multiple trusted sources, processing the data to make it suitable for model building, choosing an algorithm, building the model, computing performance metrics, and selecting the best-performing model. Model maintenance plays a critical role once the model is deployed into production. Maintaining a machine learning model means keeping it up to date and relevant as the source data changes, since there is a risk of the model becoming outdated over time.
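
A hedged sketch of the development loop described above: train several candidate models, compute a performance metric, and keep the best one for deployment. The candidate models and metric are arbitrary choices for illustration.

```python
# Train-evaluate-select loop for picking a model to deploy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = f1_score(y_test, model.predict(X_test))

best_name = max(scores, key=scores.get)
print("selected model:", best_name, "F1 =", round(scores[best_name], 3))
# In production, the selected model would then be versioned, deployed, and
# monitored for drift, triggering retraining when performance drops.
```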


Clustering Text Using Attention

arXiv.org Artificial Intelligence

There are various situations where the need is to group similar texts into the same buckets. We do not have enough previous experience or knowledge to run a classification algorithm on top of the available data. Clustering is the fundamental and intuitive solution to such problems. In simple terms, the attention mechanism can be thought of as an additional layer somewhere in a network architecture which gives the deep learning model extra controlling parameters to refine its learning by paying attention to different parts of the input.
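
An illustrative sketch (not the paper's method) of how the two pieces can fit together: an attention-pooling layer turns per-token embeddings into one vector per text, and those vectors are then clustered with k-means. The embeddings here are random stand-ins.

```python
# Attention pooling over token embeddings, followed by k-means clustering.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class AttentionPooling(nn.Module):
    """Learnable attention weights over tokens; output is their weighted sum."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, token_embs: torch.Tensor) -> torch.Tensor:
        # token_embs: (batch, tokens, dim)
        weights = torch.softmax(self.score(token_embs), dim=1)  # attention over tokens
        return (weights * token_embs).sum(dim=1)                # (batch, dim)

pool = AttentionPooling(dim=64)
token_embs = torch.randn(100, 12, 64)          # 100 texts, 12 tokens each (toy data)
with torch.no_grad():
    text_vectors = pool(token_embs).numpy()

labels = KMeans(n_clusters=5, n_init=10).fit_predict(text_vectors)
print(labels[:10])
```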


RabindraNet, Creating Literary Works in the Style of Rabindranath Tagore

arXiv.org Artificial Intelligence

Bengali literature has a rich history of hundreds of years, with luminary figures such as Rabindranath Tagore and Kazi Nazrul Islam. However, analytical works involving the most recent advancements in NLP have barely scratched the surface of the enormous volume of collected works from the writers of the language. In order to bring attention to analytical study of the works of Bengali writers and to spearhead text generation efforts in the style of existing literature, we introduce RabindraNet, a character-level RNN model with stacked LSTM layers trained on the works of Rabindranath Tagore to produce literary works in his style across multiple genres. We also created an extensive dataset by compiling the digitized works of Rabindranath Tagore from authentic online sources and published it as an open-source dataset on the data science platform Kaggle.
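
An architecture-only sketch of a character-level language model with stacked LSTM layers; the layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Character-level stacked-LSTM language model (next-character prediction).
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128,
                 hidden_dim: int = 256, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)    # logits over characters

    def forward(self, char_ids, state=None):
        x = self.embed(char_ids)                        # (batch, seq, embed_dim)
        x, state = self.lstm(x, state)
        return self.out(x), state

model = CharRNN(vocab_size=100)
logits, _ = model(torch.randint(0, 100, (4, 50)))       # toy batch of 4 sequences
print(logits.shape)                                      # torch.Size([4, 50, 100])
```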


Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding

arXiv.org Artificial Intelligence

Domain classification is a fundamental task in natural language understanding (NLU), which often requires fast accommodation of newly emerging domains. This constraint makes it impossible to retrain on all previous domains, even if they are accessible to the new model. Most existing continual learning approaches suffer from low accuracy and performance fluctuation, especially when the distributions of old and new data are significantly different. In fact, the key real-world problem is not the absence of old data, but the inefficiency of retraining the model on the whole old dataset. Is it possible to utilize some old data to yield high accuracy and maintain stable performance, while at the same time introducing no extra hyperparameters? In this paper, we propose a hyperparameter-free continual learning model for text data that can stably produce high performance under various environments. Specifically, we utilize Fisher information to select exemplars that can "record" key information of the original model. Also, a novel scheme called dynamical weight consolidation is proposed to enable hyperparameter-free learning during the retraining process. Extensive experiments demonstrate that baselines suffer from fluctuating performance and are therefore of limited use in practice. In contrast, our proposed model CCFI significantly and consistently outperforms the best state-of-the-art method by up to 20% in average accuracy, and each component of CCFI contributes effectively to overall performance.
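
A simplified sketch of the exemplar-selection idea, read loosely from the abstract rather than from the CCFI implementation: score each old sample by the squared gradient norm it induces (its contribution to the diagonal Fisher information of the trained model) and keep the highest-scoring samples as exemplars for retraining.

```python
# Fisher-information-style exemplar selection (simplified illustration).
import torch
import torch.nn as nn

model = nn.Linear(20, 4)                 # stand-in for the trained domain classifier
loss_fn = nn.CrossEntropyLoss()
X = torch.randn(500, 20)                 # toy "old domain" data
y = torch.randint(0, 4, (500,))

scores = []
for i in range(len(X)):
    model.zero_grad()
    loss_fn(model(X[i:i + 1]), y[i:i + 1]).backward()
    # Per-sample contribution to the diagonal Fisher information.
    fisher = sum((p.grad ** 2).sum() for p in model.parameters())
    scores.append(fisher.item())

top_k = torch.tensor(scores).topk(k=50).indices   # exemplars to keep in memory
exemplars = (X[top_k], y[top_k])
print("kept", len(top_k), "exemplars")
```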