Tuli, Shikhar (Department of Electrical and Computer Engineering, Princeton University) | Dedhia, Bhishma | Tuli, Shreshth (Department of Computing, Imperial College London) | Jha, Niraj K. (Department of Electrical and Computer Engineering, Princeton University)
The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformer-based models (e.g., BERT) or their variants. However, training such models and exploring their hyperparameter space is computationally expensive. Prior work proposes several neural architecture search (NAS) methods that employ performance predictors (e.g., surrogate models) to address this issue; however, such works limit analysis to homogeneous models that use fixed dimensionality throughout the network. This leads to sub-optimal architectures. To address this limitation, we propose a suite of heterogeneous and flexible models, namely FlexiBERT, that have varied encoder layers with a diverse set of possible operations and different hidden dimensions. For better-posed surrogate modeling in this expanded design space, we propose a new graph-similarity-based embedding scheme. We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization, to quickly train and use a neural surrogate model to converge to the optimal architecture. A comprehensive set of experiments shows that the proposed policy, when applied to the FlexiBERT design space, pushes the performance frontier upwards compared to traditional models. FlexiBERT-Mini, one of our proposed models, has 3% fewer parameters than BERT-Mini and achieves 8.9% higher GLUE score. A FlexiBERT model with equivalent performance as the best homogeneous model has 2.6× smaller size. FlexiBERT-Large, another proposed model, attains state-of-the-art results, outperforming the baseline models by at least 5.7% on the GLUE benchmark.
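The surrogate-driven search loop described in the abstract can be illustrated with a heavily simplified sketch. This is not the BOSHNAS algorithm itself: here `evaluate` is a toy stand-in for fine-tuning an architecture and scoring it, architectures are plain embedding vectors, and the surrogate is a cheap distance-weighted estimator with an exploration bonus rather than a trained neural model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for fine-tuning + benchmarking an architecture.
def evaluate(arch):
    return -float(np.sum((arch - 0.7) ** 2))   # toy objective, optimum at 0.7

candidates = rng.random((200, 4))   # sampled design space (architecture embeddings)
chosen = list(range(5))             # seed with a few random evaluations
scores = [evaluate(candidates[i]) for i in chosen]

for _ in range(20):
    X = candidates[chosen]
    y = np.array(scores)
    # Cheap surrogate: distance-weighted mean of observed scores, plus an
    # exploration bonus for points far from every observation (UCB-style).
    d = np.linalg.norm(candidates[:, None] - X[None], axis=2) + 1e-6
    w = 1.0 / d
    mean = (w * y).sum(axis=1) / w.sum(axis=1)
    bonus = d.min(axis=1)
    acq = mean + 0.5 * bonus
    acq[chosen] = -np.inf           # never re-evaluate the same point
    nxt = int(np.argmax(acq))
    chosen.append(nxt)
    scores.append(evaluate(candidates[nxt]))

print("best architecture score found:", round(max(scores), 4))
```

The loop alternates between fitting the surrogate on all evaluations so far and querying the architecture that maximizes the acquisition value, which is the general pattern any predictor-based NAS policy follows.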
Lorenz, Robin (Cambridge Quantum) | Pearson, Anna (Quantinuum) | Meichanetzidis, Konstantinos (Quantinuum) | Kartsaklis, Dimitri (Quantinuum) | Coecke, Bob (Quantinuum)
Quantum Natural Language Processing (QNLP) deals with the design and implementation of NLP models intended to be run on quantum hardware. In this paper, we present results on the first NLP experiments conducted on Noisy Intermediate-Scale Quantum (NISQ) computers for datasets of size greater than 100 sentences. Exploiting the formal similarity of the compositional model of meaning by Coecke, Sadrzadeh, and Clark (2010) with quantum theory, we create representations for sentences that have a natural mapping to quantum circuits. We use these representations to implement and successfully train NLP models that solve simple sentence classification tasks on quantum hardware. We conduct quantum simulations that compare the syntax-sensitive model of Coecke et al. with two baselines that use less or no syntax; specifically, we implement the quantum analogues of a "bag-of-words" model, where syntax is not taken into account at all, and of a word-sequence model, where only word order is respected. We demonstrate that all models converge smoothly both in simulations and when run on quantum hardware, and that the results are the expected ones based on the nature of the tasks and the datasets used. Another important goal of this paper is to describe in a way accessible to AI and NLP researchers the main principles, process and challenges of experiments on quantum hardware. Our aim in doing this is to take the first small steps in this unexplored research territory and pave the way for practical Quantum Natural Language Processing.
Sentiment analysis is an important tool to automatically understand the user-generated content on the Web. The most fine-grained sentiment analysis is concerned with the extraction and sentiment classification of aspects and has been extensively studied in recent years. In this work, we provide an overview of the first step in aspect-based sentiment analysis that assumes the extraction of opinion targets or aspects. We define a taxonomy for the extraction of aspects and present the most relevant works accordingly, with a focus on the most recent state-of-the-art methods. The three main classes we use to classify the methods designed for the detection of aspects are pattern-based, machine learning, and deep learning methods.
Hey, have you heard the news? Generative AI is taking over the world and is here to stay! "According to Tractica, the market for AI software, hardware and services is expected to skyrocket from $644 million in 2016 to a whopping $37.3 billion by 2025." The possibilities for AI are endless, and as the technology continues to evolve and take new shapes, it can be challenging for businesses to keep up. In this guide, we will break down one exciting application of AI, generative AI, with use cases and future projections to help you determine whether to implement it in your business. So, whether you're a tech enthusiast or simply curious about the future of AI, we're here to keep you up to date on the latest trends and technologies to make your business successful in the digital age. Generative AI refers to a class of artificial intelligence algorithms designed to generate new content, whether images, text, music, or some other type of media. These algorithms learn patterns and structures from existing data and then use that knowledge to generate new, previously unseen examples similar in style or content to the original data. They have been used for many applications, including creating realistic images, writing poetry and fiction, composing music, and generating code.
Artificial Intelligence (AI) has come a long way in recent years, with rapid advancements in machine learning, natural language processing, and deep learning algorithms. These technologies have led to the development of powerful generative AI systems such as ChatGPT, Midjourney, and Dall-E, which have transformed industries and impacted our daily lives. However, alongside this progress, concerns over the potential risks and unintended consequences of AI systems have been growing. In response, the concept of AI capability control has emerged as a crucial aspect of AI development and deployment. In this blog, we will explore what AI capability control is, why it matters, and how organizations can implement it to ensure AI operates safely, ethically, and responsibly.
We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage. Sentences that share semantic and syntactic properties are thus mapped to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking, question-type classification and 4 benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf encoder that can produce highly generic sentence representations that are robust and perform well in practice.
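The two ideas in this abstract, training on surrounding sentences and expanding the vocabulary with a linear map between embedding spaces, can be sketched in miniature. The corpus, embeddings, and exact linear relation below are toy assumptions for illustration, not the paper's actual data or trained models.

```python
import numpy as np

# Toy "book" corpus: sentence order supplies the training signal.
sentences = [
    "the knight rode into the forest",
    "the trees grew thick and dark",
    "he lost sight of the path",
    "night fell quickly around him",
]

# Skip-thought-style training triples: encode the middle sentence,
# train decoders to reconstruct its two neighbours.
triples = [
    (sentences[i - 1], sentences[i], sentences[i + 1])
    for i in range(1, len(sentences) - 1)
]

# Vocabulary expansion sketch: fit a linear map W from a large pretrained
# word space to the encoder's word space using the shared vocabulary,
# then project words the encoder never saw through W.
rng = np.random.default_rng(1)
pretrained = rng.random((50, 8))                  # e.g. word2vec-like vectors
encoder_space = pretrained @ rng.random((8, 6))   # toy: exactly linearly related
W, *_ = np.linalg.lstsq(pretrained, encoder_space, rcond=None)
unseen_word = rng.random(8)                       # word outside training vocab
expanded = unseen_word @ W                        # usable encoder-space vector
```

In the toy setup the two spaces are exactly linearly related, so the least-squares map recovers the relation; with real embeddings the fit is approximate but the mechanism is the same.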
Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems. In this paper we explore the representational power of these models using synthetic grammars designed to exhibit phenomena similar to those found in real transduction problems such as machine translation. These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs and are often able to learn the underlying generating algorithms in our transduction experiments.
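The "continuously differentiable analogue" of a stack can be made concrete with a small sketch: each stored vector carries a scalar strength, and push/pop amounts are real-valued rather than discrete. This is a simplified illustration in the spirit of the abstract, not the paper's exact controller-coupled formulation.

```python
import numpy as np

class ContinuousStack:
    """Stack whose push/pop operations are smooth: entries keep scalar
    strengths in [0, 1] instead of being discretely present or absent."""

    def __init__(self):
        self.values = []      # stored vectors
        self.strengths = []   # matching scalar strengths

    def step(self, value, push, pop):
        # Pop: consume strength from the top of the stack until the
        # real-valued `pop` amount is used up.
        remaining = pop
        for i in reversed(range(len(self.strengths))):
            removed = min(self.strengths[i], remaining)
            self.strengths[i] -= removed
            remaining -= removed
        # Push the new value with strength `push`.
        self.values.append(np.asarray(value, dtype=float))
        self.strengths.append(push)

    def read(self):
        # Read the topmost "unit" of strength as a weighted sum.
        out, budget = 0.0, 1.0
        for v, s in zip(reversed(self.values), reversed(self.strengths)):
            w = min(s, budget)
            out = out + w * v
            budget -= w
            if budget <= 0.0:
                break
        return out

stack = ContinuousStack()
stack.step([1.0, 0.0], push=1.0, pop=0.0)   # push a
stack.step([0.0, 1.0], push=1.0, pop=0.0)   # push b; read() now returns b
stack.step([0.0, 0.0], push=0.0, pop=1.0)   # pop b; read() returns a again
```

Because every operation is built from `min` and subtraction of real-valued strengths, the whole structure admits gradients almost everywhere, which is what lets a recurrent controller learn to use it.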
This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.
We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
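The core mechanism, a "hop" of attention over external memory, can be sketched in a few lines. The random memories and query below are placeholders; a real end-to-end memory network learns the input and output memory embeddings jointly with the controller.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hop(query, mem_in, mem_out):
    """One attention hop over external memory: match the query against the
    input memories, read a weighted sum of the output memories, and add it
    to the query to form the state for the next hop."""
    p = softmax(mem_in @ query)     # attention weights over memory slots
    o = p @ mem_out                 # soft read from memory
    return query + o                # updated controller state

rng = np.random.default_rng(0)
mem_in = rng.standard_normal((5, 4))   # input (addressing) embeddings
mem_out = rng.standard_normal((5, 4))  # output (content) embeddings
u = rng.standard_normal(4)             # initial query/controller state
for _ in range(3):                     # multiple hops per output symbol
    u = memory_hop(u, mem_in, mem_out)
```

Stacking several such hops is exactly the "multiple computational steps per output symbol" the abstract highlights: each hop lets the model refine what it retrieves based on what it has already read.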
This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.