Christofidellis, Dimitrios
Z-BERT-A: A Zero-Shot Pipeline for Unknown Intent Detection
Comi, Daniele, Christofidellis, Dimitrios, Piazza, Pier Francesco, Manica, Matteo
Intent discovery is a crucial task in natural language processing, and it is increasingly relevant for various industrial applications. Identifying novel, unseen intents from user inputs remains one of the biggest challenges in this field. Herein, we propose Zero-Shot-BERT-Adapters, a two-stage method for multilingual intent discovery relying on a Transformer architecture fine-tuned with Adapters. We train the model for Natural Language Inference (NLI) and later perform unknown intent classification in a zero-shot setting for multiple languages. In our evaluation, we first analyze the quality of the model after adaptive fine-tuning on known classes. Second, we evaluate its performance when casting intent classification as an NLI task. Lastly, we test the zero-shot performance of the model on unseen classes, showing how Zero-Shot-BERT-Adapters can effectively perform intent discovery by generating intents that are semantically similar, if not identical, to the ground-truth ones. Our experiments show that Zero-Shot-BERT-Adapters outperforms various baselines in two zero-shot settings: known intent classification and unseen intent discovery. The proposed pipeline holds the potential for broad application in customer care. It enables automated dynamic triage using a lightweight model that can be easily deployed and scaled in various business scenarios, unlike large language models. Zero-Shot-BERT-Adapters represents an innovative multi-language approach for intent discovery, enabling the online generation of novel intents. A Python package implementing the pipeline and the new datasets we compiled are available at the following link: https://github.com/GT4SD/zero-shot-bert-adapters.
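To illustrate the core idea of casting intent classification as NLI in a zero-shot setting, the following is a minimal sketch using the Hugging Face zero-shot-classification pipeline. It is not the adapter-tuned Zero-Shot-BERT-Adapters model from the paper; the checkpoint, utterance, and candidate intents are illustrative stand-ins.

```python
# Minimal sketch: zero-shot intent classification via an NLI-trained model.
# "facebook/bart-large-mnli" is a common public NLI checkpoint, used here only
# as a stand-in for the adapter-fine-tuned model described in the paper.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

utterance = "I need to change the delivery address of my order."
candidate_intents = ["change shipping address", "cancel order", "track package"]

result = classifier(utterance, candidate_labels=candidate_intents)
# The pipeline scores each candidate intent as an NLI hypothesis against the
# utterance; the highest-scoring label is the predicted intent.
print(result["labels"][0], result["scores"][0])
```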
Unifying Molecular and Textual Representations via Multi-task Language Modelling
Christofidellis, Dimitrios, Giannone, Giorgio, Born, Jannis, Winther, Ole, Laino, Teodoro, Manica, Matteo
The recent advances in neural language models have also been successfully applied to the field of chemistry, offering generative solutions for classical problems in molecular design and synthesis planning. These new methods have the potential to fuel a new era of data-driven automation in scientific discovery. However, specialized models are still typically required for each task, leading to the need for problem-specific fine-tuning and neglecting task interrelations. The main obstacle in this field is the lack of a unified representation bridging natural language and chemical representations, complicating and limiting human-machine interaction. Here, we propose the first multi-domain, multi-task language model that can solve a wide range of tasks in both the chemical and natural language domains. Our model can handle chemical and natural language concurrently, without requiring expensive pre-training on single domains or task-specific models. Interestingly, sharing weights across domains remarkably improves our model when benchmarked against state-of-the-art baselines on single-domain and cross-domain tasks. In particular, sharing information across domains and tasks gives rise to large improvements in cross-domain tasks, the magnitude of which increases with scale, as measured by more than a dozen relevant metrics. Our work suggests that such models can robustly and efficiently accelerate discovery in physical sciences by superseding problem-specific fine-tuning and enhancing human-model interactions.
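A minimal sketch of how such a multi-domain, multi-task model could be prompted for both chemistry and natural-language tasks is given below. The checkpoint path is a placeholder, not a verified release, and the prompt phrasing is assumed rather than taken from the paper; substitute the authors' published model and prompt formats where available.

```python
# Sketch: a single seq2seq model handling chemistry-to-text and text-to-chemistry
# prompts. The model path is a placeholder; replace with the released checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "path/to/multitask-text-and-chemistry-model"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Cross-domain prompts: a molecule-captioning request and a text-to-SMILES request.
prompts = [
    "Caption the following molecule: CC(=O)OC1=CC=CC=C1C(=O)O",
    "Write in SMILES the molecule described as: an acetylated salicylate used as an analgesic",
]
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```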
Accelerating Material Design with the Generative Toolkit for Scientific Discovery
Manica, Matteo, Born, Jannis, Cadow, Joris, Christofidellis, Dimitrios, Dave, Ashish, Clarke, Dean, Teukam, Yves Gaetan Nana, Giannone, Giorgio, Hoffman, Samuel C., Buchan, Matthew, Chenthamarakshan, Vijil, Donovan, Timothy, Hsu, Hsiang Han, Zipoli, Federico, Schilter, Oliver, Kishimoto, Akihiro, Hamada, Lisa, Padhi, Inkit, Wehden, Karl, McHugh, Lauren, Khrabrov, Alexy, Das, Payel, Takeda, Seiji, Smith, John R.
The rapid technological progress of the last centuries has been largely fueled by the success of the scientific method. However, in some of the most important fields, such as material or drug discovery, productivity has been decreasing dramatically (Smietana et al., 2016): today it can take almost a decade to discover a new material, at a cost upwards of $10-$100 million. One of the most daunting challenges in materials discovery is hypothesis generation. The reservoir of natural products and their derivatives has been largely emptied (Atanasov et al., 2021), and bottom-up, human-driven hypothesis generation has proven extremely challenging for identifying and selecting novel, useful candidates in search spaces of overwhelming size, e.g., the chemical space for drug-like molecules is estimated to contain > 10