Picco, Gabriele
Description Boosting for Zero-Shot Entity and Relation Classification
Picco, Gabriele, Fuchs, Leopold, Galindo, Marcos Martínez, Purpura, Alberto, López, Vanessa, Lam, Hoang Thanh
Named Entity Recognition (NER) and Relation Extraction (RE) allow for the extraction and categorization of structured data from unstructured text, which in turn enables not only more accurate entity recognition and relationship extraction, but also getting data from several unstructured sources, helping to build knowledge graphs and the semantic web. However, these methods usually rely on labeled (usually human-annotated) data for good performance, typically requiring domain experts for data acquisition and labeling. For entity recognition - including classification and linking - and relation classification problems, recent ZSL methods (Aly et al., 2021; Ledell Wu, 2020; Chen and Li, 2021) rely on textual descriptions of entities or relations. Descriptions provide the required information about the semantics of entities (or relations), which helps the models to identify entity mentions in texts without observing them during training. Works such as (Ledell Wu, 2020; De Cao et al., 2021) and (Aly et al., 2021) show how effective it is to use textual descriptions to perform entity recognition tasks in the zero-shot context.
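As a minimal sketch of the description-driven idea behind this line of work, the snippet below classifies a mention's context against a set of textual class descriptions by embedding similarity. This illustrates the general technique only, not the boosting method proposed in the paper; the encoder name, labels and descriptions are illustrative assumptions.

```python
# Minimal sketch of description-based zero-shot classification: score a
# mention's context against textual class descriptions and pick the closest.
# Illustrative only; not the description-boosting method from the paper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

descriptions = {
    "company": "A commercial organization that sells goods or services.",
    "fruit": "An edible, usually sweet product of a plant.",
}

def classify_mention(context: str) -> str:
    """Return the label whose description is most similar to the context."""
    labels = list(descriptions)
    ctx_emb = model.encode(context, convert_to_tensor=True)
    desc_emb = model.encode([descriptions[l] for l in labels], convert_to_tensor=True)
    scores = util.cos_sim(ctx_emb, desc_emb)[0]
    return labels[int(scores.argmax())]

print(classify_mention("Apple reported record quarterly revenue."))
```

Because no labeled examples of "company" or "fruit" are ever seen, everything the model knows about a class comes from its description, which is why the quality and wording of descriptions matters so much in this setting.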
Otter-Knowledge: benchmarks of multimodal knowledge graph representation learning from different sources for drug discovery
Lam, Hoang Thanh, Sbodio, Marco Luca, Galindo, Marcos Martínez, Zayats, Mykhaylo, Fernández-Díaz, Raúl, Valls, Víctor, Picco, Gabriele, Ramis, Cesar Berrospi, López, Vanessa
Recent research on predicting the binding affinity between drug molecules and proteins uses representations learned, through unsupervised learning techniques, from large databases of molecule SMILES and protein sequences. While these representations have significantly enhanced the predictions, they are usually based on a limited set of modalities, and they do not exploit available knowledge about existing relations among molecules and proteins. In this study, we demonstrate that by incorporating knowledge graphs from diverse sources and modalities into the sequence or SMILES representations, we can further enrich the representation and achieve state-of-the-art results for drug-target binding affinity prediction on the established Therapeutic Data Commons (TDC) benchmarks. We release a set of multimodal knowledge graphs, integrating data from seven public data sources and containing over 30 million triples. Our intention is to foster additional research to explore how multimodal knowledge-enhanced protein/molecule embeddings can improve prediction tasks, including prediction of binding affinity. We also release some pretrained models learned from our multimodal knowledge graphs, along with source code for running standard benchmark tasks for prediction of binding affinity.
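To make the fusion idea concrete, here is a minimal sketch in which pretrained drug (SMILES) and protein (sequence) embeddings are concatenated with KG-derived embeddings and passed to a small regression head that predicts a binding-affinity score. All dimensions, names and the architecture are illustrative assumptions, not the released Otter-Knowledge models.

```python
# Sketch of the fusion idea: concatenate modality embeddings (sequence/SMILES
# plus knowledge-graph embeddings) and regress binding affinity. Dimensions
# and architecture are invented for illustration, not the released models.
import torch
import torch.nn as nn

class AffinityHead(nn.Module):
    def __init__(self, drug_dim=512, prot_dim=1024, kg_dim=256, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(drug_dim + prot_dim + 2 * kg_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar binding-affinity prediction
        )

    def forward(self, drug_emb, prot_emb, drug_kg, prot_kg):
        x = torch.cat([drug_emb, prot_emb, drug_kg, prot_kg], dim=-1)
        return self.mlp(x).squeeze(-1)

head = AffinityHead()
pred = head(torch.randn(8, 512), torch.randn(8, 1024),
            torch.randn(8, 256), torch.randn(8, 256))
print(pred.shape)  # torch.Size([8])
```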
Zshot: An Open-source Framework for Zero-Shot Named Entity Recognition and Relation Extraction
Picco, Gabriele, Galindo, Marcos Martínez, Purpura, Alberto, Fuchs, Leopold, López, Vanessa, Lam, Hoang Thanh
The Zero-Shot Learning (ZSL) task pertains to the identification of entities or relations in texts that were not seen during training. ZSL has emerged as a critical research area due to the scarcity of labeled data in specific domains, and its applications have grown significantly in recent years. With the advent of large pretrained language models, several novel methods have been proposed, resulting in substantial improvements in ZSL performance. There is a growing demand, both in the research community and industry, for a comprehensive ZSL framework that facilitates the development and accessibility of the latest methods and pretrained models. In this study, we propose a novel ZSL framework called Zshot that aims to address the aforementioned challenges. Our primary objective is to provide a platform that allows researchers to compare different state-of-the-art ZSL methods with standard benchmark datasets. Additionally, we have designed our framework to support the industry with readily available APIs for production under the standard SpaCy NLP pipeline. Our API is extensible and evaluable; moreover, we include numerous enhancements, such as boosting accuracy with pipeline ensembling and visualization utilities available as a SpaCy extension.
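A usage sketch in the style of the project's README (https://github.com/IBM/zshot) is shown below: Zshot is added as a component of a standard SpaCy pipeline and configured with description-based entities. The module paths and class names used here (PipelineConfig, LinkerRegen, Entity) are assumptions based on the README at the time of writing and may differ between releases.

```python
# Usage sketch for Zshot as a SpaCy pipeline component, following the style of
# the project's README. Exact class names and paths may vary across versions.
import spacy
from zshot import PipelineConfig
from zshot.linker import LinkerRegen
from zshot.utils.data_models import Entity

nlp = spacy.load("en_core_web_sm")
config = PipelineConfig(
    linker=LinkerRegen(),  # a description-based zero-shot linker
    entities=[
        Entity(name="company", description="A commercial organization that sells goods or services"),
        Entity(name="fruit", description="An edible, usually sweet product of a plant"),
    ],
)
nlp.add_pipe("zshot", config=config, last=True)

doc = nlp("Apple reported record quarterly revenue.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

Because the component lives inside the regular SpaCy pipeline, downstream SpaCy tooling (visualization, serialization, other components) keeps working unchanged.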
Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models
Foley, Myles, Rawat, Ambrish, Lee, Taesung, Hou, Yufang, Picco, Gabriele, Zizzo, Giulio
The wide applicability and adaptability of generative large language models (LLMs) have enabled their rapid adoption. While the pre-trained models can perform many tasks, such models are often fine-tuned to improve their performance on various downstream applications. However, this leads to issues over violation of model licenses, model theft, and copyright infringement. Moreover, recent advances show that generative technology is capable of producing harmful content, which exacerbates the problems of accountability within model supply chains. Thus, we need a method to investigate how a model was trained or a piece of text was generated, and what its pre-trained base model was. In this paper we take the first step to address this open problem by tracing back the origin of a given fine-tuned LLM to its corresponding pre-trained base model. We consider different knowledge levels and attribution strategies, and find that we can correctly trace back 8 out of the 10 fine-tuned models with our best method.
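One simple attribution strategy, offered here as an illustrative assumption rather than the paper's exact method, is to score text produced by the fine-tuned model under each candidate base model and attribute it to the candidate that finds the text least surprising:

```python
# Illustrative perplexity-based attribution: attribute generated text to the
# candidate base model assigning it the lowest perplexity. This is a sketch of
# one plausible strategy, not the paper's specific attribution method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name: str, text: str) -> float:
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return float(torch.exp(loss))

def attribute(candidates: list[str], generated_text: str) -> str:
    """Pick the candidate base model with the lowest perplexity on the text."""
    return min(candidates, key=lambda name: perplexity(name, generated_text))

# Hypothetical candidate pool; any causal LMs on the Hugging Face Hub work.
print(attribute(["gpt2", "distilgpt2"], "Sample output from a fine-tuned model."))
```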
Envisioning a Human-AI collaborative system to transform policies into decision models
Lopez, Vanessa, Picco, Gabriele, Vejsbjerg, Inge, Hoang, Thanh Lam, Hou, Yufang, Sbodio, Marco Luca, Segrave-Daly, John, Moga, Denisa, Swords, Sean, Wei, Miao, Carroll, Eoin
Regulations govern many aspects of citizens' daily lives. Governments and businesses routinely automate these in the form of coded rules (e.g., to check a citizen's eligibility for specific benefits). However, the path to automation is long and challenging. To address this, recent global initiatives for digital government, which propose to simultaneously express policy in natural language for human consumption as well as in computationally amenable rules or code, are gathering broad public-sector interest. We introduce the problem of semi-automatically building decision models from eligibility policies for social services, and present an initial emerging approach to shorten the route from policy documents to executable, interpretable and standardised decision models using AI, NLP and Knowledge Graphs. Despite the many open challenges in this domain, in this position paper we explore the enormous potential of AI to assist government agencies and policy experts in scaling the production of both human-readable and machine-executable policy rules, while improving the transparency, interpretability, traceability and accountability of decision making.
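As a toy illustration of what a machine-executable counterpart to a natural-language policy might look like, the snippet below encodes a hypothetical eligibility rule. The rule and its thresholds are invented for illustration and do not come from any real policy or from the paper.

```python
# Toy "machine-executable" eligibility rule of the kind such a system might
# derive from policy text. The criteria and thresholds are invented examples.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float
    resident: bool

def eligible_for_benefit(a: Applicant) -> bool:
    """Hypothetical rule: residents aged 65+ with income under 20,000."""
    return a.resident and a.age >= 65 and a.annual_income < 20_000

print(eligible_for_benefit(Applicant(age=70, annual_income=15_000, resident=True)))
```

Keeping the executable rule this close to the policy wording is what makes the decision model auditable: each predicate can be traced back to a sentence in the source document.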
Ensembling Graph Predictions for AMR Parsing
Lam, Hoang Thanh, Picco, Gabriele, Hou, Yufang, Lee, Young-Suk, Nguyen, Lam M., Phan, Dzung T., López, Vanessa, Astudillo, Ramon Fernandez
In many machine learning tasks, models are trained to predict structured data such as graphs. For example, in natural language processing it is very common to parse texts into dependency trees or abstract meaning representation (AMR) graphs. Ensemble methods, on the other hand, combine predictions from multiple models to create a new one that is more robust and accurate than the individual predictions. Many ensembling techniques have been proposed in the literature for classification or regression problems; ensemble graph prediction, however, has not been studied thoroughly. In this work, we formalize this problem as mining the largest graph that is most supported by a collection of graph predictions. As the problem is NP-hard, we propose an efficient heuristic algorithm to approximate the optimal solution. To validate our approach, we carried out experiments on AMR parsing problems. The experimental results demonstrate that the proposed approach can combine the strengths of state-of-the-art AMR parsers to create new predictions that are more accurate than those of any individual model on five standard benchmark datasets.
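A much-simplified sketch of the underlying intuition is shown below: given several candidate graphs, keep the labeled edges that a majority of the predictions agree on. The paper's actual algorithm mines the largest most-supported graph; this majority-vote heuristic is only an illustrative simplification.

```python
# Simplified ensembling heuristic in the spirit of the paper: keep the labeled
# edges that at least a majority of candidate graphs agree on. The paper's
# algorithm solves a harder problem; this is only a sketch of the intuition.
from collections import Counter

def ensemble_graphs(graphs, min_support=None):
    """Each graph is a set of labeled edges (head, relation, tail)."""
    if min_support is None:
        min_support = len(graphs) // 2 + 1  # strict majority
    votes = Counter(edge for g in graphs for edge in g)
    return {edge for edge, count in votes.items() if count >= min_support}

g1 = {("boy", "ARG0", "want"), ("want", "ARG1", "go")}
g2 = {("boy", "ARG0", "want"), ("want", "ARG1", "go"), ("go", "mode", "fast")}
g3 = {("boy", "ARG0", "want")}
# Keeps the two edges supported by at least 2 of the 3 parses.
print(ensemble_graphs([g1, g2, g3]))
```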