Semantic Web
LoRA: A Logical Reasoning Augmented Dataset for Visual Question Answering
The capacity to reason logically is a hallmark of human cognition. Humans excel at integrating multimodal information for logical reasoning, as exemplified by the challenging multimodal Visual Question Answering (VQA) task. VQA tasks and large vision-and-language models aim to tackle reasoning problems, but the accuracy, consistency, and integrity of the generated answers are hard to evaluate in the absence of a VQA dataset that offers formal, comprehensive, and systematic complex logical reasoning questions. To address this gap, we present LoRA, a novel Logical Reasoning Augmented VQA dataset that requires formal and complex description logic reasoning based on a food-and-kitchen knowledge base. Our main objective in creating LoRA is to enhance the complex and formal logical reasoning capabilities of VQA models, which are not adequately measured by existing VQA datasets. We devise strong and flexible programs to automatically generate 200,000 diverse description logic reasoning questions based on the SROIQ Description Logic, along with realistic kitchen scenes and ground-truth answers. We fine-tune the latest transformer-based VQA models and evaluate the zero-shot performance of state-of-the-art large vision-and-language models on LoRA. The results reveal that LoRA presents a unique challenge in logical reasoning, setting a systematic and comprehensive evaluation standard.
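As an illustration of the kind of program-driven question generation the abstract describes, here is a minimal sketch of template-based description-logic questions evaluated against a toy kitchen scene. The class names, the single role, and the question template are hypothetical placeholders, not material from the LoRA release.

```python
# Minimal sketch: generate an existential-restriction question over a toy scene
# and compute its ground-truth answer. Concepts, roles, and templates are
# illustrative assumptions, not the LoRA dataset's vocabulary.
import random

SCENE = {
    "objects": {
        "o1": {"Apple", "Fruit", "Food"},
        "o2": {"Knife", "Utensil"},
        "o3": {"Table", "Furniture"},
    },
    "roles": {("o1", "on", "o3"), ("o2", "on", "o3")},
}

def exists_concept_with_role(concept, role, filler, scene):
    """Evaluate an existential restriction: some x with concept(x), role(x, y), filler(y)."""
    return any(
        r == role and concept in scene["objects"][s] and filler in scene["objects"][o]
        for (s, r, o) in scene["roles"]
    )

def generate_question(scene):
    """Sample one existential-restriction question and its ground-truth answer."""
    concepts = sorted({c for cs in scene["objects"].values() for c in cs})
    concept, filler = random.sample(concepts, 2)
    question = f"Is there a {concept} that is on a {filler}?"
    answer = exists_concept_with_role(concept, "on", filler, scene)
    return question, answer

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        q, a = generate_question(SCENE)
        print(q, "->", "yes" if a else "no")
```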
Smart E-commerce Recommendations with Semantic AI
In e-commerce, web mining for page recommendations is widely used but often fails to meet user needs. To address this, we propose a novel solution combining semantic web mining with BP neural networks. We process user search logs to extract five key features: content priority, time spent, user feedback, recommendation semantics, and input deviation. These features are then fed into a BP neural network to classify and prioritize web pages, and the prioritized pages are recommended to users. Using book sales pages for testing, our results demonstrate that this solution can quickly and accurately identify the pages users need. Our approach ensures that recommendations are more relevant and tailored to individual preferences, enhancing the online shopping experience. By leveraging advanced semantic analysis and neural network techniques, we bridge the gap between user expectations and actual recommendations. This method not only improves accuracy but also speeds up the recommendation process, making it a valuable tool for e-commerce platforms aiming to boost user satisfaction and engagement. Additionally, our system's ability to handle large datasets and provide real-time recommendations makes it a scalable and efficient solution for modern e-commerce challenges.
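The pipeline described (five log-derived features scored by a backpropagation network) can be sketched as follows. The feature values, labels, and network size are illustrative assumptions; scikit-learn's MLP stands in for the paper's BP network.

```python
# Minimal sketch: five features per page (content priority, time spent, user
# feedback, recommendation semantics, input deviation) scored by a
# backpropagation-trained MLP. All data and the network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([
    [0.9, 0.8, 0.7, 0.9, 0.1],   # page the user engaged with heavily
    [0.2, 0.1, 0.3, 0.2, 0.8],   # page quickly abandoned
    [0.7, 0.6, 0.8, 0.7, 0.2],
    [0.3, 0.2, 0.1, 0.3, 0.9],
])
y = np.array([1, 0, 1, 0])        # 1 = recommend, 0 = do not recommend

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# Rank candidate book-sales pages by predicted recommendation probability.
candidates = np.array([[0.8, 0.7, 0.6, 0.8, 0.2],
                       [0.1, 0.2, 0.2, 0.1, 0.7]])
scores = model.predict_proba(candidates)[:, 1]
for page_id, score in sorted(enumerate(scores), key=lambda p: -p[1]):
    print(f"page {page_id}: score {score:.2f}")
```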
Semantic Interoperability on Blockchain by Generating Smart Contracts Based on Knowledge Graphs
Van Woensel, William, Seneviratne, Oshani
Background: Health 3.0 allows decision making to be based on longitudinal data from multiple institutions, from across the patient's healthcare journey. In such a distributed setting, blockchain smart contracts can act as neutral intermediaries to implement trustworthy decision making. Objective: In a distributed setting, transmitted data will be structured using standards (such as HL7 FHIR) for semantic interoperability. In turn, the smart contract will require interoperability with this standard, implement a complex communication setup (e.g., using oracles), and be developed using blockchain languages (e.g., Solidity). We propose the encoding of smart contract logic using a high-level semantic Knowledge Graph, using concepts from the domain standard. We then deploy this semantic KG on blockchain. Methods: Off-chain, a code generation pipeline compiles the KG into a concrete smart contract, which is then deployed on-chain. Our pipeline targets an intermediary bridge representation, which can be transpiled into a specific blockchain language. Our choice avoids on-chain rule engines, with unpredictable and likely higher computational cost; it is thus in line with the economic rules of blockchain. Results: We applied our code generation approach to generate smart contracts for 3 health insurance cases from Medicare. We discuss the suitability of our approach - the need for a neutral intermediary - for a number of healthcare use cases. Our evaluation finds that the generated contracts perform well in terms of correctness and execution cost ("gas") on blockchain. Conclusions: We showed that it is feasible to automatically generate smart contract code based on a semantic KG, in a way that respects the economic rules of blockchain. Future work includes studying the use of Large Language Models (LLM) in our approach, and evaluations on other blockchains.
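To make the off-chain code-generation idea concrete, here is a minimal sketch that compiles a rule encoded as plain triples into Solidity-like source text. The triple vocabulary, the rule shape, and the emitted contract are illustrative assumptions, not the paper's bridge representation or its FHIR-based concepts.

```python
# Minimal sketch: a toy knowledge-graph rule ("a claim is eligible if the
# patient's age is at least 65") compiled off-chain into Solidity-like code.
# Vocabulary and output shape are assumptions, not the authors' pipeline.
RULE_KG = [
    ("rule:eligibility", "applies-to", "fhir:Claim"),
    ("rule:eligibility", "condition-property", "patientAge"),
    ("rule:eligibility", "condition-operator", ">="),
    ("rule:eligibility", "condition-value", "65"),
]

def kg_to_solidity(triples):
    """Compile the triple-encoded rule into a minimal Solidity contract string."""
    props = {p: o for (_, p, o) in triples}
    prop, op, value = props["condition-property"], props["condition-operator"], props["condition-value"]
    return (
        "// auto-generated from the semantic knowledge graph\n"
        "pragma solidity ^0.8.0;\n\n"
        "contract ClaimEligibility {\n"
        f"    function isEligible(uint256 {prop}) public pure returns (bool) {{\n"
        f"        return {prop} {op} {value};\n"
        "    }\n"
        "}\n"
    )

if __name__ == "__main__":
    print(kg_to_solidity(RULE_KG))
```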
VHAKG: A Multi-modal Knowledge Graph Based on Synchronized Multi-view Videos of Daily Activities
Egami, Shusaku, Ugai, Takahiro, Htun, Swe Nwe Nwe, Fukuda, Ken
Multi-modal knowledge graphs (MMKGs), which ground various non-symbolic data (e.g., images and videos) into symbols, have attracted attention as resources enabling knowledge processing and machine learning across modalities. However, the construction of MMKGs for videos consisting of multiple events, such as daily activities, is still in the early stages. In this paper, we construct an MMKG based on synchronized multi-view simulated videos of daily activities. Besides representing the content of daily life videos as event-centric knowledge, our MMKG also includes frame-by-frame fine-grained changes, such as bounding boxes within video frames. In addition, we provide support tools for querying our MMKG. As an application example, we demonstrate that our MMKG facilitates benchmarking vision-language models by providing the necessary vision-language datasets for a tailored task.
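A minimal sketch of the event-centric, frame-level layout the abstract describes is shown below using rdflib: one daily-activity event linked to a video, plus a per-frame bounding box, queried with SPARQL. The namespace, class, and property names are hypothetical placeholders, not the actual VHAKG vocabulary.

```python
# Minimal sketch: an event-centric multi-modal triple layout with a frame-level
# bounding box. All identifiers are placeholders, not the VHAKG schema.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/vhakg-sketch#")
g = Graph()

event, video, frame, box = EX.event1, EX.videoA_view1, EX.frame120, EX.box1
g.add((event, RDF.type, EX.MakeCoffee))
g.add((event, EX.recordedIn, video))
g.add((video, EX.hasFrame, frame))
g.add((frame, EX.frameNumber, Literal(120, datatype=XSD.integer)))
g.add((frame, EX.hasBoundingBox, box))
g.add((box, EX.boundsObject, EX.mug))
g.add((box, EX.xywh, Literal("412,198,64,80")))

# Query all bounding boxes attached to frames of the MakeCoffee event.
q = """
SELECT ?frame ?obj ?xywh WHERE {
  ?event a ex:MakeCoffee ; ex:recordedIn ?video .
  ?video ex:hasFrame ?frame .
  ?frame ex:hasBoundingBox ?box .
  ?box ex:boundsObject ?obj ; ex:xywh ?xywh .
}"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.frame, row.obj, row.xywh)
```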
Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity
Talukdar, Wrick, Biswas, Anjanava
As Large Language Models (LLMs) become increasingly sophisticated and ubiquitous in natural language processing (NLP) applications, ensuring their robustness, trustworthiness, and alignment with human values has become a critical challenge. This paper presents a novel framework for contextual grounding in textual models, with a particular emphasis on the Context Representation stage. Our approach aims to enhance the reliability and ethical alignment of these models through a comprehensive, context-aware methodology. By explicitly capturing and representing relevant situational, cultural, and ethical contexts in a machine-readable format, we lay the foundation for anchoring a model's behavior within these contexts. Our approach leverages techniques from knowledge representation and reasoning, such as ontologies, semantic web technologies, and logic-based formalisms. We evaluate our framework on real-world textual datasets, demonstrating its effectiveness in improving model performance, fairness, and alignment with human expectations, while maintaining high accuracy. Furthermore, we discuss the other key components of the framework, including context-aware encoding, context-aware learning, interpretability and explainability, and continuous monitoring and adaptation. This research contributes to the growing body of work on responsible AI, offering a practical approach to developing more reliable, trustworthy, and ethically-aligned language models. Our findings have significant implications for the deployment of LLMs in sensitive domains such as healthcare, legal systems, and social services, where contextual understanding is paramount.
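The Context Representation stage can be pictured with a minimal sketch: situational, cultural, and ethical context is captured in a machine-readable structure and attached to the prompt so the model's answer is anchored to it. The schema fields and prompt layout below are illustrative assumptions, not the paper's ontology- and logic-based formalism.

```python
# Minimal sketch: serialize a machine-readable context and prepend it to the
# model input. Field names and prompt wording are assumptions for illustration.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class Context:
    domain: str                      # situational context, e.g. "healthcare triage"
    locale: str                      # cultural context, e.g. "en-GB"
    ethical_constraints: list = field(default_factory=list)  # constraints the answer must respect

def grounded_prompt(question: str, ctx: Context) -> str:
    """Anchor the model's behavior by prepending the explicit context."""
    return (
        "You must answer consistently with this context:\n"
        + json.dumps(asdict(ctx), indent=2)
        + f"\n\nQuestion: {question}\nAnswer:"
    )

ctx = Context(
    domain="healthcare triage",
    locale="en-GB",
    ethical_constraints=["do not give a diagnosis", "recommend consulting a clinician"],
)
print(grounded_prompt("My child has a fever of 39C, what should I do?", ctx))
```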
A Scalable Tool For Analyzing Genomic Variants Of Humans Using Knowledge Graphs and Machine Learning
Prasanna, Shivika, Kumar, Ajay, Rao, Deepthi, Simoes, Eduardo, Rao, Praveen
The integration of knowledge graphs and graph machine learning (GML) in genomic data analysis offers several opportunities for understanding complex genetic relationships, especially at the RNA level. We present a comprehensive approach for leveraging these technologies to analyze genomic variants, specifically in the context of RNA sequencing (RNA-seq) data from COVID-19 patient samples. The proposed method involves extracting variant-level genetic information, annotating the data with additional metadata using SnpEff, and converting the enriched Variant Call Format (VCF) files into Resource Description Framework (RDF) triples. The resulting knowledge graph is further enhanced with patient metadata and stored in a graph database, facilitating efficient querying and indexing. We utilize the Deep Graph Library (DGL) to perform graph machine learning tasks, including node classification with GraphSAGE and Graph Convolutional Networks (GCNs). Our approach demonstrates significant utility using our proposed tool, VariantKG, in three key scenarios: enriching graphs with new VCF data, creating subgraphs based on user-defined features, and conducting graph machine learning for node classification.
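The first stage of the pipeline, turning annotated VCF records into RDF, can be sketched as follows with rdflib. The namespace and property names are hypothetical, not VariantKG's actual schema, and the record shown is a simplified stand-in for SnpEff output.

```python
# Minimal sketch: convert one simplified, SnpEff-style annotated VCF record into
# RDF triples and link it to patient metadata. Schema names are assumptions.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/variantkg-sketch#")
g = Graph()

record = {"chrom": "chr1", "pos": 69511, "ref": "A", "alt": "G",
          "ann_gene": "OR4F5", "ann_impact": "MODERATE", "patient": "patient42"}

variant = EX[f"variant_{record['chrom']}_{record['pos']}"]
g.add((variant, RDF.type, EX.Variant))
g.add((variant, EX.chromosome, Literal(record["chrom"])))
g.add((variant, EX.position, Literal(record["pos"], datatype=XSD.integer)))
g.add((variant, EX.referenceAllele, Literal(record["ref"])))
g.add((variant, EX.alternateAllele, Literal(record["alt"])))
g.add((variant, EX.affectsGene, EX[record["ann_gene"]]))
g.add((variant, EX.predictedImpact, Literal(record["ann_impact"])))
g.add((EX[record["patient"]], EX.hasVariant, variant))   # link to patient metadata

print(g.serialize(format="turtle"))
```

The resulting graph is the kind of structure that would then be loaded into a graph database and handed to DGL for node classification.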
MLtoGAI: Semantic Web based with Machine Learning for Enhanced Disease Prediction and Personalized Recommendations using Generative AI
Dongre, Shyam, Chandra, Ritesh, Agarwal, Sonali
In modern healthcare, addressing the complexities of accurate disease prediction and personalized recommendations is both crucial and challenging. This research introduces MLtoGAI, which integrates Semantic Web technology with Machine Learning (ML) to enhance disease prediction and offer user-friendly explanations through ChatGPT. The system comprises three key components: a reusable disease ontology that incorporates detailed knowledge about various diseases, a diagnostic classification model that uses patient symptoms to detect specific diseases accurately, and the integration of Semantic Web Rule Language (SWRL) with the ontology and ChatGPT to generate clear, personalized health advice. This approach significantly improves prediction accuracy and yields results that are easy to understand, addressing the complexity of diseases and diverse symptoms. By combining the strengths of ML algorithms with transparent, human-understandable explanations through ChatGPT, and by leveraging semantic technology and explainable AI, the system ensures that recommendations are relevant and easily understood by individual patients, contributing to more intelligent and accessible healthcare solutions. Our research highlights the potential of integrating advanced technologies to overcome existing challenges in medical diagnostics, paving the way for future developments in intelligent healthcare systems. Additionally, the system is validated using 200 synthetic patient data records, ensuring robust performance and reliability.
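The three-stage flow (symptom-based classification, rule-based advice, LLM explanation) can be sketched as below. The symptom list, toy training data, the SWRL-style rule, and the prompt wording are all illustrative assumptions, not the MLtoGAI ontology or its rules.

```python
# Minimal sketch: classifier predicts a disease from symptoms, an SWRL-style rule
# attaches advice, and the result is phrased as a prompt for an LLM. All data,
# rules, and wording are assumptions for illustration only.
from sklearn.tree import DecisionTreeClassifier

SYMPTOMS = ["fever", "cough", "headache", "rash"]

# Toy training data: rows are symptom indicators, labels are diseases.
X = [[1, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 1]]
y = ["flu", "migraine", "measles"]
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# SWRL-style rule rendered in Python: hasDisease(?p, flu), hasAge(?p, >65) -> highRisk(?p)
def apply_rules(disease, age):
    if disease == "flu" and age > 65:
        return "high-risk patient: recommend prompt clinical review"
    return "standard self-care guidance"

patient = {"symptoms": [1, 1, 0, 0], "age": 72}
disease = clf.predict([patient["symptoms"]])[0]
advice = apply_rules(disease, patient["age"])

prompt = (f"Explain in plain language that the predicted condition is {disease}, "
          f"and that the recommended next step is: {advice}.")
print(prompt)   # this string would be sent to ChatGPT for the patient-facing explanation
```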
Knowledge-based Drug Samples' Comparison
Guillemin, Sébastien, Roxin, Ana, Dujourdy, Laurence, Journaux, Ludovic
Drug sample comparison is a process used by the French National Police to identify drug distribution networks. The current approach is based on a manual comparison done by forensic experts. In this article, we present our approach to acquire, formalise, and specify expert knowledge to improve the current process. We use an ontology coupled with logical rules to model the underlying knowledge. The different steps of our approach are designed to be reused in other application domains. The results obtained are explainable, making them usable by experts in different fields. The fight against drug trafficking has been one of the French government's priorities since the end of 2019 and has led to the creation of the National Stup plan. This plan comprises 55 measures, including the use of new indicators to understand consumer habits and dealers' methods. The work described in this article is part of this plan and aims to support scientific experts in the decision-making process for narcotic profiling. As part of the fight against drug trafficking, several arrests may be made, often accompanied by seizures. Forensic experts perform several analyses on samples from a seizure. They aim to correlate different samples from different seizures to best identify trafficking networks. To do so, experts use sample matching to pair samples according to their characteristics; paired samples constitute an ensemble called a batch. The sample characteristics used are represented by different data, namely: macroscopic data (e.g., sample dimension, drug logos), qualitative data (e.g., list of active substances), quantitative data (e.g., dosage of substances) or non-confidential seizure data (e.g., date, place of seizure). In France, such data is stored in the national STUPS database.
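A minimal sketch of rule-based sample matching over the characteristic types listed above is given here. The attribute names, the purity tolerance, and the grouping logic are illustrative assumptions, not the experts' actual criteria or the ontology's rules.

```python
# Minimal sketch: pair seized samples into a batch when their macroscopic,
# qualitative, and quantitative characteristics agree. Thresholds and attribute
# names are assumptions for illustration only.
samples = {
    "s1": {"logo": "dolphin", "substances": {"cocaine", "levamisole"}, "purity": 0.62},
    "s2": {"logo": "dolphin", "substances": {"cocaine", "levamisole"}, "purity": 0.60},
    "s3": {"logo": "crown",   "substances": {"cocaine"},               "purity": 0.31},
}

def same_batch(a, b, purity_tolerance=0.05):
    """Pair two samples when logo, active substances, and dosage all match."""
    return (a["logo"] == b["logo"]
            and a["substances"] == b["substances"]
            and abs(a["purity"] - b["purity"]) <= purity_tolerance)

# Group pairwise-matching samples into batches (naive transitive grouping).
batches = []
for sid, sample in samples.items():
    for batch in batches:
        if any(same_batch(sample, samples[other]) for other in batch):
            batch.add(sid)
            break
    else:
        batches.append({sid})

print(batches)   # e.g. [{'s1', 's2'}, {'s3'}]
```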
Advancing Manuscript Metadata: Work in Progress at the Jagiellonian University
Miranda, Luiz do Valle, Kutt, Krzysztof, Nalepa, Grzegorz J.
As part of ongoing research projects, three Jagiellonian University units -- the Jagiellonian University Museum, the Jagiellonian University Archives, and the Jagiellonian Library -- are collaborating to digitize cultural heritage documents, describe them in detail, and then integrate these descriptions into a linked data cloud. Achieving this goal requires, as a first step, the development of a metadata model that simultaneously complies with existing standards, allows interoperability with other systems, and captures all the elements of description established by the curators of the collections. In this paper, we present a report on the current status of the work, in which we outline the most important requirements for the data model under development and then make a detailed comparison with the two standards most relevant to the collections: the Europeana Data Model used in Europeana and the Encoded Archival Description used in Kalliope.
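For orientation, a minimal sketch of describing one digitized manuscript with core Europeana Data Model classes in rdflib is shown below: an edm:ProvidedCHO for the object, an ore:Aggregation linking it to its digital representation, and basic Dublin Core fields. The identifiers and literal values are placeholders, not records from the Jagiellonian collections or the model under development.

```python
# Minimal sketch: one manuscript described with core EDM classes and Dublin Core.
# All URIs and values are placeholders for illustration only.
from rdflib import Graph, Namespace, Literal, URIRef, RDF
from rdflib.namespace import DC, DCTERMS

EDM = Namespace("http://www.europeana.eu/schemas/edm/")
ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
cho = URIRef("http://example.org/manuscript/ms-001")
agg = URIRef("http://example.org/manuscript/ms-001#aggregation")
scan = URIRef("http://example.org/iiif/ms-001/full.jpg")

g.add((cho, RDF.type, EDM.ProvidedCHO))
g.add((cho, DC.title, Literal("Sample mediaeval codex", lang="en")))
g.add((cho, DCTERMS.created, Literal("15th century")))
g.add((agg, RDF.type, ORE.Aggregation))
g.add((agg, EDM.aggregatedCHO, cho))
g.add((agg, EDM.isShownBy, scan))
g.add((agg, EDM.dataProvider, Literal("Jagiellonian Library")))

print(g.serialize(format="turtle"))
```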
MST5 -- Multilingual Question Answering over Knowledge Graphs
Srivastava, Nikit, Ma, Mengshi, Vollmers, Daniel, Zahera, Hamada, Moussallem, Diego, Ngomo, Axel-Cyrille Ngonga
Knowledge Graph Question Answering (KGQA) simplifies querying vast amounts of knowledge stored in a graph-based model using natural language. However, the research has largely concentrated on English, putting non-English speakers at a disadvantage. Meanwhile, existing multilingual KGQA systems face challenges in achieving performance comparable to English systems, highlighting the difficulty of generating SPARQL queries from diverse languages. In this research, we propose a simplified approach to enhance multilingual KGQA systems by incorporating linguistic context and entity information directly into the processing pipeline of a language model. Unlike existing methods that rely on separate encoders for integrating auxiliary information, our strategy leverages a single, pretrained multilingual transformer-based language model to manage both the primary input and the auxiliary data. Our methodology significantly improves the language model's ability to accurately convert a natural language query into a relevant SPARQL query. It demonstrates promising results on the most recent QALD datasets, namely QALD-9-Plus and QALD-10. Furthermore, we introduce and evaluate our approach on Chinese and Japanese, thereby expanding the language diversity of the existing datasets.
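The single-model input augmentation idea can be sketched as follows with a multilingual seq2seq checkpoint from Hugging Face transformers. The separator tokens and field layout are assumptions, and the base mT5 checkpoint used here is not trained for KGQA, so its output is only a format illustration; a real system would be fine-tuned on QALD-style question-SPARQL pairs.

```python
# Minimal sketch: concatenate the question, linked entities, and linguistic
# annotations into one input sequence for a multilingual seq2seq model that is
# expected to emit SPARQL. Input layout and annotations are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/mt5-small"          # untrained for KGQA; fine-tuning required in practice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "Wer ist der Autor von Faust?"          # German: Who is the author of Faust?
entities = "wd:Q5879 (Johann Wolfgang von Goethe); wd:Q116975 (Faust)"
linguistic = "POS: PRON AUX DET NOUN ADP PROPN"     # toy linguistic-context annotation

augmented_input = f"question: {question} | entities: {entities} | linguistic: {linguistic}"
inputs = tokenizer(augmented_input, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```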