Semantic technology


Enhancing SPARQL Query Rewriting for Complex Ontology Alignments

Ondo, Anicet Lepetit, Capus, Laurence, Bousso, Mamadou

arXiv.org Artificial Intelligence

SPARQL query rewriting is a fundamental mechanism for uniformly querying heterogeneous ontologies in the Linked Data Web. However, the complexity of ontology alignments, particularly rich correspondences (c:c), makes this process challenging. Existing approaches primarily focus on simple (s:s) and partially complex (s:c) alignments, thereby overlooking the challenges posed by more expressive alignments. Moreover, the intricate syntax of SPARQL presents a barrier for non-expert users seeking to fully exploit the knowledge encapsulated in ontologies. This article proposes an innovative approach for the automatic rewriting of SPARQL queries from a source ontology to a target ontology, based on a user's need expressed in natural language. It leverages the principles of equivalence transitivity as well as the advanced capabilities of large language models such as GPT-4. By integrating these elements, this approach stands out for its ability to efficiently handle complex alignments, particularly (c:c) correspondences, by fully exploiting their expressiveness. Additionally, it facilitates access to aligned ontologies for users unfamiliar with SPARQL, providing a flexible solution for querying heterogeneous data. In the Linked Data Web, aligned ontologies play a crucial role in facilitating interoperability between different data sources.
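To make the equivalence-transitivity idea concrete, here is a minimal Python sketch, not the paper's actual method: two alignments that share an intermediate ontology are composed into a single source-to-target mapping, which is then applied to a query text. All prefixes, terms, and function names are invented for illustration.

```python
# Alignment A maps source terms to an intermediate ontology;
# alignment B maps the intermediate ontology to the target.
# (All vocabulary terms here are hypothetical examples.)
ALIGN_A = {"src:Author": "mid:Writer", "src:wrote": "mid:authored"}
ALIGN_B = {"mid:Writer": "tgt:Creator", "mid:authored": "tgt:created"}

def compose(a, b):
    """Chain two equivalence maps: if src == mid and mid == tgt, then src == tgt."""
    return {s: b[m] for s, m in a.items() if m in b}

def rewrite(query, mapping):
    """Substitute every source term in the query text with its target equivalent."""
    for src, tgt in mapping.items():
        query = query.replace(src, tgt)
    return query

src_query = "SELECT ?x WHERE { ?x a src:Author . ?x src:wrote ?b }"
mapping = compose(ALIGN_A, ALIGN_B)
print(rewrite(src_query, mapping))
# SELECT ?x WHERE { ?x a tgt:Creator . ?x tgt:created ?b }
```

A real system would operate on the parsed query algebra rather than raw strings, and complex (c:c) correspondences would map whole graph patterns rather than single terms; this sketch only shows the transitive composition step.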


An Ontology-Based multi-domain model in Social Network Analysis: Experimental validation and case study

Benítez-Andrades, José Alberto, García-Rodríguez, Isaías, Benavides, Carmen, Aláiz-Moretón, Héctor, Gayo, José Emilio Labra

arXiv.org Artificial Intelligence

Social network theory and its methods of analysis have been applied to many domains in recent years, including public health. Carrying out a complete social network analysis (SNA) is a time-consuming task that entails a series of steps in which the SNA expert could make mistakes. This research presents a multi-domain knowledge model capable of automatically gathering data and carrying out different social network analyses in different domains, without errors, reaching the same conclusions that an expert in SNA would reach. The model is represented in an ontology called OntoSNAQA, which is made up of classes, properties and rules representing the domains of People, Questionnaires and Social Network Analysis. Besides the ontology itself, the model includes rules expressed as SWRL rules and SPARQL queries. A knowledge-based system was created using OntoSNAQA and applied to a real case study in order to show the advantages of the approach. Finally, the results obtained through the model were compared to those obtained from some of the most widely used SNA applications: UCINET, Pajek, Cytoscape and Gephi, to test and confirm the validity of the model.


Web 3.0 in 2023: A Look Ahead. The Future of the Internet: How Web 3.0…

#artificialintelligence

As we move closer to the year 2023, the concept of Web 3.0 is gaining more and more attention. Also known as the "Semantic Web," Web 3.0 represents the next generation of the Internet, and has the potential to revolutionize how we interact with and use the web. At its core, the Semantic Web is about adding meaning and context to the vast amount of data that is available online. It does this through the use of semantic technologies, which enable computers to understand the meaning and context of data, rather than just its raw form. One way that the Semantic Web achieves this is through the use of semantic markup languages, such as RDF (Resource Description Framework) and OWL (Web Ontology Language).
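The subject-predicate-object model that underlies RDF can be illustrated with a toy example. The following Python sketch uses invented vocabulary terms (the `ex:` names are not from any real ontology) to show how stating facts as triples lets a machine answer pattern-based questions rather than keyword searches.

```python
# A tiny RDF-style graph: each fact is a (subject, predicate, object) triple.
# All terms are made up for the example.
triples = {
    ("ex:TimBL", "rdf:type", "ex:Person"),
    ("ex:TimBL", "ex:invented", "ex:WorldWideWeb"),
    ("ex:WorldWideWeb", "rdf:type", "ex:Technology"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# "What did ex:TimBL invent?" -- a structural question, not a keyword search.
print(match(s="ex:TimBL", p="ex:invented"))
# {('ex:TimBL', 'ex:invented', 'ex:WorldWideWeb')}
```

Real RDF data would be serialized in a format such as Turtle and queried with SPARQL, and OWL would add class and property axioms on top; the wildcard matching above is only a rough analogue of a SPARQL basic graph pattern.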


Council Post: Web3: More Than Just The Metaverse

#artificialintelligence

Web3 is the latest buzzword in the digital world, but most believe it's synonymous with the metaverse and NFTs. This common misconception exists because the terms seem to be intrinsically linked together any time there is a discussion about any of them. However, Web3 is so much more than the metaverse, and it is growing closer to matching the original vision for the internet than it has ever managed since its inception. Let's brush up on history before we break down essential technical terms and dive deeper into the impact Web3 will have on businesses. Web 1.0, created in 1989, was mainly focused on centralized technology infrastructures that delivered static website content.


Semantic Technology Trends in 2022 - DATAVERSITY

#artificialintelligence

Semantic technology trends are expanding well beyond an interesting, more advanced search engine. Besides providing scientists with a more functional search engine, semantic technology is now being used to improve artificial intelligence and machine learning. Semantic technology uses a variety of tools and methods designed to add "meaning" to a computer's understanding of data. When asked a question, rather than simply searching for keywords, semantic technologies will explore a wide variety of resources for topics, concepts, and relationships. In the financial and science industries, companies have begun to semantically "enrich" content, processing complex data from a variety of sources.


The HaMSE Ontology: Using Semantic Technologies to support Music Representation Interoperability and Musicological Analysis

Poltronieri, Andrea, Gangemi, Aldo

arXiv.org Artificial Intelligence

The use of Semantic Technologies, in particular the Semantic Web, has proven to be a great tool for describing the cultural heritage domain and artistic practices. However, the panorama of ontologies for musicological applications seems limited and restricted to specific applications. In this research, we propose HaMSE, an ontology capable of describing musical features that can assist musicological research. More specifically, HaMSE addresses issues that have been affecting musicological research for decades: the representation of music and the relationship between quantitative and qualitative data. To do this, HaMSE allows the alignment of different music representation systems and describes a set of musicological features that enable music analysis at different levels of granularity.


Understanding Semantic web technologies

#artificialintelligence

In Alex Garland's 2014 sci-fi thriller, when Caleb, the plot's anti-hero, first meets Ava, an AI-driven humanoid, the first thing he does to test her intelligence is engage her in conversation. "So we need to break the ice. Do you know what I mean by that?", he asks. He tests her further: "What do I mean?". "Overcome initial social awkwardness", she quips.


Building the Future of Data Science

#artificialintelligence

We can easily see that it's increasing over time. Semantics in this context means the use of formal semantics to give meaning to the disparate and raw data that surrounds us, and also the relationship between signifiers and what they stand for in reality, their denotation. When we talk about semantics in data we normally mean a combination of ontology, linked data, graphs and knowledge-graphs, the data fabric and more. You can read about all of that in the links at the beginning of the article. The thing is that all data modeling statements (along with everything else) in ontological languages for data are incremental, by their very nature.
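The claim that statements in ontological data languages are "incremental by their very nature" corresponds to the monotonicity of RDF-style data: merging two graphs is a set union, so new statements extend what is known without retracting earlier facts. A minimal Python sketch, with invented terms:

```python
# Two versions of an RDF-style graph, each a set of
# (subject, predicate, object) triples. Terms are hypothetical.
graph_v1 = {("ex:Alice", "ex:knows", "ex:Bob")}
graph_v2 = {("ex:Alice", "ex:knows", "ex:Carol"),
            ("ex:Bob", "rdf:type", "ex:Person")}

merged = graph_v1 | graph_v2   # merging is set union: statements only accumulate
assert graph_v1 <= merged      # every earlier statement survives the merge
print(len(merged))
# 3
```

This is why independently published ontologies and linked datasets can be combined freely: integration never invalidates what any one source asserted, it only adds to it.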


Who will speak at Data Day Texas 2020

#artificialintelligence

Take advantage of our discount rooms at the conference hotel. We are beginning to announce speakers for 2020. Want to join us as a speaker? Check out our proposals page. Jesse Anderson is a data engineer, creative engineer, and managing director of the Big Data Institute. He works with companies ranging from startups to Fortune 100 companies on Big Data. This includes training on cutting edge technologies like Apache Kafka, Apache Hadoop and Apache Spark. He has taught over 30,000 people the skills to become data engineers.


Parsa Mirhaji Montefiore Health System - PMWC Precision Medicine World Conference

#artificialintelligence

Dr. Mirhaji was the former director of the Center for Biosecurity and Public Health Informatics Research at the University of Texas at Houston, where he developed clinical text understanding, semantic information integration, and EMR interoperability solutions for public health and disaster preparedness. He is an inventor with several patents covering information integration, biomedical vocabularies and taxonomy services, clinical text understanding and natural language processing, electronic data capture, and knowledge-based information retrieval. Dr. Mirhaji and his fellow researchers received the "Best Practice in Public Health" award. He is a member of W3C working groups for the application of Semantic Technologies in Healthcare and Life Sciences, and an organizer and committee member for several national and international conferences on Bio-Ontologies and Semantic Technologies.