
Knowledge Engineering


Overview of the 17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management

Interactive AI Magazine

IC3K 2025 (17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management) received 163 paper submissions from 40 countries. Each submission was evaluated in a double-blind review by the Program Committee. After a stringent selection process, 31 papers were accepted and presented as full papers, i.e., completed work (12 pages, 25-minute oral presentation), and 81 papers were accepted as short papers (54 of them as oral presentations). The organizing committee included the IC3K Conference Chairs: Ricardo da Silva Torres, Artificial Intelligence Group, Wageningen University & Research, Netherlands, and Jorge Bernardino, Polytechnic University of Coimbra, Portugal; and the IC3K 2025 Program Chairs: Le Gruenwald, University of Oklahoma, School of Computer Science, United States; Frans Coenen, University of Liverpool, United Kingdom; Jesualdo Tomás Fernández-Breis, University of Murcia, Spain; Lars Nolle, Jade University of Applied Sciences, Germany; Elio Masciari, University of Napoli Federico II, Italy; and David Aveiro, University of Madeira, NOVA-LINCS and ARDITI, Portugal. At the closing session, the conference acknowledged a few papers that were considered excellent in their class, presenting a "Best Paper Award", "Best Student Paper Award", and "Best Poster Award" for each of the co-located conferences.


Deep Learning-Based Pneumonia Detection from Chest X-ray Images: A CNN Approach with Performance Analysis and Clinical Implications

Dutta, P K, Chowdhury, Anushri, Bhattacharyya, Anouska, Chakraborty, Shakya, Dey, Sujatra

arXiv.org Artificial Intelligence

Deep learning integration into medical imaging systems has transformed disease detection and diagnosis, with pneumonia identification a prominent application. The study introduces a deep learning system using Convolutional Neural Networks for automated pneumonia detection from chest X-ray images, boosting diagnostic precision and speed. The proposed CNN architecture integrates sophisticated methods, including separable convolutions along with batch normalization and dropout regularization, to enhance feature extraction while reducing overfitting. Through data augmentation techniques and adaptive learning rate strategies, the model was trained on an extensive collection of chest X-ray images to enhance its generalization capabilities. A comprehensive set of evaluation metrics, including accuracy, precision, recall, and F1 score, collectively verifies the model's strong performance, with a recorded accuracy of 91%. Beyond model performance, this study tackles critical clinical implementation obstacles such as data privacy protection, model interpretability, and integration with current healthcare systems. The approach introduces a further advancement by integrating medical ontologies with semantic technology to improve diagnostic accuracy, and it enhances AI diagnostic reliability by combining machine learning outputs with structured medical knowledge frameworks to boost interpretability. The findings demonstrate AI-powered healthcare tools as a scalable, efficient pneumonia detection solution. This study advances AI integration into clinical settings by developing more precise automated diagnostic methods that deliver consistent medical imaging results.
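The paper's exact architecture is not given in the abstract, but the separable-convolution idea it relies on can be sketched directly. The following minimal NumPy example (all shapes and values are illustrative, not from the paper) implements a depthwise-separable convolution, which factors a standard convolution into a per-channel spatial filter followed by a 1x1 channel-mixing step, and shows the parameter savings that motivate its use in compact medical-imaging CNNs:

```python
import numpy as np

def conv_params(k, c_in, c_out):
    # Parameters in a standard k x k convolution.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise (k*k per input channel) plus pointwise (1x1 channel mixing).
    return k * k * c_in + c_in * c_out

def depthwise_separable_conv(x, depth_k, point_w):
    """x: (H, W, C_in), depth_k: (k, k, C_in), point_w: (C_in, C_out)."""
    k = depth_k.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, x.shape[2]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]                 # (k, k, C_in)
            out[i, j] = (patch * depth_k).sum(axis=(0, 1)) # per-channel filtering
    return out @ point_w  # pointwise 1x1 convolution mixes channels

# For a 3x3 conv from 32 to 64 channels, the separable variant needs
# 2,336 parameters instead of 18,432.
```

In a full model such as the one the abstract describes, blocks like this would typically be interleaved with batch normalization and dropout layers; those are omitted here for brevity.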


Standardizing Knowledge Engineering Practices with a Reference Architecture

Allen, Bradley P., Ilievski, Filip

arXiv.org Artificial Intelligence

Knowledge engineering is the process of creating and maintaining knowledge-producing systems. Throughout the history of computer science and AI, knowledge engineering workflows have been widely used given the importance of high-quality knowledge for reliable intelligent agents. Meanwhile, the scope of knowledge engineering, as apparent from its target tasks and use cases, has been shifting, together with its paradigms such as expert systems, semantic web, and language modeling. The intended use cases and supported user requirements between these paradigms have not been analyzed globally, as new paradigms often satisfy prior pain points while possibly introducing new ones. The recent abstraction of systemic patterns into a boxology provides an opening for aligning the requirements and use cases of knowledge engineering with the systems, components, and software that can satisfy them best. This paper proposes a vision of harmonizing the best practices in the field of knowledge engineering by leveraging the software engineering methodology of creating reference architectures. We describe how a reference architecture can be iteratively designed and implemented to associate user needs with recurring systemic patterns, building on top of existing knowledge engineering workflows and boxologies. We provide a six-step roadmap that can enable the development of such an architecture, providing an initial design and outcome of the definition of architectural scope, selection of information sources, and analysis. We expect that following through on this vision will lead to well-grounded reference architectures for knowledge engineering, will advance the ongoing initiatives of organizing the neurosymbolic knowledge engineering space, and will build new links to the software architectures and data science communities.


Knowledge Engineering for Wind Energy

Marykovskiy, Yuriy, Clark, Thomas, Day, Justin, Wiens, Marcus, Henderson, Charles, Quick, Julian, Abdallah, Imad, Sempreviva, Anna Maria, Calbimonte, Jean-Paul, Chatzi, Eleni, Barber, Sarah

arXiv.org Artificial Intelligence

Vast amounts of data generated by various sources, including sensors and other monitoring systems, need to be effectively structured and represented in a way that can be easily understood and processed by both Artificial Intelligence (AI) systems and humans. The digitalisation of the wind energy sector is one of the key drivers for reducing costs and risks over the whole wind energy project life cycle [2]. The digitalisation process encompasses solutions such as digital twins, decision support systems, and AI systems, some of which still need to be developed, in order to contribute to reducing operation and maintenance costs, increasing the amount of energy delivered, and maximising the efficiency of wind energy systems. In this context, the term Knowledge-Based Systems (KBS) refers to AI systems that formalize knowledge as rules, logical expressions, and conceptualisations [3, 4]. Such systems can be realised as AI-enabled digital twins or decision support systems that rely on databases of knowledge (also referred to as knowledge bases or knowledge graphs), which contain machine-readable facts, rules, and logic about a domain of interest, to assist with problem-solving and decision-making [5].
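The knowledge-base idea the paragraph describes, machine-readable facts plus rules that derive new facts, can be illustrated with a tiny forward-chaining sketch. All entity and predicate names below are hypothetical, chosen only to evoke a wind-farm domain:

```python
# Facts as (subject, predicate, object) triples about a hypothetical wind farm.
facts = {
    ("turbine_7", "locatedIn", "farm_A"),
    ("farm_A", "operatedBy", "acme_energy"),
}

def apply_rules(facts):
    """Forward chaining for one example rule:
    if X locatedIn Y and Y operatedBy Z, then X maintainedBy Z."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (x, p1, y) in derived:
            if p1 != "locatedIn":
                continue
            for (y2, p2, z) in derived:
                if p2 == "operatedBy" and y2 == y:
                    triple = (x, "maintainedBy", z)
                    if triple not in derived:
                        new.add(triple)
        if new:
            derived |= new
            changed = True
    return derived
```

A production KBS would express such rules declaratively (e.g., in OWL or SHACL over a knowledge graph) rather than in hand-written loops, but the derivation principle is the same.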


Knowledge Engineering using Large Language Models

Allen, Bradley P., Stork, Lise, Groth, Paul

arXiv.org Artificial Intelligence

Knowledge engineering is a discipline that focuses on the creation and maintenance of processes that generate and apply knowledge. Traditionally, knowledge engineering approaches have focused on knowledge expressed in formal languages. The emergence of large language models and their capabilities to effectively work with natural language, in its broadest sense, raises questions about the foundations and practice of knowledge engineering. Here, we outline the potential role of LLMs in knowledge engineering, identifying two central directions: 1) creating hybrid neuro-symbolic knowledge systems; and 2) enabling knowledge engineering in natural language. Additionally, we formulate key open research questions to tackle these directions.
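One concrete shape the first direction can take is a pipeline where a neural component proposes candidate knowledge and a symbolic component filters it against a schema. The sketch below uses a stub in place of a real LLM call, and all schema entries, types, and triples are invented for illustration; it is not the authors' system:

```python
# Hypothetical schema: predicate -> (domain type, range type).
SCHEMA = {
    "capitalOf": ("City", "Country"),
    "bornIn": ("Person", "City"),
}
# Hypothetical type assignments for known entities.
TYPES = {"Paris": "City", "France": "Country", "Ada Lovelace": "Person"}

def llm_propose(prompt):
    """Stand-in for a real LLM call; returns candidate triples,
    one of which is deliberately type-invalid."""
    return [("Paris", "capitalOf", "France"),
            ("France", "bornIn", "Paris")]

def validate(triples):
    """Symbolic filter: keep only triples whose subject and object
    match the predicate's declared domain and range types."""
    ok = []
    for s, p, o in triples:
        dom, rng = SCHEMA.get(p, (None, None))
        if TYPES.get(s) == dom and TYPES.get(o) == rng:
            ok.append((s, p, o))
    return ok
```

Here the neural half supplies breadth and the symbolic half supplies guarantees, which is the division of labor the hybrid neuro-symbolic direction envisions.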


Using Large Language Models for Knowledge Engineering (LLMKE): A Case Study on Wikidata

Zhang, Bohui, Reklos, Ioannis, Jain, Nitisha, Peñuela, Albert Meroño, Simperl, Elena

arXiv.org Artificial Intelligence

In this work, we explore the use of Large Language Models (LLMs) for knowledge engineering tasks in the context of the ISWC 2023 LM-KBC Challenge. For this task, given subject and relation pairs sourced from Wikidata, we utilize pre-trained LLMs to produce the relevant objects in string format and link them to their respective Wikidata QIDs. We developed a pipeline using LLMs for Knowledge Engineering (LLMKE), combining knowledge probing and Wikidata entity mapping. The method achieved a macro-averaged F1-score of 0.701 across the properties, with per-property scores ranging from 0.328 to 1.00. These results demonstrate that the knowledge of LLMs varies significantly by domain and that further experimentation is required to determine the circumstances under which LLMs can be used for automatic Knowledge Base (e.g., Wikidata) completion and correction. The investigation of the results also suggests a promising role for LLMs in collaborative knowledge engineering. LLMKE won Track 2 of the challenge. The implementation is available at https://github.com/bohuizhang/LLMKE.
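The headline metric, a macro-averaged F1 over properties, is easy to reproduce in miniature. The sketch below (sample data invented, not from the challenge) computes a set-based F1 per subject-relation pair, averages within each property, then averages across properties, which is one standard reading of "macro-averaged F1 across the properties":

```python
def set_f1(pred, gold):
    """F1 between a predicted and a gold set of objects."""
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0  # correctly predicting the empty answer
    tp = len(pred & gold)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)

def macro_f1(results_by_property):
    """results_by_property: {property: [(predicted_objects, gold_objects), ...]}.
    Average F1 within each property, then across properties."""
    per_prop = [sum(set_f1(p, g) for p, g in pairs) / len(pairs)
                for pairs in results_by_property.values()]
    return sum(per_prop) / len(per_prop)
```

The per-property averaging is what lets scores as different as 0.328 and 1.00 coexist under a single 0.701 headline number.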


Optimized Three Deep Learning Models Based-PSO Hyperparameters for Beijing PM2.5 Prediction

Pranolo, Andri, Mao, Yingchi, Wibawa, Aji Prasetya, Utama, Agung Bella Putra, Dwiyanto, Felix Andika

arXiv.org Artificial Intelligence

M-1, with three hidden layers, produces the best RMSE and MAPE results compared to the proposed M-2, M-3, and all baselines; recommendations for air pollution management could be generated using these optimized models. In air quality monitoring systems, PM2.5 concentration is a crucial measure, and as public awareness rises, analyzing and anticipating pollution levels becomes vital. Monitoring stations can play only a small role in PM2.5 pollution control due to the nonlinear character of PM2.5 concentrations in both time and space. As a result, improving the accuracy of PM2.5 concentration prediction is crucial for preventing and controlling air pollution. Several studies have applied machine learning techniques, such as neural networks, to environmental science problems. Deep learning, a neural-network technique that achieves high performance in applications such as natural language processing, visual recognition, and forecasting, has recently gained attention in the machine learning field.
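The hyperparameter search the title refers to, particle swarm optimization (PSO), can be sketched compactly. Below, a toy quadratic stands in for the real objective (validation RMSE of a trained model), and the hyperparameter names, bounds, and PSO coefficients are illustrative defaults, not the paper's settings:

```python
import random

def pso(objective, bounds, n_particles=10, iters=40,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective over box-bounded dimensions with a basic PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for validation RMSE as a function of (learning_rate, hidden_units).
def surrogate_loss(hp):
    lr, units = hp
    return (lr - 0.01) ** 2 * 1e4 + (units - 64) ** 2 / 1e3

best, val = pso(surrogate_loss, bounds=[(1e-4, 0.1), (8, 256)])
```

In the paper's setting, each objective evaluation would train a candidate network and return its validation error, so the swarm size and iteration budget trade search quality against training cost.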


Knowledge Graph (ISBN 9789811081767)

Qi, Guilin, Chen, Huajun, Liu, Kang, Wang, Haofen, Ji, Qiu, Wu, Tianxing

#artificialintelligence

Dr. Guilin Qi is a professor at Southeast University, China, where he also serves as director of the Institute of Cognitive Intelligence and of the Knowledge Science and Engineering Lab. His research interests include knowledge representation and reasoning, knowledge graphs, uncertainty reasoning, and the semantic web. Prof. Qi is an editorial board member of the Journal of Web Semantics, and has co-edited special issues for the Annals of Mathematics and Artificial Intelligence, the International Journal of Approximate Reasoning, and the Journal of Applied Logic. He has over 20 years of research experience in knowledge engineering and has led many national and industrial projects on knowledge graphs. Prof. Qi has published more than 100 papers on knowledge engineering and knowledge graphs and holds two patents.


How is Artificial Intelligence Involved in Recommender Systems?

#artificialintelligence

When you purchase a book from Amazon, order food online, or watch a movie on Netflix, you receive real-time suggestions for new items and services. You are interacting with an intelligent recommendation system. So how does Artificial Intelligence (AI) influence these systems' performance? And, before that, what exactly are recommender systems (RSs)?
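One simple answer to "what is a recommender system" is collaborative filtering: score items a user has not seen by the ratings of similar users. The sketch below uses a tiny invented ratings table and plain cosine similarity; real systems use far richer models, so this is only a minimal illustration of the idea:

```python
import math

# Toy user-item ratings (all names and values hypothetical).
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "movie_x": 4},
    "bob":   {"book_a": 4, "movie_x": 5, "movie_y": 4},
    "carol": {"book_b": 5, "movie_y": 2},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den

def recommend(user, ratings, k=1):
    """Rank unseen items by similarity-weighted ratings from other users."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

AI enters exactly at the similarity and scoring steps: production systems replace these hand-written formulas with learned embeddings and neural ranking models.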


Knowledge Engineering in the Long Game of Artificial Intelligence: The Case of Speech Acts

McShane, Marjorie, English, Jesse, Nirenburg, Sergei

arXiv.org Artificial Intelligence

This paper describes principles and practices of knowledge engineering that enable the development of holistic language-endowed intelligent agents that can function across domains and applications, as well as expand their ontological and lexical knowledge through lifelong learning. For illustration, we focus on dialog act modeling, a task that has been widely pursued in linguistics, cognitive modeling, and statistical natural language processing. We describe an integrative approach grounded in the OntoAgent knowledge-centric cognitive architecture and highlight the limitations of past approaches that isolate dialog from other agent functionalities.