End-to-End Ontology Learning with Large Language Models
Neural Information Processing Systems
Ontologies are useful for automatic machine processing of domain knowledge as they represent it in a structured format. Yet, constructing ontologies requires substantial manual effort. To automate part of this process, large language models (LLMs) have been applied to solve various subtasks of ontology learning. However, this partial ontology learning does not capture the interactions between subtasks. We address this gap by introducing OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch.
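This excerpt does not include OLLM's implementation details. As a rough, hypothetical illustration of what "building the taxonomic backbone of an ontology" can amount to, the sketch below assembles parent-child (is-a) concept pairs into a directed acyclic graph and filters out infrequent edges. The `propose_relations` helper is a stand-in for an LLM call and is not OLLM's actual interface; the thresholding and cycle-breaking steps are assumptions for this sketch, not the paper's method.

```python
# Minimal sketch (not OLLM's actual implementation): collect candidate
# (parent, child) concept pairs per document, keep edges seen often enough
# to suppress noisy generations, and break cycles so the result is a taxonomy.
from collections import Counter
import networkx as nx


def propose_relations(document: str) -> list[tuple[str, str]]:
    """Hypothetical stand-in for an LLM call that maps a document to
    (parent, child) concept pairs; returns canned output here."""
    return [("Science", "Computer Science"),
            ("Computer Science", "Machine Learning")]


def build_taxonomy(documents: list[str], min_support: int = 2) -> nx.DiGraph:
    # Count how often each candidate edge is proposed across documents.
    edge_counts = Counter()
    for doc in documents:
        edge_counts.update(propose_relations(doc))

    # Keep only edges proposed at least `min_support` times.
    graph = nx.DiGraph()
    graph.add_edges_from(
        edge for edge, count in edge_counts.items() if count >= min_support
    )

    # Remove one edge per detected cycle until the graph is acyclic,
    # since a taxonomic backbone should contain no subsumption cycles.
    while not nx.is_directed_acyclic_graph(graph):
        cycle = nx.find_cycle(graph)
        graph.remove_edge(*cycle[0][:2])
    return graph


if __name__ == "__main__":
    taxonomy = build_taxonomy(["doc one", "doc two"], min_support=2)
    print(sorted(taxonomy.edges()))
```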