Automated Skill Decomposition Meets Expert Ontologies: Bridging the Granularity Gap with LLMs
Le Ngoc Luyen, Marie-Hélène Abel
–arXiv.org Artificial Intelligence
This paper investigates automated skill decomposition using Large Language Models (LLMs) and proposes a rigorous, ontology-grounded evaluation framework. Our framework standardizes the pipeline from prompting and generation to normalization and alignment with ontology nodes. To evaluate outputs, we introduce two metrics: a semantic F1-score that uses optimal embedding-based matching to assess content accuracy, and a hierarchy-aware F1-score that credits structurally correct placements to assess granularity. We conduct experiments on ROME-ESCO-DecompSkill, a curated subset of parent skills, comparing two prompting strategies: zero-shot and leakage-safe few-shot with exemplars. Across diverse LLMs, zero-shot offers a strong baseline, while few-shot consistently stabilizes phrasing and granularity and improves hierarchy-aware alignment. A latency analysis further shows that exemplar-guided prompts are competitive with, and sometimes faster than, unguided zero-shot prompts, owing to more schema-compliant completions. Together, the framework, benchmark, and metrics provide a reproducible foundation for developing ontology-faithful skill decomposition systems.
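The semantic F1-score described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes sub-skills are already embedded as vectors (the embedding model, the one-to-one matching rule via the Hungarian algorithm, and the similarity threshold are all assumptions for this sketch).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def semantic_f1(pred_vecs, gold_vecs, threshold=0.5):
    """Semantic F1 via optimal embedding-based matching (sketch).

    Matches predicted sub-skill embeddings one-to-one to gold
    ontology-node embeddings so that total cosine similarity is
    maximized, then counts a matched pair as a true positive if its
    similarity clears `threshold` (a hypothetical cutoff).
    """
    P = np.asarray(pred_vecs, dtype=float)
    G = np.asarray(gold_vecs, dtype=float)
    # Normalize rows so the dot product below is cosine similarity.
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    G = G / np.linalg.norm(G, axis=1, keepdims=True)
    sim = P @ G.T
    # Hungarian algorithm minimizes cost, so negate to maximize similarity.
    rows, cols = linear_sum_assignment(-sim)
    tp = int(np.sum(sim[rows, cols] >= threshold))
    precision = tp / len(P) if len(P) else 0.0
    recall = tp / len(G) if len(G) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With identical prediction and gold embeddings this returns 1.0; with orthogonal vectors it returns 0.0. The hierarchy-aware variant would additionally credit near-miss matches that land on structurally adjacent ontology nodes.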
Oct-14-2025