A Study of the Quality of Wikidata
Kartik Shenoy, Filip Ilievski, Daniel Garijo, Daniel Schwabe, Pedro Szekely
Wikidata has been increasingly adopted by many communities for a wide variety of applications, which demand high-quality knowledge to deliver successful results. In this paper, we develop a framework to detect and analyze low-quality statements in Wikidata by shedding light on the current practices exercised by the community. We explore three indicators of data quality in Wikidata, based on: 1) community consensus on the currently recorded knowledge, assuming that statements that have been removed and not added back are implicitly agreed to be of low quality; 2) statements that have been deprecated; and 3) constraint violations in the data. We combine these indicators to detect low-quality statements, revealing challenges with duplicate entities, missing triples, violated type rules, and taxonomic distinctions. Our findings complement ongoing efforts by the Wikidata community to improve data quality, aiming to make it easier for users and editors to find and correct mistakes.
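Two of the three indicators can be probed directly against the live data. Below is a minimal, hypothetical sketch (not the authors' framework) that queries the public Wikidata Query Service for deprecated statements and for a simple subject-type constraint violation. The endpoint URL and the wikibase:DeprecatedRank vocabulary are standard Query Service features; the choice of property P569 (date of birth) and the expected class Q5 (human) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: probing two of the paper's quality indicators
# against the public Wikidata SPARQL endpoint. P569 and Q5 are
# illustrative choices, not taken from the paper.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# Indicator 2: statements the community has explicitly deprecated.
DEPRECATED_QUERY = """
SELECT ?item ?statement WHERE {
  ?item p:P569 ?statement .                           # any date-of-birth statement
  ?statement wikibase:rank wikibase:DeprecatedRank .  # keep only deprecated ones
}
LIMIT 10
"""

# Indicator 3: a simple type-constraint violation, assuming (for
# illustration) that subjects of P569 should be instances of human (Q5).
VIOLATION_QUERY = """
SELECT ?item WHERE {
  ?item wdt:P569 ?dob .
  FILTER NOT EXISTS { ?item wdt:P31/wdt:P279* wd:Q5 . }
}
LIMIT 10
"""

def run_query(query: str) -> list[dict]:
    """Send a SPARQL query to the Wikidata Query Service; return JSON bindings."""
    resp = requests.get(
        ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "wikidata-quality-sketch/0.1 (example)"},
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

if __name__ == "__main__":
    for row in run_query(DEPRECATED_QUERY):
        print("deprecated:", row["statement"]["value"])
    for row in run_query(VIOLATION_QUERY):
        print("type violation:", row["item"]["value"])
```

Note that the second query scans every subject of P569 and may hit the endpoint's query timeout; in practice one would restrict it to a sample of items.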
arXiv.org Artificial Intelligence
Jul-23-2021
- Country:
- North America > United States
- California (0.14)
- South America > Brazil
- Rio de Janeiro (0.14)
- Genre:
- Research Report > New Finding (0.48)
- Industry:
- Government (0.47)
- Health & Medicine (0.46)
- Technology:
- Information Technology
- Artificial Intelligence > Representation & Reasoning
- Constraint-Based Reasoning (1.00)
- Ontologies (0.95)
- Communications > Web (1.00)
- Data Science (1.00)