Evaluation measures for ontology matchers in supervised matching scenarios

Precision and recall, as well as their combination into F-measure, are widely used measures in computer science and are commonly employed to evaluate the overall performance of ontology matchers in fully automatic, unsupervised scenarios. In this paper, we investigate the case of supervised matching, where automatically created ontology alignments are verified by a human expert. We motivate and describe this use case and its characteristics, and discuss why traditional, F-measure based evaluation measures are not suitable for choosing the best matching system for this task. We therefore investigate several alternative evaluation measures and propose the use of Precision@N curves as a means to assess different matching systems for supervised matching. We compare the ranking of ontology matchers from the last OAEI campaign using Precision@N curves to the traditional F-measure based ranking, and discuss how matchers can be combined to optimize user support in supervised ontology matching.
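The core measure mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical implementation of a Precision@N curve for a ranked alignment; it assumes correspondences are represented as hashable pairs and the reference alignment as a set, and none of these names or data structures are taken from the dataset itself. In the supervised setting described above, a high Precision@N means that most of the first N correspondences shown to the expert are correct.

```python
def precision_at_n(ranked_correspondences, reference_alignment, max_n=None):
    """Compute the Precision@N curve for a ranked list of correspondences.

    ranked_correspondences: correspondences sorted by matcher confidence (descending).
    reference_alignment:    set of correct correspondences (the gold standard).
    Returns a list whose entry at index N-1 is the precision among the top-N items.
    """
    max_n = max_n or len(ranked_correspondences)
    curve, correct = [], 0
    for n, corr in enumerate(ranked_correspondences[:max_n], start=1):
        if corr in reference_alignment:
            correct += 1
        curve.append(correct / n)
    return curve


# Illustrative usage: correspondences as (source entity, target entity) pairs,
# ordered by the matcher's confidence (entity names are made up for this example).
ranked = [("ex1:Author", "ex2:Writer"),
          ("ex1:Paper", "ex2:Article"),
          ("ex1:Review", "ex2:Journal")]
reference = {("ex1:Author", "ex2:Writer"), ("ex1:Paper", "ex2:Article")}
print(precision_at_n(ranked, reference))  # [1.0, 1.0, 0.666...]
```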

Identifier
DOI https://doi.org/10.7801/23
Metadata Access https://api.datacite.org/dois/10.7801/23
Provenance
Creator Ritze, Dominique; Paulheim, Heiko; Eckert, Kai
Publisher Mannheim University Library
Publication Year 2013
OpenAccess true
Representation
Resource Type Dataset
Format application/zip; text/plain
Size 5715795; 17869; 12807718; 1759
Version 1
Discipline Social Sciences