Question Answering Benchmark

The aim of the Question Answering Benchmark in HOBBIT is to support the assessment of QA systems with a fixed set of natural language questions and their respective SPARQL queries for DBpedia. Systems can be assessed in three tasks, each tackling a different number of choke points: multilingual, hybrid and large-scale. The key performance indicators are the standard measures precision, recall and F1-score; in the large-scale task, the time systems take to successfully answer questions is additionally measured while the number of issued questions is continuously increased.
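
As an illustration, the core indicators can be computed per question by comparing a system's answer set against the gold-standard answers. The sketch below is a generic, minimal implementation of precision, recall and F1 for one question; the function name and the example resources are assumptions for illustration only and do not reflect the benchmark's reference implementation.

    def precision_recall_f1(gold_answers, system_answers):
        """Compute precision, recall and F1 for a single question,
        comparing the system's answer set against the gold standard."""
        gold = set(gold_answers)
        system = set(system_answers)
        correct = len(gold & system)
        # If the system returns no answers, precision is treated as 0 here.
        precision = correct / len(system) if system else 0.0
        recall = correct / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall > 0 else 0.0)
        return precision, recall, f1

    # Hypothetical example: gold answers vs. a system's SPARQL result set
    gold = {"dbr:Berlin", "dbr:Hamburg", "dbr:Munich"}
    answers = {"dbr:Berlin", "dbr:Munich", "dbr:Cologne"}
    print(precision_recall_f1(gold, answers))  # (0.667, 0.667, 0.667)

Per-question scores of this kind are typically averaged over all questions in a task to obtain the overall benchmark result.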

Data and Resources

Additional Information

Field          Value
Author         Bastian Haarmann
Maintainer     Giulio Napolitano
Last Updated   August 1, 2017, 07:33 (UTC)
Created        August 1, 2017, 07:21 (UTC)