Question Answering Benchmark

The aim of the Question Answering Benchmark in HOBBIT is to support the assessment of QA systems with a fixed set of natural language questions and their corresponding SPARQL queries for DBpedia. Systems can be evaluated in three tasks, each tackling a different number of choke points: multilingual, hybrid and large-scale. The key performance indicators are the standard ones: precision, recall and F1-score, plus, in the large-scale task, the systems' response time for successfully answered questions while the number of issued questions is constantly increased.
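As a minimal sketch of how precision, recall and F1-score are typically computed for a single question, the Python snippet below compares a system's answer set against the gold answer set. It is illustrative only: it assumes answers are compared as sets of DBpedia resource URIs, and the function name and empty-answer conventions are assumptions, not part of the HOBBIT benchmark API.

    # Illustrative sketch, not the HOBBIT benchmark implementation.
    # Assumes answers to one question are compared as sets of URIs.

    def precision_recall_f1(gold: set[str], system: set[str]) -> tuple[float, float, float]:
        """Compute precision, recall and F1 for one question."""
        if not gold and not system:
            return 1.0, 1.0, 1.0      # both empty: counted as perfect (assumed convention)
        if not gold or not system:
            return 0.0, 0.0, 0.0      # one side empty: nothing can match
        true_positives = len(gold & system)
        precision = true_positives / len(system)
        recall = true_positives / len(gold)
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    if __name__ == "__main__":
        # Hypothetical example answers for a single question.
        gold = {"http://dbpedia.org/resource/Berlin",
                "http://dbpedia.org/resource/Hamburg"}
        system = {"http://dbpedia.org/resource/Berlin"}
        p, r, f = precision_recall_f1(gold, system)
        print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")

Per-question scores are then usually averaged over all benchmark questions (macro-averaging); in the large-scale task the response time of successfully answered questions is recorded in addition.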

Data and Data Resources

Additional Information

Field Value
Author Bastian Haarmann
Maintainer Giulio Napolitano
Last Updated August 1, 2017, 07:33 (UTC)
Created August 1, 2017, 07:21 (UTC)