Question Answering Benchmark

The Question Answering Benchmark in HOBBIT supports the assessment of QA systems with a fixed set of natural language questions and their corresponding SPARQL queries over DBpedia. Systems can be evaluated in three tasks, each addressing a different number of choke points: multilingual, hybrid and large-scale. The key performance indicators are precision, recall and F1-score; in the large-scale task, the time a system needs for successfully answered questions is also measured while the number of issued questions is steadily increased.
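The metrics above follow the standard set-based definitions. As a rough illustration only (this is not the benchmark's evaluation code, and the treatment of empty answer sets is an assumption), the following Python sketch shows how precision, recall and F1 can be computed for a single question by comparing a system's answer set with the gold answers produced by the reference SPARQL query.

def precision_recall_f1(system_answers, gold_answers):
    """Return (precision, recall, f1) for one question, comparing answer sets."""
    system = set(system_answers)
    gold = set(gold_answers)
    if not system and not gold:
        # Assumed convention: an empty answer to a question with no gold
        # answers counts as a perfect match.
        return 1.0, 1.0, 1.0
    correct = len(system & gold)
    precision = correct / len(system) if system else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1


if __name__ == "__main__":
    # Hypothetical example: gold answers from DBpedia vs. a system's output.
    gold = {"http://dbpedia.org/resource/Berlin"}
    system = {"http://dbpedia.org/resource/Berlin",
              "http://dbpedia.org/resource/Hamburg"}
    print(precision_recall_f1(system, gold))  # (0.5, 1.0, 0.666...)

In practice the per-question scores would be averaged over all questions of a task; how that averaging is done (micro vs. macro) is left to the benchmark's own specification.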

Data and Resources

Additional Information

Field           Value
Author          Bastian Haarmann
Maintainer      Giulio Napolitano
Last updated    August 1, 2017, 07:33 (UTC)
Created         August 1, 2017, 07:21 (UTC)