
QUANT - Question Answering Benchmark Curator

Ria Hari Gusmita, Rricha Jalota, Daniel Vollmers, Jan Reineke, Axel-Cyrille Ngonga Ngomo, and Ricardo Usbeck

September 10, 2019

Outline

1 Motivation

2 Approach

3 Evaluation

4 QALD-specific Analysis

5 Conclusion & Future Work

Motivation: Drawbacks in evaluating Question Answering systems over knowledge bases

Evaluation is mainly based on benchmark datasets (benchmarks)
Challenge: maintaining high-quality benchmarks

Motivation: Challenge in maintaining high-quality benchmarks

Change of the underlying DBpedia version

DBpedia 2016-04 → DBpedia 2016-10
http://dbpedia.org/resource/Surfing → http://dbpedia.org/resource/Surfer
http://dbpedia.org/ontology/seatingCapacity → http://dbpedia.org/property/capacity
http://dbpedia.org/property/portrayer → http://dbpedia.org/ontology/portrayer
http://dbpedia.org/property/establishedDate → http://dbpedia.org/ontology/foundingDate

Motivation: Challenge in maintaining high-quality benchmarks

Metadata annotation errors

Motivation: Degradation of QALD benchmarks against various versions of DBpedia

Contribution

QUANT, a framework for the intelligent creation and curation of QA benchmarks

Definition
Let B, D, and Q denote a benchmark, a dataset, and a set of questions, respectively, and let S denote QUANT's suggestions.
The i-th version of a QA benchmark is a pair B_i = (D_i, Q_i). Given a query q_ij ∈ Q_i with zero results on D_k with k > i, QUANT suggests S : q_ij → q'_ij.
QUANT aims to ensure that queries from B_i can be reused for B_k and thus to speed up the curation process compared to the existing one.
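To make the curation task concrete, here is a minimal Python sketch of the degradation check that triggers a suggestion: every query q_ij of B_i is executed against a SPARQL endpoint serving the newer dataset D_k, and queries with zero results are flagged for curation. This is an illustration only, not QUANT's implementation; the endpoint URL, the QALD-style JSON layout, and the file name are assumptions.

import json
import requests

ENDPOINT = "https://dbpedia.org/sparql"   # assumed endpoint serving D_k

def has_results(sparql_query: str) -> bool:
    """Run a SELECT/ASK query and report whether it returns any result."""
    resp = requests.get(
        ENDPOINT,
        params={"query": sparql_query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    if "boolean" in data:                        # ASK query
        return data["boolean"]
    return len(data["results"]["bindings"]) > 0  # SELECT query

def flag_degraded(benchmark_path: str):
    """Yield ids of questions in B_i whose query returns zero results on D_k."""
    with open(benchmark_path, encoding="utf-8") as f:
        benchmark = json.load(f)
    for question in benchmark["questions"]:      # assumed QALD-style layout
        if not has_results(question["query"]["sparql"]):
            yield question["id"]                 # candidate for a suggestion S

if __name__ == "__main__":
    for qid in flag_degraded("qald-benchmark.json"):   # hypothetical file name
        print(f"Question {qid} needs curation")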

What QUANT supports

1 Creation of SPARQL queries
2 Validity of the benchmark (see sketch below)
3 Spelling and grammatical correctness of questions
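For the second point, a large part of benchmark validity can be checked structurally before any query is executed. The sketch below is a hypothetical check over a QALD-style question entry; the field names are assumptions drawn from the QALD JSON format, not from QUANT's code.

REQUIRED_FIELDS = ("id", "question", "query", "answers")   # assumed QALD-style entry

def validate_entry(entry: dict) -> list:
    """Return a list of human-readable problems found in one benchmark entry."""
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in entry:
            problems.append(f"missing field '{field}'")
    # every language variant should carry a question string and keywords
    for variant in entry.get("question", []):
        lang = variant.get("language", "?")
        if not variant.get("string", "").strip():
            problems.append(f"empty question string for language '{lang}'")
        if not variant.get("keywords", "").strip():
            problems.append(f"missing keywords for language '{lang}'")
    if not entry.get("query", {}).get("sparql", "").strip():
        problems.append("missing SPARQL query")
    if not entry.get("answers"):
        problems.append("missing answers")
    return problems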

Approach: Architecture

Approach: Smart suggestions

1 SPARQL suggestion
2 Metadata suggestion
3 Multilingual questions and keywords suggestion

Smart suggestion 1: How the SPARQL suggestion module works

1. SPARQL suggestion: Missing prefix

The original SPARQL query
SELECT ?s WHERE { res:New_Delhi dbo:country ?s . }

1. SPARQL suggestion: Missing prefix

The original SPARQL query
SELECT ?s WHERE { res:New_Delhi dbo:country ?s . }

The suggested SPARQL query
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX res: <http://dbpedia.org/resource/>
SELECT ?s WHERE { res:New_Delhi dbo:country ?s . }
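A minimal sketch of how such a suggestion can be produced: scan the query for prefixed names whose prefixes are not declared and prepend declarations from a table of well-known namespaces. The prefix table below is hand-picked for illustration; this is not QUANT's actual prefix resolution.

import re

KNOWN_PREFIXES = {                                  # assumed, non-exhaustive table
    "dbo":  "http://dbpedia.org/ontology/",
    "dbp":  "http://dbpedia.org/property/",
    "res":  "http://dbpedia.org/resource/",
    "rdf":  "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "foaf": "http://xmlns.com/foaf/0.1/",
}

def add_missing_prefixes(query: str) -> str:
    """Prepend PREFIX declarations for used but undeclared, well-known prefixes."""
    declared = set(re.findall(r"PREFIX\s+(\w+):", query, flags=re.IGNORECASE))
    used = set(re.findall(r"\b(\w+):\w", query)) - declared
    header = "".join(
        f"PREFIX {p}: <{KNOWN_PREFIXES[p]}>\n"
        for p in sorted(used) if p in KNOWN_PREFIXES
    )
    return header + query

print(add_missing_prefixes("SELECT ?s WHERE { res:New_Delhi dbo:country ?s . }"))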

1. SPARQL suggestion: Predicate change

The original SPARQL query
SELECT ?date WHERE { ?website rdf:type onto:Software . ?website onto:releaseDate ?date . ?website rdfs:label "DBpedia" . }

1. SPARQL suggestion: Predicate change

The suggested SPARQL query
SELECT ?date WHERE { ?website rdf:type onto:Software . ?website rdfs:label "DBpedia" . ?website dbp:latestReleaseDate ?date . }
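One plausible way to arrive at such a replacement (an illustration, not necessarily the mechanism inside QUANT) is to ask the newer DBpedia version which predicates the affected resource actually carries and to rank them by name similarity to the predicate that no longer yields results. A rough sketch, assuming the public DBpedia endpoint:

import difflib
import requests

ENDPOINT = "https://dbpedia.org/sparql"             # assumed endpoint of the newer version

def candidate_predicates(resource_label: str, broken_predicate: str, top_n: int = 5):
    """Rank the resource's predicates by similarity to the broken predicate's local name."""
    query = f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?p WHERE {{
            ?s rdfs:label "{resource_label}"@en .
            ?s ?p ?o .
        }}
    """
    resp = requests.get(
        ENDPOINT,
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    predicates = {b["p"]["value"] for b in resp.json()["results"]["bindings"]}
    target = broken_predicate.rsplit("/", 1)[-1].lower()        # e.g. "releasedate"

    def similarity(p: str) -> float:
        return difflib.SequenceMatcher(None, target, p.rsplit("/", 1)[-1].lower()).ratio()

    return sorted(predicates, key=similarity, reverse=True)[:top_n]

# e.g. candidate_predicates("DBpedia", "http://dbpedia.org/ontology/releaseDate")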

1. SPARQL suggestion: Predicate missing

The original SPARQL query
SELECT ?uri WHERE { ?subject rdfs:label "Tom Hanks" . ?subject :homepage ?uri }

1. SPARQL suggestion: Predicate missing

The suggested SPARQL query
SELECT ?uri WHERE { ?subject rdfs:label "Tom Hanks" . ?subject foaf:homepage ?uri }

The predicate foaf:homepage was missing in the original query; QUANT suggests it in the triple pattern ?subject foaf:homepage ?uri.

1. SPARQL suggestion: Entity change

The original SPARQL query
SELECT ?uri WHERE { ?uri rdf:type yago:CapitalsInEurope }

1. SPARQL suggestion: Entity change

The original SPARQL query
SELECT ?uri WHERE { ?uri rdf:type yago:CapitalsInEurope }

The suggested SPARQL query
SELECT ?uri WHERE { ?uri rdf:type yago:WikicatCapitalsInEurope }
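A simple signal for this kind of suggestion is whether the old class still has any instances in the new dataset version. The sketch below (illustrative only, not QUANT's code) uses ASK queries to test the original YAGO class and, if it is empty, the Wikicat-prefixed variant seen above:

import requests

ENDPOINT = "https://dbpedia.org/sparql"             # assumed endpoint of the newer version
YAGO = "http://dbpedia.org/class/yago/"

def class_has_instances(class_uri: str) -> bool:
    """ASK whether any resource is typed with the given class."""
    resp = requests.get(
        ENDPOINT,
        params={"query": f"ASK {{ ?x a <{class_uri}> }}",
                "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["boolean"]

def suggest_class(local_name: str):
    """Return the old class URI if still populated, otherwise try the Wikicat variant."""
    for candidate in (YAGO + local_name, YAGO + "Wikicat" + local_name):
        if class_has_instances(candidate):
            return candidate
    return None                                     # no automatic suggestion possible

print(suggest_class("CapitalsInEurope"))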

2. Metadata suggestion

3. Multilingual questions and keywords suggestion

Question with missing keywords and translations

3. Multilingual questions and keywords suggestion

Generated keywords: state, united, states, america, highest, density
The Trans Shell tool is used to generate keyword translation suggestions
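The keyword list above corresponds to the content words of the (paraphrased) question after stopword removal, and the translations come from the Translate Shell command-line tool. The sketch below reproduces that pipeline under these assumptions; the stopword list is hand-picked, the example question is a paraphrase, and the 'trans' flags reflect common usage of the tool rather than QUANT's exact invocation.

import re
import subprocess

STOPWORDS = {"what", "is", "the", "of", "with", "which", "a", "an", "in", "to"}  # hand-picked

def keywords(question: str) -> list:
    """Lowercase, tokenize, and drop stopwords to obtain keyword suggestions."""
    tokens = re.findall(r"[a-zA-Z]+", question.lower())
    return [t for t in tokens if t not in STOPWORDS]

def translate(text: str, target_lang: str) -> str:
    """Translate via the Translate Shell CLI ('trans'); flag usage is an assumption."""
    result = subprocess.run(
        ["trans", "-brief", f":{target_lang}", text],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

question = "What is the state of the United States of America with the highest density?"
kws = keywords(question)   # ['state', 'united', 'states', 'america', 'highest', 'density']
print({kw: translate(kw, "de") for kw in kws})      # German keyword translation suggestions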

3. Multilingual questions and keywords suggestion

Suggested Question Translations

Evaluation

Three goals of the evaluation:
1 QUANT vs manual curation: graduate students curated 50 questions using QUANT and another 50 questions manually (23 minutes vs 278 minutes)
2 Effectiveness of smart suggestions: 10 expert users were involved in creating a new joint benchmark, called QALD-9, with 653 questions
3 QUANT's capability to provide a high-quality benchmark dataset

Group / Inter-rater Agreement
1st Two-Users: 0.97
2nd Two-Users: 0.72
3rd Two-Users: 0.88
4th Two-Users: 0.77
5th Two-Users: 0.96
Average: 0.83

The inter-rater agreement between each pair of users amounts to 0.83 on average

Evaluation: User acceptance rate in %

[Figure: acceptance rate per user, in %, for Users 1-10]

QUANT provided 2380 suggestions, and the average user acceptance rate is 81%
The top 4 acceptance rates are for QALD-7 and QALD-8

Evaluation: Number of accepted suggestions from all users

[Figure: number of accepted suggestions per user (Users 1-10), by type: SPARQL Query, Question Translations, Out of Scope, Onlydbo, Keywords Translations, Hybrid, Answer Type, Aggregation]

Most users accepted suggestions for out-of-scope metadata
Keyword and question translation suggestions yielded the second and third highest acceptance rates

Evaluation: Number of users who accepted QUANT's suggestions for each question attribute

[Figure: percentage of users who accepted suggestions, per attribute: Hybrid, Onlydbo, Aggregation, Answer Type, Out of Scope, SPARQL Query, Keywords Translations, Question Translations]

On average, 83.75% of the users accepted QUANT's smart suggestions
Hybrid and SPARQL suggestions were accepted by only 2 and 5 users, respectively

Evaluation: Number of suggestions provided by users

[Figure: number of suggestions provided per user (Users 1-10), by type: SPARQL Query, Question Translations, Out of Scope, Onlydbo, Keywords Translations, Hybrid, Answer Type, Aggregation]

Answer type, onlydbo, out-of-scope, and SPARQL query were the attributes whose values were redefined by the users themselves

QALD-specific Analysis

There are 1924 questions in total, of which 1442 are training questions and 482 are test questions

QALD-specific Analysis

Duplicate removal resulted in 655 unique questions
Removing 2 semantically similar questions produced 653 questions
Using QUANT with 10 expert users, we obtained 558 benchmark questions in total → an increase of the QALD-8 size by 110.6%
The new benchmark forms the QALD-9 dataset

[Figure: Distribution of unique questions in all QALD versions]

Conclusion

QUANT's evaluation highlights the need for better datasets and their maintenance
QUANT speeds up the curation process by up to 91%
Smart suggestions motivate users to engage in more attribute corrections than if there were no hints

Future Work

There is a need to invest more time into SPARQL suggestions, as only 5 users accepted them
We plan to support more file formats based on our internal library

Thank you for your attention!

Ria Hari Gusmita
[email protected]
https://github.com/dice-group/QUANT
DICE Group at Paderborn University
https://dice-research.org/team/profiles/gusmita/
