Published as a conference paper at ICLR 2020

TABFACT: A LARGE-SCALE DATASET FOR TABLE-BASED FACT VERIFICATION

Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, William Yang Wang
University of California, Santa Barbara, CA, USA
Tencent AI Lab, Bellevue, WA, USA
{wenhuchen, hongmin_wang, yunkai_zhang, hongwang600, william}@cs.ucsb.edu
{shiyangli, xiyou}@ucsb.edu
[email protected]

ABSTRACT

The problem of verifying whether a textual hypothesis holds based on given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation. However, existing studies are mainly restricted to unstructured evidence (e.g., natural language sentences, documents, and news), while verification against structured evidence, such as tables, graphs, and databases, remains under-explored. This paper specifically aims to study fact verification given semi-structured data as evidence. To this end, we construct a large-scale dataset called TabFact, with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, each labeled as either ENTAILED or REFUTED. TabFact is challenging because it involves both soft linguistic reasoning and hard symbolic reasoning. To address these reasoning challenges, we design two different models: Table-BERT and the Latent Program Algorithm (LPA). Table-BERT leverages a state-of-the-art pre-trained language model to encode the linearized tables and statements into continuous vectors for verification. LPA parses statements into programs and executes them against the tables to obtain a binary value for verification. Both methods achieve similar accuracy but still lag far behind human performance.