arXiv:1912.09713v2 [cs.LG] 25 Jun 2020
Published as a conference paper at ICLR 2020

MEASURING COMPOSITIONAL GENERALIZATION: A COMPREHENSIVE METHOD ON REALISTIC DATA

Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee & Olivier Bousquet
Google Research, Brain Team
{keysers,schaerli,nkscales,hylke,danielfurrer,sergik,nikola,sinopalnikov,lukstafi,ttihon,tsar,wangxiao,marcvanzee,obousquet}@google.com

ABSTRACT

State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.

1 INTRODUCTION

Human intelligence exhibits systematic compositionality (Fodor & Pylyshyn, 1988), the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” (Chomsky, 1965).
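The abstract's split criterion — a large compound divergence alongside a small atom divergence between train and test sets — can be illustrated with a minimal sketch. The code below assumes a Chernoff-style divergence between the empirical frequency distributions of atoms (or compounds) in the two sets; the exponent alpha = 0.5 and the exact weighting scheme are illustrative assumptions, not a definitive reproduction of the paper's method:

```python
from collections import Counter


def _normalize(counts):
    """Turn raw occurrence counts into an empirical probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}


def divergence(train_items, test_items, alpha=0.5):
    """Chernoff-style divergence D = 1 - sum_k p_k^alpha * q_k^(1-alpha).

    Returns 0.0 when the two empirical distributions are identical and
    1.0 when their supports are disjoint.  `alpha` controls how strongly
    rare items are weighted (0.5 is a symmetric, illustrative choice).
    """
    p = _normalize(Counter(train_items))
    q = _normalize(Counter(test_items))
    support = set(p) | set(q)
    coeff = sum(
        (p.get(k, 0.0) ** alpha) * (q.get(k, 0.0) ** (1 - alpha))
        for k in support
    )
    return 1.0 - coeff


# Same atom distribution in train and test -> atom divergence near 0.
atoms_train = ["who", "direct", "film", "who", "direct"]
atoms_test = ["who", "direct", "film", "who", "direct"]
print(divergence(atoms_train, atoms_test))  # -> 0.0

# Disjoint compound sets -> compound divergence of 1.
compounds_train = ["who-direct", "direct-film"]
compounds_test = ["who-produce", "produce-film"]
print(divergence(compounds_train, compounds_test))  # -> 1.0
```

Under this sketch, a benchmark split would be searched for that drives the compound divergence toward 1 while keeping the atom divergence near 0, matching the construction principle the abstract describes.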