A Bibliography of Publications of Thomas Ulrich Zeugmann

Thomas Ulrich Zeugmann
Associate Professor
Research Institute of Fundamental Information Science
Fukuoka 812
Japan
Tel: ?n/a?
FAX: ?n/a?
E-mail: thomas@rifis.kyushu-u.ac.jp (Internet)

02 February 2019
Version 1.17

Abstract

This bibliography records publications of Thomas Ulrich Zeugmann.

Title word cross-reference

δ [AZ08]. k [RZ98e, RZ98g]. Z/pα [Zeu89b].

-delay [AZ08]. -Variable [RZ98e, RZ98g].

’89 [Jan89]. ’93 [STA94].

10th [EFW93]. 16th [MT99]. 1st [DJS90, DJS91]. 24th [JMSZ16]. 2nd [BJS93, Vit95, Zeu89a].
3rd [FC90]. 4th [AJ94, HS98]. 5th [AJ94, Hau92]. 60th [CSZZ08]. 6th [JSZ95a, Pit93].
7th [AS96]. 8th [LM97, Yao95]. 9th [RSWZ98a].

& [Zeu92a].

A-posteriori [Zeu83b]. ability [SZ97]. abstract [KZ89c]. ACM [FC90, Hau92, Pit93].
Across [ZL94, ZL95]. Active [FZ14]. AII [Jan89]. AII’92 [Jan92]. AII’94 [AJ94].
Akademgorodok [BBP96]. Algorithm [Zeu95, Zeu98]. Algorithmic [AJ94, AS96, JMSZ16,
JL95, JSZ95a, LM97, RSWZ98a, Zeu97, BZ09, HSVZ13, BDCZ07, BSVZ14, CBRZ06, GTZ10,
KSZ14]. algorithmischen [Zeu83a]. Algorithmisches [Zeu93b, Zeu93a].
Algorithms [RZ98a, Zeu89a, Zeu90b,

Zeu92a, Zeu94a, Zeu94b, FZ14, RZ99b]. [LZK92, ZLK92, JZ10a]. classes almost [KZ85]. ALT [JZ12, SZ02, Zeu84a]. Classification [GTZ10, JMSZ16, CBRZ06]. ALT’94 [WSZ94]. Classifying [AJ94]. ALT’95 [JSZ95a]. ALT’96 [AS96]. [SWZ97, WSZ93, WSZ95]. Clustering ALT’97 [LM97]. ALT’98 [RSWZ98a]. [IZZ10]. Co [FZ95, FZ96]. Co-Learning Ames [HS98]. amount [FZ11, JSZ12]. [FZ95]. coherent [AZ08]. Colloquium Analogical [AJ94, Jan87, Jan92, Jan89]. [HS98]. COLT [Zeu94c]. COLT’90 [FC90]. Analysis [RZ99a, RZ16, Zeu89a, Zeu94a, comparators [GJSZ18]. Complete Zeu94b, Zeu95, Zeu98, RZ99c]. Analyzing [RZ99a, RZ99c]. complexity [RZ98a, RZ99b]. Andrei [BBP96]. Annual [FZP13, Zeu18]. compression [IZZ10]. [BM98, EFW93, FC90, Hau92, Pit93]. Computation [RZ16, CDG89]. Anomalies [GKP+95, GLZ00, KGZ+90]. Computational Approach [LZ92d]. approximations [BM98, FC90, Hau92, Pit93, STA94, Vit95]. [LGZ05, SZ99, SZ02]. Artificial [Yao95]. Computations [Zeu92b, Zeu89b]. Asking [ERS+97a, ERS+97c, ERS+01]. Computer [EFW93, MT99, Rov90, Zeu89a]. Aspects [EFW93, MT99]. August Computing [Zeu90c]. Concept [CDG89, Rov90]. Australia [AS96]. [CJLZ97, CJLZ99]. concepts Australian [Yao95]. automata [GLZ00, LGZ05]. Conference [FZP13, SZ97]. Average [BM98, BBP96, CDG89, JMSZ16, Pit93, [ERS+97a, RZ97, RZ98a, RZ98b, RZ99a, RSWZ98a, Vit95, Yao95]. conjecture RZ00, RZ98e, RZ98f, Zeu94a, Zeu94b, Zeu95, [Zeu87]. Conjunctive [RZ98a, RZ99b]. Zeu98, ERS+97c, ERS+01, RZ98c, RZ98d, Consistency [WZ94b, WZ95a]. Consistent RZ99b, RZ99c, RZ98g]. Average-Case [AZ08]. Constraints [LZ94a]. correct [RZ98a, RZ99a, RZ00, Zeu94a, Zeu94b, [KZ85]. Cruz [Pit93]. Czechoslovakia Zeu95, Zeu98, RZ99b, RZ99c]. [Rov90].

Baase [Zeu89a]. Bansk´a [Rov90]. Data [CJLZ97, CJLZ99, FZ95, FZ96, LZ95a, Barcelona [Vit95]. Barzdin [Zeu87]. LZ96b, RZ98e, RZ98f, TZ95a, WZ94a, Based [JL95]. be IZZ10, JSZ12, LZZ08, RZ98g, TZ95b]. [WZ92, WZ94a, WZ95b, Zeu03]. Behavior December [DJS91]. dedicated [RZ98a, RZ99b]. birthday [CSZZ08]. [JMSZ16, CSZZ08]. delay [AZ08]. Boundaries [ZL94, ZL95]. Bounded Demands [LZ95c, LZ95e, LZ95d]. [CJLZ97, CJLZ99, LZ93e, LZ93c, GJSZ18, Dependence [LZ93b, LZ93a]. Design LZ92a, LZ93f]. Bystrica [Rov90]. [Zeu89a]. deterministic [FZP13, SZ97]. developments [BZ09]. Dezember [BJS93]. California [Pit93]. Can distance [IZZ10]. done [Zeu03]. Driven [WZ95b, Zeu03, WZ92]. Carlo [KZ89b]. [LZ94b, LZ96a]. Dual Case [RZ98a, RZ99a, RZ00, Zeu94a, Zeu94b, [ZLK92, LZK92, LZK96, ZLK95]. Zeu95, Zeu98, RZ99b, RZ99c]. Castle [AJ94, Jan89]. Changes [LZ93e, LZ93c, ed [Zeu89a]. Editor [JSZ95b, RSWZ98b, LZ95c, BZ05, LZ92a, LZ93f, LZ95e]. Zeu97, BSVZ14, GTZ10, KSZ14, Zeu18]. Characterization [LZ92c, LZ94a]. Editors’ [ACZ16, JMSZ16, HSVZ13]. Characterizations effective [Zeu84b]. efficiency [LZ95d]. [Zeu83b, ZLK92, ZLK95]. Class Efficient [ERS+96, WZ95b]. Efficiently 3

[ERS+97a, RZ98e, RZ98f, WZ94a, ERS+97c, [WZ94a]. Impact [LZ93d]. Improved ERS+97b, ERS+01, RZ98g, WZ92, Zeu03]. [Zeu89b]. Incremental Eleventh [BM98]. Erasing [CJLZ97, CJLZ99, LZ95a, LZ95b, LZ96b]. [LWZ96a, LWZ96b, JKL+00, LWZ96c]. Independence [LZ93d]. Independent Error [KZ91]. Ershov [BBP96]. Eurocolt [LZ94b, LZ96a]. indexed [LZZ08]. [STA94]. EuroColt’95 [Vit95]. European Indistinguishability [JZ08]. Inductive [Vit95]. everywhere [KZ85]. Examples [AJ94, BJS93, DJS90, Jan92, KZ85, KZ91, [ERS+96]. expressible [JZ09, JZ10b]. LGZ05, TZ95a, Zeu81, Zeu83b, Zeu83d, extended [KZ89c]. Zeu90a, Zeu91, Zeu06b, CSZZ08, DJS91, GJSZ18, Jan87, Jan89, TZ95b]. Inference Factors [Zeu92b]. families [LZZ08]. Fast [AJ94, HS98, Jan92, KZ89b, KZ91, KZ89a, [Zeu81, Zeu90c]. Fastest TZ95a, Zeu81, Zeu83b, Zeu83d, Zeu90a, [Zeu82, Zeu83d, Zeu83c, Zeu84a]. FCT’89 CSZZ08, GJSZ18, Jan87, Jan89, KZ85, [CDG89]. February [EFW93]. feedback KZ89c, LGZ05, TZ95b, Zeu91, Zeu06b]. [BZ06a, BZ11]. final [JL95]. Finite [RZ98e, influence [KKFZ14]. influenza [IZZ10]. RZ98f, Zeu82, RZ98g, SZ97, Zeu83c, Zeu06a]. Informant [LZ92d, LZ94a]. Informatics first [JZ08, JZ10b, JZ12]. first-order [BBP96]. Information [WZ95b, WZ92]. [JZ08, JZ10b, JZ12]. Foreword Intelligence [Yao95]. Internat [BDCZ07, CSZZ08, CBRZ06, Zeu18, ACZ16, [AJ94, Jan92]. International BSVZ14, GTZ10, HSVZ13, JMSZ16]. [AS96, BBP96, BJS93, CDG89, DJS90, formal [JSZ12]. formulae [JZ12]. Forty HS98, JMSZ16, Jan89, JSZ95a, LM97, [CSZZ08]. foundations [Rov90]. four MT99, RSWZ98a, DJS91]. Introduction [JZ10b]. Freivalds [Zeu16]. Frequency [JSZ95b, RSWZ98b, Zeu89a, KSZ14]. Iowa [KZ89b, KZ91, FZP13]. Fukuoka [JSZ95a]. [HS98]. Issue Functions [Zeu97, HSVZ13, JMSZ16, Zeu18]. [Zeu83b, FZ11, FZ14, JKL+00, JKWZ01, JKWZ03, SZ99, SZ02, Zeu84a, ZZ08]. Japan [JSZ95a, LM97]. Joint [Yao95]. Fundamentals [CDG89]. Funktionen June [BBP96]. [Zeu93a, Zeu93b]. k-Variable [RZ98f]. Karlsruhe GDR [Jan89]. generation [Zeu84a]. [DJS90, DJS91]. Knowledge [JL95]. [AJ94, BJS93, DJS90, DJS91, Knowledge-Based [JL95]. EFW93, MT99, RSWZ98a]. GOSLER [JL95]. Grammatical [HS98]. Guest Lange [Zeu95, Zeu98]. Language [BSVZ14, GTZ10, HSVZ13, KSZ14, Zeu97, [LZ92b, LZ92c, LZ92d, LZ93e, LZ93a, LZ94a, ACZ16, JMSZ16, Zeu18]. Guided RZ00, ZLK92, Zeu94a, Zeu94b, Zeu95, Zeu98, [ZL94, ZL95]. LZK92, LZ93g, LZK96, ZLK95, Zeu06b]. Languages Having [Zeu92b, Zeu84a]. help [GJSZ18]. [ERS+96, ERS+97a, FZ95, FZ96, LZ93d, Highly [Zeu92b]. Hungary [CDG89]. LZ93c, LZ93b, LZ94b, RZ97, RZ98b, RZ98e, hypotheses [LZ93a]. Hypothesis [LZ93b]. RZ98f, SWZ97, TZ95a, WSZ93, WSZ94, ZL94, ZL95, ERS+97c, ERS+97b, ERS+01, ICGI’98 [HS98]. Identification JKL+00, JSZ12, LZ92a, LZ93f, LZ96a, LZZ08, [KZ89b, KZ91, Zeu82, Zeu83c]. Ignoring RZ98c, RZ98d, RZ98g, TZ95b, WSZ95]. 4

Large [Zeu90c]. Learn [WZ94a]. Nonboundability [Zeu84b]. Learnability nonconstructivity [FZ11, JSZ12]. [LZ93d, LZ93b, GJSZ18, SZ99]. Learner Nonmonotonic [RZ00]. learners [BJS93, DJS90, DJS91, LZ93g]. normalized [BZ05, BZ06a, BZ06b, BZ11]. Learning [IZZ10]. note [JZ10a]. Novosibirsk [AJ94, AS96, BM98, CJLZ97, CJLZ99, [BBP96]. Number [LZ93e, LZ93c, Zeu92b]. CJR+03, CJR+06, ERS+96, ERS+97c, ERS+97b, ERS+97a, ERS+01, FZ95, FZ96, Obituary [Zeu16]. October [AJ94, Jan89]. FC90, GKP+95, GLZ00, GTZ10, Hau92, One [ERS+96, ERS+97a, KZ91, RZ97, JKL+00, JKWZ01, JMSZ16, JL95, JSZ95a, RZ98b, RZ00, ERS+97c, ERS+97b, ERS+01, KGZ+90, LZ92a, LZ92b, LZ92c, LZ92d, JZ09, RZ98c, RZ98d]. One-Sided [KZ91]. LZ93e, LZ93f, LZ93c, LZ94a, LZ94b, LZ95a, One-Variable [ERS+96, ERS+97a, RZ97, LZ95b, LZ96b, LWZ96a, LWZ96b, LWZ96c, RZ98b, RZ00, ERS+97c, ERS+97b, ERS+01, LZZ08, LM97, Pit93, RZ97, RZ98a, RZ98c, RZ98c, RZ98d]. only [WZ94a, Zeu92b]. RZ98d, RZ98b, RZ99a, RSWZ98a, RZ98e, Open [Zeu90a, Zeu91]. operators RZ98g, RZ98f, STA94, SZ02, Vit95, WZ94b, [Zeu84b, Zeu84a]. Optimal WZ95a, WZ95b, ZLK92, Zeu94a, Zeu94b, [RZ00, Zeu90a, Zeu92a, Zeu91]. Optimizers ZL94, ZL95, Zeu95, Zeu97, Zeu98, ZZ08, [Zeu88]. Order [LZ93d, JZ08, JZ10b, JZ12]. AZ08, BDCZ07, BSVZ14, CBRZ06, FZ11, Otzenhausen [RSWZ98a]. FZ14, HSVZ13, JKWZ03, JSZ12, KSZ14, KKFZ14, LZK92, LZ93a, LZ93g, LZK96, Parallel [ERS+97a, Zeu90c, Zeu90b, LZ96a, RZ99b, RZ99c, SZ97, WZ92, ZLK95, Zeu92b, ERS+97c, ERS+01, Zeu89b]. Zeu03, Zeu06b, Zeu06a, Zeu18]. Lernen Pattern [ERS+96, ERS+97a, RZ97, RZ98b, [Zeu93a, Zeu93b]. limit [Zeu03, Zeu06a]. RZ00, RZ98e, RZ98f, Zeu94a, Zeu94b, Zeu95, Linear [RZ97, RZ98b, RZ98c, RZ98d]. Zeu98, ERS+97c, ERS+97b, ERS+01, RZ98c, Logic [BJS93, DJS90, DJS91, JZ08]. RZ98d, RZ98g]. patterns [CJR+03, CJR+06]. Pennsylvania [Hau92]. M¯artins [Zeu16]. Madison [BM98]. March Perspectives [BBP96]. Pittsburgh [Vit95]. Mathematical [RZ16, Rov90]. [Hau92]. Polynomial Mathematics [RZ16]. may [WZ94a]. [Zeu90c, CJR+03, CJR+06]. Positive Memorial [BBP96]. memory [SZ97]. [ERS+96, FZ95, FZ96, LZ95a, LZ96b, RZ98e, memoryless [BZ06a]. Mind [LZ93e, LZ93c, RZ98f, TZ95a, JSZ12, LZZ08, RZ98g, TZ95b]. LZ95c, BZ05, LZ92a, LZ93f, LZ95e]. Mining posteriori [Zeu83b]. Power [LZ92b, Zeu88]. [CJLZ97, CJLZ99]. Modeling [LZ95a]. Powers [Zeu90c]. Predicates modulo [Zeu92b]. Monomials [SWZ97, WSZ93, WSZ94, WSZ95]. Preface [RZ99a, RZ99c]. Monotonic [JMSZ16, Zeu18]. Preserving [LZ92b, LZ92c, LZ92d, LZ93g, LZK96, [ZLK92, LZK92]. Prime [Zeu92b]. ZLK92, LZK92, ZLK95]. Monotonicity Probabilistic [KZ91]. Problems [LZ94a, LZ95c, LZ95e, LZ95d]. Monte [Zeu90a, Zeu91]. Proc [KZ89b]. Monte-Carlo [KZ89b]. Much [BM98, FC90, Hau92, Pit93]. proceedings [WZ95b, WZ92]. [CDG89, DJS91, EFW93, Jan87, Jan89, Rov90]. processes [KKFZ14]. program Nearly [Zeu92a]. non [SZ99, SZ02]. [Zeu84a]. Programmen [Zeu83a]. non-recursive [SZ99, SZ02]. Programs [Zeu81, Zeu82, Zeu83d, Zeu90a, 5

KZ85, Zeu83c, Zeu91]. properties [AS96]. Symposium [EFW93, MT99]. [JZ09, JZ10b]. Synthese [Zeu83a]. Synthesis [Zeu83d]. System [BBP96]. Systems [JL95]. Szeged quantifier [JZ09]. quantifiers [JZ10b]. [CDG89]. Queries [ERS+97a, GKP+95, ERS+97c, ERS+01, KGZ+90]. Query [KZ89a, KZ89c]. Teaching [BZ05, BZ06a, BZ06b, BZ11, BZ09]. Teams R¯usins [Zeu16]. Ramsey [JZ10a]. [GKP+95, KGZ+90]. technology [KKFZ14]. randomized [BZ06a, BZ06b, BZ11]. testability [JZ10a]. Testable [JZ12, JZ09]. Rearrangement [LZ94b, LZ96a]. their [LZ92c]. Theoretical Rearrangement-Independent [EFW93, MT99]. Theory [LZ94b, LZ96a]. Recursive [AJ94, AS96, BM98, FC90, GTZ10, Hau92, [FZ95, FZ96, LZ93d, LZ93c, LZ93b, LZ94b, JMSZ16, JSZ95a, LM97, Pit93, RSWZ98a, WSZ93, Zeu83b, Zeu84a, Zeu88, ZL94, ZL95, STA94, Vit95, Zeu97, BDCZ07, BSVZ14, FZ11, FZ14, GLZ00, JKWZ01, LZ92a, LZ93f, CBRZ06, CDG89, HSVZ13, KSZ14, Zeu18]. LZ96a, LGZ05, LZZ08, SZ99, SZ02, WSZ95, Tight [RZ99a, RZ99c]. Time ZZ08]. Refined [KZ89c, KZ89a, LZ95b]. [RZ97, RZ98b, Zeu95, Zeu98, CJR+03, refutably [JKWZ01, JKWZ03]. regular CJR+06, RZ98c, RZ98d]. To o [CJR+03, CJR+06]. Reingold [Zeu92a]. [WZ92, WZ95b]. Total Reinhardsbrunn [AJ94, BJS93, Jan89]. [Zeu95, Zeu98, Zeu84b]. Tour [ZL94, ZL95]. Relational [JZ09]. Relations [KZ89b]. Trading [LZ95e, LZ95c, LZ95d]. Trier Reliable [KZ89b, KZ91]. reliably [KZ85]. [MT99]. Two [TZ95a, TZ95b]. Types Report [Zeu94c, JL95]. respect [LZ92c]. [Zeu84a, Zeu95, Zeu98]. restricted [BZ05]. Review [Zeu89a, Zeu92a]. Rietz [Jan87]. ultrametric [FZ14]. Unbounded [Zeu92a]. ring [Zeu89b]. Rolf [CSZZ08]. Russia uniform [SZ99]. Unifying [LZ92d]. [BBP96]. universal [JZ09]. Untestable [JZ10b, JZ12]. USA [BM98, HS98, Pit93]. Santa [Pit93]. schnellen [Zeu83a]. Science [EFW93, MT99, Rov90]. Searching Variable [ERS+96, ERS+97a, RZ97, RZ98b, [Zeu92a]. Second [BBP96]. Sendai [LM97]. RZ00, RZ98e, RZ98f, ERS+97c, ERS+97b, Set [LZ94b, LZ96a]. Set-Driven ERS+01, RZ98c, RZ98d, RZ98g]. [LZ94b, LZ96a]. Shen [Zeu92a]. shot Variations [TZ95a, TZ95b]. various [GJSZ18]. Sided [KZ91]. size [FZP13]. [LZ94a]. verifiers [GJSZ18]. versus Small [Zeu92b]. Space [LZ93b, LZ93a]. [LZ93g, LZ95e, LZ95c, LZ95d, Zeu84a]. Spain [Vit95]. Special Very [ERS+97a, Zeu90c, ERS+97c, [Zeu97, JMSZ16, Zeu18, HSVZ13]. ERS+97b, ERS+01]. Via Sprachen [Zeu93a, Zeu93b]. STACS [GKP+95, KGZ+90]. virus [IZZ10]. [EFW93]. STACS’99 [MT99]. standardisers [GJSZ18]. stochastic Way [WZ94a]. Wendisch [Jan87]. [Zeu06a]. Stochastically Wendisch-Rietz [Jan87]. Wiehagen [RZ98e, RZ98f, RZ98g]. strategies [KZ85]. [CSZZ08, Zeu95, Zeu98]. Wisconsin subclass [CJR+03, CJR+06]. Survey [BM98]. without [BZ06a]. working [KZ85]. [Zeu90a, LZZ08, Zeu91, ZZ08]. Sydney Workshop REFERENCES 6

[AJ94, AS96, BJS93, DJS90, FC90, Hau92, Jan89, Jan92, JSZ95a, LM97, DJS91].
Würzburg [EFW93]. years [CSZZ08]. Zur [Zeu83a].

References

Auer:2016:GEF

[ACZ16] Peter Auer, Alexander Clark, and Thomas Zeugmann. Guest Editors’ foreword. Theoretical Computer Science, 650(??):1–3, October 18, 2016. CODEN TCSCDI. ISSN 0304-3975 (print), 1879-2294 (electronic). URL http://www.sciencedirect.com/science/article/pii/S0304397516303723.

Arikawa:1994:ALT

[AJ94] S. Arikawa and K. P. Jantke, editors. Algorithmic Learning Theory, 4th Internat. Workshop on Analogical and Inductive Inference, AII’94, 5th Internat. Workshop on Algorithmic Learning Theory, ALT’94, Reinhardsbrunn Castle, Germany, October 1994, volume 872 of Lecture Notes in Artificial Intelligence. Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, October 1994.

Arikawa:1996:ALT

[AS96] S. Arikawa and A. Sharma, editors. Algorithmic Learning Theory, 7th International Workshop, ALT’96, Sydney, Australia, volume 1160 of Lecture Notes in Artificial Intelligence. Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, October 1996.

Akama:2008:CCL

[AZ08] Yohji Akama and Thomas Zeugmann. Consistent and coherent learning with δ-delay. Inform. Comput., 206(11):1362–1374, 2008. CODEN INFCEC. ISSN 0890-5401 (print), 1090-2651 (electronic).

Bjorner:1996:PSI

[BBP96] D. Bjørner, M. Broy, and I. V. Pottosin, editors. Perspectives of System Informatics, Second International Andrei Ershov Memorial Conference, Akademgorodok, Novosibirsk, Russia, June 1996, volume 1181 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, October 1996.

Ben-David:2007:FAL

[BDCZ07] Shai Ben-David, John Case, and Thomas Zeugmann. Foreword [Algorithmic learning theory]. Theoretical Computer Science, 382(3):167–169, 2007. CODEN TCSCDI. ISSN 0304-3975 (print), 1879-2294 (electronic).

Brewka:1993:NIL

[BJS93] G. Brewka, K. P. Jantke, and P. H. Schmitt, editors. Nonmonotonic and Inductive Logic, 2nd International Workshop, Reinhardsbrunn, Dezember 1991, Germany, volume 659 of Lecture Notes in Artificial Intelligence. Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1993.

Bartlett:1998:CPA

[BM98] P. Bartlett and Y. Mansour, editors. Proc. of the Eleventh Annual Conference on Computational Learning Theory, Madison, Wisconsin, USA. ACM Press, New York, NY 10036, USA, 1998.

Bshouty:2014:GEF

[BSVZ14] Nader H. Bshouty, Gilles Stoltz, Nicolas Vayatis, and Thomas Zeugmann. Guest editors’ foreword [Algorithmic learning theory]. Theoretical Computer Science, 558:1–4, 2014. CODEN TCSCDI. ISSN 0304-3975 (print), 1879-2294 (electronic).

Balbach:2005:TLR

[BZ05] Frank J. Balbach and Thomas Zeugmann. Teaching learners with restricted mind changes. In Algorithmic learning theory, volume 3734 of Lecture Notes in Comput. Sci., pages 474–489. Springer-Verlag, Berlin, Germany / Heidelberg, Germany / London, UK / etc., 2005.

Balbach:2006:TMR

[BZ06a] Frank J. Balbach and Thomas Zeugmann. Teaching memoryless randomized learners without feedback. In Algorithmic learning theory, volume 4264 of Lecture Notes in Comput. Sci., pages 93–108. Springer-Verlag, Berlin, Germany / Heidelberg, Germany / London, UK / etc., 2006.

Balbach:2006:TRL

[BZ06b] Frank J. Balbach and Thomas Zeugmann. Teaching randomized learners. In Learning theory, volume 4005 of Lecture Notes in Comput. Sci., pages 229–243. Springer-Verlag, Berlin, Germany / Heidelberg, Germany / London, UK / etc., 2006.

Balbach:2009:RDA

[BZ09] Frank J. Balbach and Thomas Zeugmann. Recent developments in algorithmic teaching. In Language and automata theory and applications, volume 5457 of Lecture Notes in Comput. Sci., pages 1–18. Springer-Verlag, Berlin, Germany / Heidelberg, Germany / London, UK / etc., 2009.

Balbach:2011:TRL

[BZ11] Frank J. Balbach and Thomas Zeugmann. Teaching randomized learners with feedback. Inform. Comput., 209(3):296–319, 2011. CODEN INFCEC. ISSN 0890-5401 (print), 1090-2651 (electronic).

Cesa-Bianchi:2006:FAL

[CBRZ06] Nicolò Cesa-Bianchi, Rüdiger Reischuk, and Thomas Zeugmann. Foreword [Algorithmic learning theory (ALT 2002)]. Theoretical Computer Science, 350(1):1–2, 2006. CODEN TCSCDI. ISSN 0304-3975 (print), 1879-2294 (electronic). Held in Lübeck, November 24–26, 2002.

Csirik:1989:FCT pages 234–246. Springer-Verlag, Berlin, Germany / Heidelberg, [CDG89] J. Csirik, Janos Demetrovics, and Germany / London, UK / etc., F. Gecseg, editors. Fundamen- 2003. tals of computation theory: In- Case:2006:LSR ternational Conference, FCT’89, Szeged, Hungary, August 21– [CJR+06] John Case, Sanjay Jain, R¨udiger 25, 1989: proceedings, volume Reischuk, Frank Stephan, and 380 of Lecture Notes in Com- Thomas Zeugmann. Learning puter Science. Springer-Verlag, a subclass of regular patterns Berlin, Heidelberg, New York, in polynomial time. Theoret- Tokyo, 1989. ISBN 0-387-51498- ical Computer Science, 364(1): 8 (West Germany). ISSN 0302- 115–131, 2006. CODEN TC- 9743 (print), 1611-3349 (elec- SCDI. ISSN 0304-3975 (print), tronic). LCCN QA75.5 I5731 1879-2294 (electronic). 1989. DM69.00. Case:2008:FFY Case:1997:ICL [CSZZ08] John Case, Takeshi Shinohara, [CJLZ97] J. Case, S. Jain, S. Lange, Thomas Zeugmann, and Sandra and T. Zeugmann. Incremen- Zilles. Foreword [Forty years of tal concept learning for bounded inductive inference. Dedicated to data mining. Technical Re- the 60th birthday of Rolf Wieha- port DOI-TR-136, Department of gen]. Theoretical Computer Sci- Informatics, Kyushu University, ence, 397(1-3):1–3, 2008. CO- Kyushu, Japan, 1997. DEN TCSCDI. ISSN 0304-3975 Case:1999:ICL (print), 1879-2294 (electronic). [CJLZ99] John Case, Sanjay Jain, Steffen Dix:1990:NIL Lange, and Thomas Zeugmann. [DJS90] J. Dix, K. P. Jantke, and P. H. Incremental concept learning for Schmitt, editors. Nonmonotonic bounded data mining. Inform. and Inductive Logic, 1st Inter- Comput., 152(1):74–110, 1999. national Workshop, Karlsruhe, CODEN INFCEC. ISSN 0890- Germany, volume 543 of Lecture 5401 (print), 1090-2651 (elec- Notes in Artificial Intelligence. tronic). To appear. Springer-Verlag, Berlin, Heidel- Case:2003:LSR berg, New York, Tokyo, 1990. Dix:1991:NIL [CJR+03] John Case, Sanjay Jain, R¨udiger Reischuk, Frank Stephan, and [DJS91] J. Dix, K. P. Jantke, and P. H. Thomas Zeugmann. Learning a Schmitt, editors. Nonmonotonic subclass of regular patterns in and inductive logic: 1st interna- polynomial time. In Algorithmic tional workshop, Karlsruhe, Ger- learning theory, volume 2842 of many, December 4–7, 1990, pro- Lecture Notes in Comput. Sci., ceedings, volume 543 of Lecture REFERENCES 9

Notes in Computer Science and and by asking queries. In Li and Lecture Notes in Artificial Intel- Maruoka [LM97], pages 260–276. ligence. Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, Erlebach:1997:LOVb 1991. ISBN 3-540-54564-6, 0-387- + 54564-6. ISSN 0302-9743 (print), [ERS 97b] Thomas Erlebach, Peter Ross- 1611-3349 (electronic). LCCN manith, Hans Stadtherr, Ange- QA76.63 .N66 1991. Proceedings lika Steger, and Thomas Zeug- of the First International Work- mann. Learning one-variable pat- shop on Nonmonotonic and In- tern languages very efficiently. ductive Inference. S¯urikaisekikenky¯usho K¯oky¯uroku, (992):47–57, 1997. The theory Enjalbert:1993:SAS of computing and its applications (Japanese) (Kyoto, 1997). [EFW93] P. Enjalbert, A. Finkel, and K. W. Wagner, editors. STACS Erlebach:1997:LOV 93: 10th Annual Symposium on Theoretical Aspects of Computer [ERS+97c] Thomas Erlebach, Peter Ross- Science, W¨urzburg, Germany, manith, Hans Stadtherr, Ange- February 25–27, 1993: proceed- lika Steger, and Thomas Zeug- ings, volume 665 of Lecture Notes mann. Learning one-variable in Computer Science. Springer- pattern languages very efficiently Verlag, Berlin, Heidelberg, New on average, in parallel, and by York, Tokyo, 1993. ISBN 0-387- asking queries. In Algorithmic 56503-5 (U.S.). ISSN 0302-9743 learning theory (Sendai, 1997), (print), 1611-3349 (electronic). volume 1316 of Lecture Notes LCCN QA267.A1 L43 no.665. in Comput. Sci., pages 260– 276. Springer-Verlag, Berlin, Ger- Erlebach:1996:ELO many / Heidelberg, Germany / [ERS+96] T. Erlebach, P. Rossmanith, London, UK / etc., 1997. H. Stadtherr, A. Steger, and Erlebach:2001:LOV T. Zeugmann. Efficient learning of one-variable pattern languages [ERS+01] Thomas Erlebach, Peter Ross- from positive examples. Techni- manith, Hans Stadtherr, Ange- cal Report DOI-TR-128, Depart- lika Steger, and Thomas Zeug- ment of Informatics, Kyushu Uni- mann. Learning one-variable pat- versity, Kyushu, Japan, 1996. tern languages very efficiently on Erlebach:1997:OP average, in parallel, and by asking queries. Theoretical Computer [ERS+97a] T. Erlebach, P. Rossmanith, Science, 261(1):119–156, 2001. H. Stadtherr, A. Steger, and CODEN TCSCDI. ISSN 0304- T. Zeugmann. Learning one- 3975 (print), 1879-2294 (elec- variable pattern languages very tronic). Algorithmic learning the- efficiently on average, in parallel, ory (Sendai, 1997). REFERENCES 10

Fulk:1990:CPA pages 246–257. Springer-Verlag, Berlin, Germany / Heidelberg, [FC90] M. Fulk and J. Case, editors. Germany / London, UK / etc., COLT’90, Proc. of the 3rd An- 2014. nual ACM Workshop on Com- putational Learning Theory. Mor- Freivalds:2013:SCD gan Kaufmann, San Mateo, 1990. Freivalds:1995:CLR [FZP13] R¯usi¸nˇs Freivalds, Thomas Zeug- mann, and Grant R. Pogosyan. [FZ95] R. Freivalds and T. Zeugmann. On the size complexity of de- Co-learning of recursive lan- terministic frequency automata. guages from positive data. Tech- In Language and automata theory nical Report RIFIS-TR-CS-110, and applications, volume 7810 of RIFIS, Kyushu University 33, Lecture Notes in Comput. Sci., Kyushu, Japan, 1995. pages 287–298. Springer-Verlag, Berlin, Germany / Heidelberg, Freivalds:1996:CLR Germany / London, UK / etc., [FZ96] R. Freivalds and T. Zeugmann. 2013. Co–learning of recursive lan- guages from positive data. In Gao:2018:HBS Bjørner et al. [BBP96], pages 122–133. [GJSZ18] Ziyuan Gao, Sanjay Jain, Frank Stephan, and Thomas Zeug- Freivalds:2011:ANL mann. On the help of bounded [FZ11] R¯usi¸nˇs Freivalds and Thomas shot verifiers, comparators and Zeugmann. On the amount standardisers for learnability in of nonconstructivity in learning inductive inference. In Algorith- recursive functions. In The- mic Learning Theory 2018, vol- ory and applications of models ume 83 of Proc. Mach. Learn. of computation, volume 6648 of Res. (PMLR), page 25. Proceed- Lecture Notes in Comput. Sci., ings of Machine Learning Re- pages 332–343. Springer-Verlag, search PMLR, ????, 2018. Berlin, Germany / Heidelberg, Germany / London, UK / etc., Gasarch:1995:LQT 2011. + Freivalds:2014:ALR [GKP 95] William I. Gasarch, Efim B. Kin- ber, Mark G. Pleszkoch, Carl H. [FZ14] R¯usi¸nˇs Freivalds and Thomas Smith, and Thomas Zeugmann. Zeugmann. Active learning of Learning via queries with teams recursive functions by ultramet- and anomalies. Fundamenta ric algorithms. In SOFSEM Informaticae, 23(1):67–89, Jan- 2014: theory and practice of com- uary 1995. CODEN FUMAAJ. puter science, volume 8327 of ISSN 0169-2968 (print), 1875- Lecture Notes in Comput. Sci., 8681 (electronic). REFERENCES 11

Grieser:2000:LRC Hutter:2013:GEF

[GLZ00] Gunter Grieser, Steffen Lange, [HSVZ13] Marcus Hutter, Frank Stephan, and Thomas Zeugmann. Learn- Vladimir Vovk, and Thomas ing recursive concepts with Zeugmann. Guest Editors’ fore- anomalies. In Algorithmic learn- word [Special issue on algorith- ing theory (Sydney, 2000), vol- mic learning theory]. Theoretical ume 1968 of Lecture Notes in Computer Science, 473:1–3, 2013. Comput. Sci., pages 101–115. CODEN TCSDIQ. ISSN 0304- Springer-Verlag, Berlin, Ger- 3975 (print), 1879-2294 (elec- many / Heidelberg, Germany / tronic). London, UK / etc., 2000. Ito:2010:CNC Gyorfi:2010:GEF [IZZ10] Kimihito Ito, Thomas Zeugmann, [GTZ10] L´aszl´o Gy¨orfi, Gy¨orgy Tur´an, and Yu Zhu. Clustering the nor- and Thomas Zeugmann. Guest malized compression distance for editors’ foreword [Algorithmic influenza virus data. In Algo- Learning Theory (ALT 2008)]. rithms and applications, volume Theoretical Computer Science, 6060 of Lecture Notes in Comput. 411(29–30):2629–2631, 2010. CO- Sci., pages 130–146. Springer- DEN TCSCDI. ISSN 0304-3975 Verlag, Berlin, Germany / Hei- (print), 1879-2294 (electronic). delberg, Germany / London, Held in , October 13– UK / etc., 2010. 16, 2008. Jantke:1987:AII Haussler:1992:PAA [Jan87] K. P. Jantke, editor. Analogical [Hau92] D. Haussler, editor. Proc. of the and inductive inference: proceed- 5th Annual ACM Workshop on ings [Wendisch-Rietz, 1986], vol- Computational Learning Theory, ume 265 of Lecture Notes in Com- Pittsburgh, Pennsylvania.ACM puter Science. Springer-Verlag, Press, New York, NY 10036, Berlin, Heidelberg, New York, USA, 1992. Tokyo, 1987. ISBN 0-387-18081- 8 (paperback). ISSN 0302-9743 Honavar:1998:ICGI (print), 1611-3349 (electronic). [HS98] V. Honavar and G. Slutzki, LCCN Q334 .I5771 1986. editors. Grammatical Infer- Jantke:1989:AII ence, 4th International Collo- quium, ICGI’98, Ames, Iowa, [Jan89] K. P. Jantke, editor. Analogical USA, volume 1433 of Lecture and inductive inference: Interna- Notes in Artificial Intelligence. tional Workshop AII ’89, Rein- Springer-Verlag, Berlin, Heidel- hardsbrunn Castle, GDR, Octo- berg, New York, Tokyo, July ber, 1989: proceedings, volume 1998. 397 of Lecture Notes in Computer REFERENCES 12

Science and Lecture Notes in Jain:2003:LFR Artificial Intelligence. Springer- Verlag, Berlin, Heidelberg, New [JKWZ03] Sanjay Jain, Efim Kinber, Rolf York, Tokyo, 1989. ISBN 3-540- Wiehagen, and Thomas Zeug- 51734-0 (Berlin), 0-387-51734-0 mann. On learning of functions (New York). ISSN 0302-9743 refutably. Theoretical Computer (print), 1611-3349 (electronic). Science, 298(1):111–143, 2003. LCCN Q334 .I577 1989. CODEN TCSCDI. ISSN 0304- 3975 (print), 1879-2294 (elec- Jantke:1992:AII tronic). Selected papers in hon- our of Setsuo Arikawa. [Jan92] K. P. Jantke, editor. Analogi- Jantke:1995:ALK cal and Inductive Inference, In- ternat. Workshop AII’92, volume [JL95] K. P. Jantke and S. Lange, 642 of Lecture Notes in Artifi- editors. Algorithmic Learning cial Intelligence. Springer-Verlag, for Knowledge-Based Systems, Berlin, Heidelberg, New York, GOSLER final report, volume Tokyo, 1992. 961 of Lecture Notes in Artifi- cial Intelligence. Springer-Verlag, Jain:2000:LLF Berlin, Heidelberg, New York, Tokyo, 1995. [JKL+00] Sanjay Jain, Efim Kinber, Stef- fen Lange, Rolf Wiehagen, and Jain:2016:GEF Thomas Zeugmann. Learn- [JMSZ16] Sanjay Jain, R´emi Munos, Frank ing languages and functions by Stephan, and Thomas Zeug- erasing. Theoretical Computer mann. Guest Editors’ fore- Science, 241(1-2):143–189, 2000. word [preface to the special is- CODEN TCSCDI. ISSN 0304- sue dedicated to the 24th Interna- 3975 (print), 1879-2294 (elec- tional Conference on Algorithmic tronic). Algorithmic learning the- Learning Theory (ALT 2013)]. ory (Sydney, 1996). Theoretical Computer Science, Jain:2001:LRF 620(??):1–3, March 21, 2016. CODEN TCSCDI. ISSN 0304- [JKWZ01] Sanjay Jain, Efim Kinber, Rolf 3975 (print), 1879-2294 (elec- Wiehagen, and Thomas Zeug- tronic). URL http://www. mann. Learning recursive func- sciencedirect.com/science/ tions refutably. In Algorith- article/pii/S0304397515009391. mic learning theory (Washing- Held at the National University ton, DC, 2001), volume 2225 of of Singapore, Singapore, October Lecture Notes in Comput. Sci., 6–9, 2013. pages 283–298. Springer-Verlag, Jantke:1995:ALT Berlin, Germany / Heidelberg, Germany / London, UK / etc., [JSZ95a] K. P. Jantke, T. Shinohara, and 2001. T. Zeugmann, editors. Algo- REFERENCES 13

rithmic Learning Theory, 6th In- Jordan:2009:RPE ternational Workshop, ALT’95, Fukuoka, Japan, volume 997 of [JZ09] Charles Jordan and Thomas Lecture Notes in Artificial Intel- Zeugmann. Relational proper- ligence. Springer-Verlag, Berlin, ties expressible with one univer- Heidelberg, New York, Tokyo, sal quantifier are testable. In October 1995. Stochastic algorithms: founda- tions and applications, volume Jantke:1995:EI 5792 of Lecture Notes in Comput. Sci., pages 141–155. Springer- [JSZ95b] K. P. Jantke, T. Shinohara, Verlag, Berlin, Germany / Hei- and T. Zeugmann. Editors’ in- delberg, Germany / London, troduction. In Proceedings of UK / etc., 2009. the 6th International Workshop on Algorithmic Learning The- Jordan:2010:NTR ory, ALT’95, Fukuoka, Japan [JZ10a] Charles Jordan and Thomas [JSZ95a], pages ix–xv. Zeugmann. A note on the testa- bility of Ramsey’s class. In The- Jain:2012:ANL ory and applications of models of computation, volume 6108 of [JSZ12] Sanjay Jain, Frank Stephan, Lecture Notes in Comput. Sci., and Thomas Zeugmann. On pages 296–307. Springer-Verlag, the amount of nonconstructiv- Berlin, Germany / Heidelberg, ity in learning formal languages Germany / London, UK / etc., from positive data. In The- 2010. ory and applications of models of computation, volume 7287 of Jordan:2010:UPE Lecture Notes in Comput. Sci., [JZ10b] Charles Jordan and Thomas pages 423–434. Springer-Verlag, Zeugmann. Untestable properties Berlin, Germany / Heidelberg, expressible with four first-order Germany / London, UK / etc., quantifiers. In Language and au- 2012. tomata theory and applications, Jordan:2008:IFO volume 6031 of Lecture Notes in Comput. Sci., pages 333– 343. Springer-Verlag, Berlin, Ger- [JZ08] Skip Jordan and Thomas Zeug- many / Heidelberg, Germany / mann. Indistinguishability and London, UK / etc., 2010. first-order logic. In Theory and applications of models of compu- Jordan:2012:TUC tation, volume 4978 of Lecture Notes in Comput. Sci., pages 94– [JZ12] Charles Jordan and Thomas 104. Springer-Verlag, Berlin, Ger- Zeugmann. Testable and many / Heidelberg, Germany / untestable classes of first-order London, UK / etc., 2008. formulae. Journal of Computer REFERENCES 14

and System Sciences, 78(5):1557– Kinber:1993:RQI 1578, 2012. CODEN JCSSBM. ISSN 0022-0000 (print), 1090- [KZ89a] E. B. Kinber and T. Zeug- 2724 (electronic). mann. Refined query infer- ence. In Jantke [Jan89], pages Kinber:1990:LQT 148–160. ISBN 3-540-51734- 0 (Berlin), 0-387-51734-0 (New [KGZ+90] E. B. Kinber, W. I. Gasarch, York). ISSN 0302-9743 (print), T. Zeugmann, M. K. Pleszkoch, 1611-3349 (electronic). LCCN and C. H. Smith. Learning via Q334 .I577 1989. queries with teams and anoma- lies. In Fulk and Case [FC90], Kinber:1989:MCI pages 327–337. [KZ89b] Efim Kinber and Thomas Zeug- mann. Monte-Carlo inference Kucevalovs:2014:ITL and its relations to reliable fre- [KKFZ14] Ilja Kucevalovs, Oj¯ars Krasts, quency identification. In Csirik R¯usi¸nˇs Freivalds, and Thomas et al. [CDG89], pages 257–266. Zeugmann. On the influence of ISBN 0-387-51498-8 (West Ger- technology on learning processes. many). ISSN 0302-9743 (print), Parallel Processing Letters, 24(2): 1611-3349 (electronic). LCCN 1440003:1–1440003:17, 2014. CO- QA75.5 I5731 1989. DM69.00. DEN PPLTEE. ISSN 0129-6264. Kinber:1989:RQI

Kivinen:2014:GEI [KZ89c] Efim B. Kinber and Thomas Zeugmann. Refined query infer- [KSZ14] Jyrki Kivinen, Csaba Szepesv´ari, ence (extended abstract). In Ana- and Thomas Zeugmann. Guest logical and inductive inference editors’ introduction [Algorith- (Reinhardsbrunn Castle, 1989), mic learning theory]. Theoretical volume 397 of Lecture Notes in Computer Science, 519:1–3, 2014. Artificial Intelligence, pages 148– CODEN TCSCDI. ISSN 0304- 160. Springer-Verlag, Berlin, Ger- 3975 (print), 1879-2294 (elec- many / Heidelberg, Germany / tronic). London, UK / etc., 1989.

Kinber:1985:IIA Kinber:1991:OSE

[KZ85] Efim B. Kinber and Thomas [KZ91] Efim B. Kinber and Thomas Zeugmann. Inductive infer- Zeugmann. One-sided error ence of almost everywhere cor- probabilistic inductive inference rect programs by reliably working and reliable frequency identifica- strategies. Elektronische Infor- tion. Inform. Comput., 92(2): mationsverarbeitung und Kyber- 253–284, 1991. CODEN IN- netik, 21(3):91–100, 1985. CO- FCEC. ISSN 0890-5401 (print), DEN EIVKAX. ISSN 0013-5712. 1090-2651 (electronic). REFERENCES 15

Lange:2005:IIA Lange:1992:LRL

[LGZ05] Steffen Lange, Gunter Grieser, [LZ92a] S. Lange and T. Zeugmann. and Thomas Zeugmann. Induc- Learning recursive languages tive inference of approximations with bounded mind changes. for recursive concepts. Theoret- Technical Report 16/92, GOSLER- ical Computer Science, 348(1): Report, FB Mathematik und In- 15–40, 2005. CODEN TCSCDI. formatik, TH , Leipzig, ISSN 0304-3975 (print), 1879- Germany, 1992. 2294 (electronic). Lange:1992:PML Li:1997:ALT [LZ92b] S. Lange and T. Zeugmann. On [LM97] M. Li and A. Maruoka, edi- the power of monotonic lan- tors. Algorithmic Learning The- guage learning. Technical Re- ory, 8th International Workshop, port 5/92, GOSLER-Report, FB ALT’97, Sendai, Japan, volume Mathematik und Informatik, TH 1316 of Lecture Notes in Artifi- Leipzig, Leipzig, Germany, 1992. cial Intelligence. Springer-Verlag, Berlin, Heidelberg, New York, Lange:1992:TML Tokyo, October 1997. [LZ92c] S. Lange and T. Zeugmann. Lange:1996:LBE Types of monotonic language learning and their characteriza- [LWZ96a] S. Lange, R. Wiehagen, and tion. In Haussler [Hau92], pages T. Zeugmann. Learning by eras- 377–390. ing. In Arikawa and Sharma [AS96], pages 228–241. Lange:1992:UAM Lange:1996:LEa [LZ92d] S. Lange and T. Zeugmann. A unifying approach to monotonic [LWZ96b] S. Lange, R. Wiehagen, and language learning on informant. T. Zeugmann. Learning by eras- In Jantke [Jan92], pages 244–259. ing. Technical Report RIFIS-TR- CS-122, RIFIS, Kyushu Univer- Lange:1993:LLD sity 33, Kyushu, Japan, 1996. [LZ93a] S. Lange and T. Zeugmann. Lan- Lange:1996:LEb guage learning in dependence on [LWZ96c] Steffen Lange, Rolf Wiehagen, the space of hypotheses. In Pitt and Thomas Zeugmann. Learn- [Pit93], pages 127–136. ing by erasing. In Algorithmic Lange:1993:LRLb learning theory (Sydney, 1996), volume 1160 of Lecture Notes [LZ93b] S. Lange and T. Zeugmann. in Comput. Sci., pages 228– The learnability of recursive lan- 241. Springer-Verlag, Berlin, Ger- guages in dependence on the hy- many / Heidelberg, Germany / pothesis space. Technical Re- London, UK / etc., 1996. port 20/93, GOSLER-Report, REFERENCES 16

FB Mathematik und Informatik, Lange:1993:MVN TH Leipzig, Leipzig, Germany, 1993. [LZ93g] Steffen Lange and Thomas Zeug- mann. Monotonic versus non- Lange:1993:LRLa monotonic language learning. In [LZ93c] S. Lange and T. Zeugmann. Brewka et al. [BJS93], pages 254– Learning recursive languages 269. with a bounded number of mind Lange:1994:CLL changes. Internat. Journal of Foundations of Computer Sci- [LZ94a] S. Lange and T. Zeugmann. ence, 4:157–178, 1993. CODEN Characterization of language IFCSEN. ISSN 0129-0541 (print), learning from informant un- 1793-6373 (electronic). der various monotonicity con- straints. Journal of Experimen- Lange:1993:IOI tal and Artificial Intelligence,6: 73–94, 1994. CODEN JEAIEL. [LZ93d] S. Lange and T. Zeugmann. On ISSN 0952-813x (print), 1362- the impact of order indepen- 3079 (electronic). dence to the learnability of re- cursive languages. Technical Re- Lange:1994:SDR port ISIS-RR-93-17E, Research [LZ94b] S. Lange and T. Zeugmann. Report, FUJITSU Laboratories Set-driven and rearrangement- Ltd., Numazu, Numazu, Japan, independent learning of recursive 1993. languages. In Arikawa and Jan- Lange:1993:LLB tke [AJ94], pages 453–468. [LZ93e] Steffen Lange and Thomas Zeug- Lange:1995:MIL mann. Language learning with a bounded number of mind [LZ95a] S. Lange and T. Zeugmann. Mod- changes. In Enjalbert et al. eling incremental learning from [EFW93], pages 682–691. ISBN positive data. Technical Re- 0-387-56503-5 (U.S.). ISSN 0302- port RIFIS-TR-CS-117, RIFIS, 9743 (print), 1611-3349 (elec- Kyushu University 33, Kyushu, tronic). LCCN QA267.A1 L43 Japan, 1995. no.665. Lange:1995:RIL Lange:1993:LRL [LZ95b] S. Lange and T. Zeugmann. Re- [LZ93f] Steffen Lange and Thomas Zeug- fined incremental learning. In mann. Learning recursive lan- Yao [Yao95], pages 147–154. guages with bounded mind Lange:1995:TMDa changes. Internat. Journal of Foundations of Computer Sci- [LZ95c] S. Lange and T. Zeugmann. ence, 4(2):157–178, 1993. CO- Trading monotonicity demands DEN IFCSEN. ISSN 0129-0541 versus mind changes. In Vit´anyi (print), 1793-6373 (electronic). [Vit95], pages 125–139. REFERENCES 17

Lange:1995:TMDb FB Mathematik und Informatik, TH Leipzig, Leipzig, Germany, [LZ95d] Steffen Lange and Thomas Zeug- 1992. mann. Trading monotonicity de- mands versus efficiency. Bulletin Lange:1996:MDM of Informatics and Cybernetics, [LZK96] Steffen Lange, Thomas Zeug- 27(1):53–83, 1995. ISSN 0286- mann, and Shyam Kapur. Mono- 522X. tonic and dual monotonic lan- Lange:1995:TMD guage learning. Theoretical Computer Science, 155(2):365– [LZ95e] Steffen Lange and Thomas Zeug- 410, 1996. CODEN TCSCDI. mann. Trading monotonicity ISSN 0304-3975 (print), 1879- demands versus mind changes. 2294 (electronic). In Computational learning theory (Barcelona, 1995), volume 904 of Lange:2008:LIF Lecture Notes in Comput. Sci., pages 125–139. Springer-Verlag, [LZZ08] Steffen Lange, Thomas Zeug- Berlin, Germany / Heidelberg, mann, and Sandra Zilles. Learn- Germany / London, UK / etc., ing indexed families of recursive 1995. languages from positive data: a survey. Theoretical Computer Lange:1996:SDR Science, 397(1-3):194–232, 2008. [LZ96a] S. Lange and T. Zeugmann. CODEN TCSCDI. ISSN 0304- Set-driven and rearrangement- 3975 (print), 1879-2294 (elec- independent learning of recursive tronic). languages. Math. Systems The- Meinel:1999:STACS ory, 29(6):599–634, 1996. CO- DEN MASTBA. ISSN 0025-5661. [MT99] C. Meinel and S. Tison, edi- Lange:1996:ILP tors. STACS’99, 16th Interna- tional Symposium on Theoreti- [LZ96b] Steffen Lange and Thomas Zeug- cal Aspects of Computer Science, mann. Incremental learning from Trier, Germany, volume 1563 positive data. Journal of Com- of Lecture Notes in Computer puter and System Sciences,53 Science. Springer-Verlag, Berlin, (1):88–103, 1996. CODEN JC- Heidelberg, New York, Tokyo, SSBM. ISSN 0022-0000 (print), March 1999. 1090-2724 (electronic). Pitt:1993:PAA Lange:1992:CPM [Pit93] L. Pitt, editor. Proc. of the [LZK92] S. Lange, T. Zeugmann, and 6th Annual ACM Conference on S. Kapur. Class preserving Computational Learning Theory, monotonic and dual monotonic Santa Cruz, California, USA. language learning. Technical ACM Press, New York, NY Report 14/92, GOSLER-Report, 10036, USA, 1993. REFERENCES 18

Rovan:1990:MFC tern languages in linear average time. Technical Report DOI-TR- [Rov90] B. Rovan, editor. Mathemati- 140, Department of Informat- cal foundations of computer sci- ics, Kyushu University, Kyushu, ence, 1990: Bansk´a Bystrica, Japan, 1997. Czechoslovakia, August 27–31, 1990: proceedings, volume 452 Reischuk:1998:AAC of Lecture Notes in Computer Science. Springer-Verlag, Berlin, [RZ98a] R. Reischuk and T. Zeugmann. Heidelberg, New York, Tokyo, Analyzing the average-case be- 1990. ISBN 3-540-52953-5, 0-387- havior of conjunctive learning 52953-5. ISSN 0302-9743 (print), algorithms. Technical Report 1611-3349 (electronic). LCCN DOI-TR-153, Department of In- QA76.9.M35 M386 1990. Papers formatics, Kyushu University, selected for presentation at the Kyushu, Japan, 1998. 15th Symposium on Mathemati- cal Foundations of Computer Sci- Reischuk:1998:OP ence - MFCS ’90. [RZ98b] R. Reischuk and T. Zeugmann. Richter:1998:ALT Learning one-variable pattern [RSWZ98a] M. M. Richter, C. H. Smith, languages in linear average time. R. Wiehagen, and T. Zeugmann, In Bartlett and Mansour [BM98], editors. Algorithmic Learning pages 198–208. Theory, 9th International Con- Reischuk:1998:LOV ference, ALT’98, Otzenhausen, Germany, volume 1501 of Lecture [RZ98c] R¨udiger Reischuk and Thomas Notes in Artificial Intelligence. Zeugmann. Learning one-variable Springer-Verlag, Berlin, Heidel- pattern languages in linear av- berg, New York, Tokyo, October erage time. In Proceedings of 1998. the Eleventh Annual Conference Richter:1998:EI on Computational Learning The- ory (Madison, WI, 1998), pages [RSWZ98b] M. M. Richter, C. H. Smith, 198–208 (electronic). ACM Press, R. Wiehagen, and T. Zeugmann. New York, NY 10036, USA, 1998. Editors’ introduction. In Proceed- ings of the 9th International Con- Reischuk:1998:LOVb ference on Algorithmic Learning Theory, ALT’98, Otzenhausen, [RZ98d] R¨udiger Reischuk and Thomas Germany [RSWZ98a], pages 1– Zeugmann. Learning one-variable 10. pattern languages in linear aver- Reischuk:1997:LOV age time. S¯urikaisekikenky¯usho K¯oky¯uroku, (1041):95–102, 1998. [RZ97] R. Reischuk and T. Zeug- Algorithms and theory of com- mann. Learning one-variable pat- puting (Japanese) (Kyoto, 1998). REFERENCES 19

Rossmanith:1998:LVPa S¯urikaisekikenky¯usho K¯oky¯uroku, (1093):123–129, 1999. Models [RZ98e] P. Rossmanith and T. Zeugmann. of computation and algorithms Learning k-variable pattern lan- (Japanese) (Kyoto, 1999). guages efficiently stochastically finite on average from positive Reischuk:1999:CTA data. Technical Report DOI-TR- 145, Department of Informat- [RZ99c] R¨udiger Reischuk and Thomas ics, Kyushu University, Kyushu, Zeugmann. A complete and tight Japan, 1998. average-case analysis of learning monomials. In STACS 99 (Trier), Rossmanith:1998:PL volume 1563 of Lecture Notes in Comput. Sci., pages 414– [RZ98f] P. Rossmanith and T. Zeugmann. 423. Springer-Verlag, Berlin, Ger- Learning k-variable pattern lan- many / Heidelberg, Germany / guages efficiently stochastically London, UK / etc., 1999. finite on average from positive data. In Honavar and Slutzki Reischuk:2000:ACO [HS98], pages 13–24. [RZ00] R¨udiger Reischuk and Thomas Rossmanith:1998:LVPb Zeugmann. An average-case op- [RZ98g] Peter Rossmanith and Thomas timal one-variable pattern lan- Zeugmann. Learning k-variable guage learner. Journal of Com- pattern languages efficiently puter and System Sciences,60 stochastically finite on average (2):302–335, April 2000. CO- from positive data. In Gram- DEN JCSSBM. ISSN 0022- matical inference (Ames, IA, 0000 (print), 1090-2724 (elec- 1998), volume 1433 of Lecture tronic). URL http://www. Notes in Comput. Sci., pages 13– sciencedirect.com/science/ 24. Springer-Verlag, Berlin, Ger- article/pii/S0022000099916690. many / Heidelberg, Germany / Eleventh Annual Conference on London, UK / etc., 1998. Computational Learning Theory (Madison, WI, 1998). Reischuk:1999:AV Romisch:2016:MAM [RZ99a] R. Reischuk and T. Zeugmann. A complete and tight average- [RZ16] Werner R¨omisch and Thomas case analysis of learning monomi- Zeugmann. Mathematical Anal- als. In Meinel and Tison [MT99], ysis and the Mathematics of pages 414–423. Computation. Springer, Cham, Switzerland, 2016. ISBN 3- Reischuk:1999:AAC 319-42753-9 (hardcover), 3-319- [RZ99b] R¨udiger Reischuk and Thomas 42755-5 (e-book). xxiii + Zeugmann. Analyzing the 704 pp. URL http://link. average-case behavior of con- springer.com/10.1007/978-3- junctive learning algorithms. 319-42755-3. REFERENCES 20

Shawe-Taylor:1994:CLT Stephan:2002:LCA

[STA94] J. Shawe-Taylor and M. Anthony, [SZ02] Frank Stephan and Thomas editors. Computational Learn- Zeugmann. Learning classes of ing Theory: Eurocolt ’93, volume approximations to non-recursive New Series Number 53 of The functions. Theoretical Computer Institute of Mathematics and its Science, 288(2):309–341, 2002. Applications Conference Series. CODEN TCSCDI. ISSN 0304- Oxford University Press, Walton 3975 (print), 1879-2294 (elec- Street, Oxford OX2 6DP, UK, tronic). Algorithmic learning the- 1994. ISBN 0-19-853492-2. ory (Tokyo, 1999). Smith:1997:CPL Tabe:1995:TVIa [TZ95a] T. Tabe and T. Zeugmann. Two [SWZ97] C. H. Smith, R. Wiehagen, and variations of inductive inference T. Zeugmann. Classifying pred- of languages from positive data. icates and languages. Internat. Technical Report RIFIS-TR-CS- Journal of Foundations of Com- 105, RIFIS, Kyushu University puter Science, 8:15–41, 1997. CO- 33, Kyushu, Japan, 1995. DEN IFCSEN. ISSN 0129-0541 (print), 1793-6373 (electronic). Tabe:1995:TVIb [TZ95b] Takashi Tabe and Thomas Sakamoto:1997:LAD Zeugmann. Two variations [SZ97] Hiroshi Sakamoto and Thomas of inductive inference of lan- Zeugmann. The learning guages from positive data. ability of deterministic au- S¯urikaisekikenky¯usho K¯oky¯uroku, tomata with finite memory. (906):73–80, 1995. Algorithms S¯urikaisekikenky¯usho K¯oky¯uroku, and computational complexity (992):36–41, 1997. The theory theory (Japanese) (Kyoto, 1995). of computing and its applications Vitanyi:1995:CLT (Japanese) (Kyoto, 1997). [Vit95] P. Vit´anyi, editor. Computa- MR1788537 tional Learning Theory, 2nd Eu- ropean Conference, EuroColt’95, [SZ99] Frank Stephan and Thomas Barcelona, Spain, March 1995, Zeugmann. On the uniform learn- volume 904 of Lecture Notes in ability of approximations to non- Artificial Intelligence. Springer- recursive functions. In Algo- Verlag, Berlin, Heidelberg, New rithmic learning theory (Tokyo, York, Tokyo, March 1995. 1999), volume 1720 of Lecture Wiehagen:1993:CRP Notes in Comput. Sci., pages 276–290. Springer-Verlag, Berlin, [WSZ93] R. Wiehagen, C. H. Smith, Germany / Heidelberg, Ger- and T. Zeugmann. Classi- many / London, UK / etc., 1999. fying recursive predicates and REFERENCES 21

languages. Technical Report Wiehagen:1994:LC 21/93, GOSLER-Report, FB Mathematik und Informatik, TH [WZ94b] R. Wiehagen and T. Zeugmann. Leipzig, Leipzig, Germany, 1993. Learning and consistency. Tech- nical Report 28/94, GOSLER- Wiehagen:1994:CPL Report, FB Mathematik und In- formatik, TH Leipzig, Leipzig, [WSZ94] R. Wiehagen, C. H. Smith, Germany, 1994. and T. Zeugmann. Classifica- Wiehagen:1995:LC tion of predicates and languages. In Shawe-Taylor and Anthony [WZ95a] R. Wiehagen and T. Zeugmann. [STA94], pages 171–181. ISBN 0- Learning and consistency. In Jan- 19-853492-2. tke and Lange [JL95], pages 1–24. Wiehagen:1995:CRP Wiehagen:1995:TMI [WZ95b] R. Wiehagen and T. Zeugmann. [WSZ95] R. Wiehagen, C. H. Smith, and Too much information can be too T. Zeugmann. Classifying recur- much for efficient learning. In sive predicates and languages. In Jantke [Jan92], pages 72–86. Jantke and Lange [JL95], pages 177–192. Yao:1995:AJC Wiehagen:1992:TMI [Yao95] Xin Yao, editor. 8th Australian Joint Conference on Artificial In- [WZ92] Rolf Wiehagen and Thomas telligence. World Scientific Pub- Zeugmann. Too much informa- lishing Co., Singapore; Philadel- tion can be too much for learning phia, PA, USA; River Edge, NJ, efficiently. In Analogical and in- USA, 1995. ductive inference (Dagstuhl Cas- Zeugmann:1981:IIF tle, 1992), volume 642 of Lec- ture Notes in Artificial Intel- [Zeu81] T. Zeugmann. Inductive in- ligence, pages 72–86. Springer- ference of fast programs. In Verlag, Berlin, Germany / Hei- Proc. Workshop on Algorithms delberg, Germany / London, and Computing Theory. Techni- UK / etc., 1992. cal University of Poznan, Poznan, Poland, 1981. Wiehagen:1994:IDM Zeugmann:1982:FIF [WZ94a] R. Wiehagen and T. Zeugmann. [Zeu82] T. Zeugmann. On the fi- Ignoring data may be the only nite identification of fastest pro- way to learn efficiently. Journal grams. In Proc. Symposium of Experimental and Artificial In- on Mathematical Foundations of telligence, 6:131–144, 1994. CO- Computer Science. Humboldt- DEN JEAIEL. ISSN 0952-813x Universit¨at, Berlin, Germany, (print), 1362-3079 (electronic). 1982. Seminarbericht Nr. 52. REFERENCES 22

Zeugmann:1983:ASS and logical foundations of pro- gramming, volume 67 of Weiter- [Zeu83a] T. Zeugmann. Zur algorith- bildungszentrum Math. Kybernet. mischen Synthese von schnellen Rechentech. Informationsverar- Programmen. PhD thesis, beitung, pages 75–85. Tech. Univ. Humboldt-Universit¨at zu Berlin, , Dresden, 1984. Sektion Mathematik, Berlin, Germany, 1983. Zeugmann:1984:NTE

Zeugmann:1983:PCI [Zeu84b] Thomas Zeugmann. On the non- boundability of total effective op- [Zeu83b] Thomas Zeugmann. A-posteriori erators. Zeitschr. f. math. Logik characterizations in inductive in- und Grundlagen d. Math. Bd., ference of recursive functions. 30(2):169–172, 1984. CODEN Elektronische Informationsverar- ZMLGAQ. ISSN 0044-3050. beitung und Kybernetik, 19(10– 11):559–594, 1983. CODEN Zeugmann:1987:BC EIVKAX. ISSN 0013-5712. Zeugmann:1983:FIF [Zeu87] Thomas Zeugmann. On Barzdin’s conjecture. In Jantke [Jan87], [Zeu83c] Thomas Zeugmann. On the fi- pages 220–227. ISBN 0-387- nite identification of fastest pro- 18081-8 (paperback). ISSN 0302- grams. In Symposium on mathe- 9743 (print), 1611-3349 (elec- matical foundations of computer tronic). LCCN Q334 .I5771 1986. science (Diedrichshagen, 1982), volume 52 of Seminarberichte, Zeugmann:1988:PRO pages 151–159. Humboldt Univ. Berlin, 1983. [Zeu88] Thomas Zeugmann. On the power of recursive optimizers. Zeugmann:1983:SFP Theoretical Computer Science,62 (3):289–310, 1988. CODEN TC- [Zeu83d] Thomas Zeugmann. On the SCDI. ISSN 0304-3975 (print), synthesis of fastest programs in 1879-2294 (electronic). inductive inference. Elektro- nische Informationsverarbeitung Zeugmann:1989:RBC und Kybernetik, 19(12):625–642, 1983. CODEN EIVKAX. ISSN [Zeu89a] T. Zeugmann. Review of Baase, 0013-5712. Computer Algorithms: Introduc- Zeugmann:1984:ROV tion to Design and Analysis (2nd ed.) (1988). SIGACT News [Zeu84a] Th. Zeugmann. Recursive oper- (ACM Special Interest Group ators versus recursive functions on Automata and Computabil- with respect to the generation ity Theory), 20, 1989. CO- of classes of functions having a DEN SIGNDM. ISSN 0163-5700 fastest program. In Algebraic (print), 1943-5827 (electronic). REFERENCES 23

Zeugmann:1989:IPC logic (Karlsruhe, 1990), volume 543 of Lecture Notes in Comput. [Zeu89b] Thomas Zeugmann. Improved Sci., pages 208–222. Springer- parallel computations in the ring Verlag, Berlin, Germany / Hei- Z/pα. J. Inform. Process. Cy- delberg, Germany / London, bernet., 25(10):543–547, 1989. UK / etc., 1991. ISSN 0863-0593. Colloquium on Computation Theory (CCT ’88) Zeugmann:1992:RRS (Berlin, 1988). [Zeu92a] T. Zeugmann. Review of Rein- Zeugmann:1990:IIO gold & Shen, more nearly op- timal algorithms for unbounded [Zeu90a] T. Zeugmann. Inductive infer- searching (1991). SIGACT News ence of optimal programs: A sur- (ACM Special Interest Group vey and open problems. In Dix on Automata and Computabil- et al. [DJS90], pages 208–222. ity Theory), 23, 1992. CO- Zeugmann:1990:PA DEN SIGNDM. ISSN 0163-5700 (print), 1943-5827 (electronic). [Zeu90b] T. Zeugmann. Parallel algo- rithms. In Encyclopedia of Com- Zeugmann:1992:HPC puter Science and Technology, [Zeu92b] Thomas Zeugmann. Highly volume 21 (Supplement 6), pages parallel computations modulo a 223–244. Marcel Dekker, New number having only small prime York, NY, USA, 1990. factors. Inform. Comput.,96 Zeugmann:1990:CLP (1):95–114, 1992. CODEN IN- FCEC. ISSN 0890-5401 (print), [Zeu90c] Thomas Zeugmann. Computing 1090-2651 (electronic). large polynomial powers very fast in parallel. In Rovan [Rov90], Zeugmann:1993:ALF pages 538–544. ISBN 3-540- 52953-5, 0-387-52953-5. ISSN [Zeu93a] T. Zeugmann. Algorithmis- 0302-9743 (print), 1611-3349 ches Lernen von Funktionen und (electronic). LCCN QA76.9.M35 Sprachen. Technical Report M386 1990. Papers selected for Habilitationsschrift, Technische presentation at the 15th Sym- Hochschule Darmstadt, Fach- posium on Mathematical Foun- bereich Informatik, Darmstadt, dations of Computer Science - Germany, 1993. MFCS ’90. Zeugmann:1993:IAL Zeugmann:1991:IIO [Zeu93b] T. Zeugmann. Algorithmis- [Zeu91] Thomas Zeugmann. Inductive ches Lernen von Funktionen und inference of optimal programs: Sprachen. Technical Report a survey and open problems. Habilitationsschrift, Technische In Nonmonotonic and inductive Hochschule Darmstadt, Fach- REFERENCES 24

bereich Informatik, Darmstadt, (guest editor). Theoretical Com- Germany, 1993. puter Science, 185(1), 1997. CO- DEN TCSCDI. ISSN 0304-3975 Zeugmann:1994:AAP (print), 1879-2294 (electronic). [Zeu94a] T. Zeugmann. Average-case anal- Zeugmann:1998:LWP ysis of pattern language learning algorithms. In Arikawa and Jan- [Zeu98] Thomas Zeugmann. Lange and tke [AJ94], pages 8–9. Wiehagen’s pattern language learning algorithm: An average- Zeugmann:1994:ACA case analysis with respect to its total learning time. Annals of [Zeu94b] T. Zeugmann. Average-case anal- Mathematics and Artificial In- ysis of pattern language learn- telligence, 23(1-2):117–145, 1998. ing algorithms. In Proc. 4th In- CODEN AMAIEC. ISSN 1012- ternat. Workshop on Analogical 2443 (print), 1573-7470 (elec- and Inductive Inference, volume tronic). ALT 94 and AII 94 872 of Lecture Notes in Artificial (Reinhardsbrunn Castle). Intelligence, pages 8–9. Springer- Verlag, Berlin, Heidelberg, New Zeugmann:2003:CLL York, Tokyo, 1994. [Zeu03] Thomas Zeugmann. Can learn- Zeugmann:1994:RC ing in the limit be done effi- ciently? In Algorithmic learning [Zeu94c] T. Zeugmann. Report on COLT theory, volume 2842 of Lecture 1994. SIGACT News (ACM Spe- Notes in Comput. Sci., pages 17– cial Interest Group on Automata 38. Springer-Verlag, Berlin, Ger- and Computability Theory), 25, many / Heidelberg, Germany / 1994. CODEN SIGNDM. ISSN London, UK / etc., 2003. 0163-5700 (print), 1943-5827 (electronic). Zeugmann:2006:LLS

Zeugmann:1995:LWP [Zeu06a] Thomas Zeugmann. From learn- ing in the limit to stochastic finite [Zeu95] T. Zeugmann. Lange and Wieha- learning. Theoretical Computer gen’s pattern language learning Science, 364(1):77–97, 2006. CO- algorithm: An average-case anal- DEN TCSCDI. ISSN 0304-3975 ysis with respect to its total (print), 1879-2294 (electronic). learning time. Technical Re- port RIFIS-TR-CS-111, RIFIS, Zeugmann:2006:IIL Kyushu University 33, Kyushu, [Zeu06b] Thomas Zeugmann. Inductive in- Japan, 1995. ference and language learning. In Zeugmann:1997:SIA Theory and applications of mod- els of computation, volume 3959 [Zeu97] T. Zeugmann. Special issue of Lecture Notes in Comput. Sci., on algorithmic learning theory, pages 464–473. Springer-Verlag, REFERENCES 25

Berlin, Germany / Heidelberg, dual monotonic language learn- Germany / London, UK / etc., ing. Technical Report IRCS 92 2006. - 24, Institute for Research in Zeugmann:2016:ORM Cognitive Science, Univ. Penn- sylvania, Philadelphia, PA, USA, [Zeu16] Thomas Zeugmann. Obituary: 1992. R¯usi¸nˇsM¯arti¸nˇs Freivalds (1942– Zeugmann:1995:CMD 2016). Bull. Eur. Assoc. Theor. Comput. Sci. (EATCS), (118): [ZLK95] Thomas Zeugmann, Steffen 17–20, 2016. ISSN 0252-9742. Lange, and Shyam Kapur. Char- acterizations of monotonic and Zeugmann:2018:GEF dual monotonic language learn- [Zeu18] Thomas Zeugmann. Guest Ed- ing. Inform. Comput., 120(2): itor’s foreword [Preface: special 155–173, 1995. CODEN IN- issue on learning theory and com- FCEC. ISSN 0890-5401 (print), plexity]. Theoretical Computer 1090-2651 (electronic). Science, 733(??):1–3, ???? 2018. Zeugmann:2008:LRF CODEN TCSCDI. ISSN 0304- 3975 (print), 1879-2294 (elec- [ZZ08] Thomas Zeugmann and Sandra tronic). URL http://www. Zilles. Learning recursive func- sciencedirect.com/science/ tions: a survey. Theoretical article/pii/S0304397518302718. Computer Science, 397(1-3):4– 56, 2008. CODEN TCSCDI. Zeugmann:1994:GTA ISSN 0304-3975 (print), 1879- [ZL94] T. Zeugmann and S. Lange. A 2294 (electronic). guided tour across the bound- aries of learning recursive lan- guages. Technical Report 26/94, GOSLER-Report, FB Math- ematik und Informatik, TH Leipzig, Leipzig, Germany, 1994. Zeugmann:1995:GTA [ZL95] T. Zeugmann and S. Lange. A guided tour across the bound- aries of learning recursive lan- guages. In Jantke and Lange [JL95], pages 193–262. Zeugmann:1992:CCP [ZLK92] T. Zeugmann, S. Lange, and S. Kapur. Characterizations of class preserving monotonic and