History and Survey of AI August 31, 2012


CS671 Artificial General Intelligence

1. History and Survey of AI

Where did it go wrong and where did it go right?

2. Observation

AI has made steady progress since its inception about 60 years ago. This is contrary to some accounts in the popular press.

3. The "Gullibility Wave" Phenomenon

• First reaction to AI: Skepticism.
• Second reaction (perhaps having seen a robot demo or learned to program a computer): "Wow! Computers can do a lot." (Gullibility.) "Doing intelligent task X isn't so hard. When I think of doing it myself, it seems straightforward!" (Enthusiasm.)
• Third reaction: "Computers are impossibly literal-minded. They can never break out of this mold." (Disenchantment.)
• Fourth reaction: "The brain doesn't do computation anyway. It does holograms, or dynamical systems, or quantum gravity, or neural nets, or soulstuff." (Denial.)
• Fifth reaction: "There's really no alternative on the horizon to computation (which is what neural nets actually do). Let's get to work!" (Maturity.)

4. The Beginning of Computer Science Was the Beginning of AI

Alan Turing (1912–1954) invented both. Perhaps it was because he was weird, but even before the end of World War II he discussed machine intelligence with colleagues at the British Code and Cipher School ("Bletchley Park").

5. 1940s: A Gleam in Turing's Eye

In 1950 Alan Turing published "Computing Machinery and Intelligence" (Mind 59). He predicted that a machine would pass the Turing Test by 2000 (i.e., would fool a (naive?) judge 70% of the time into thinking it was human after a short interview).

Other important early British AI researchers: Christopher Strachey (1916–1975), I. J. Good (1916–2009).

6. 1950s: Americans Get Involved, Big-Time as Usual

Key researchers: Claude Shannon (1916–2001) (MIT), Donald Michie (1923–2007) (Edinburgh), Allen Newell (1927–1992) and Herbert Simon (1916–2001) (CMU), John McCarthy (1927–2011) (Stanford), Oliver Selfridge (1926–2008), Marvin Minsky (1927–).

Results: Search algorithms*. List processing*. Minimax with depth cutoff. Alpha-beta pruning. Simon predicts a machine will prove a new theorem by 1967.

*An asterisk marks a technique once considered outré, perhaps specifically for duplicating intelligence, now assimilated into routine CS practice.

7. 1960s: ARPA Golden Years

The huge expansion of science after World War II got huger after Sputnik (1957). Lots of money was showered on AI labs, to do whatever they wanted.

Researchers: Alan Robinson (1928–), Terry Winograd (1946–), Nils Nilsson (1933–) (Stanford).

Results: Recursive symbolic programming*. Garbage collection*. Natural-language processing (parsing, some domain-specific "understanding"). Unification* and resolution algorithms for theorem proving.

8. 1970s: Things Begin to Settle Down

Lots of research money around. Labs churn out programs, often hard to evaluate.

Researchers: Erik Sandewall (1945–), Robert Kowalski (1941–), Alain Colmerauer (1941–), Tomás Lozano-Pérez.

Results: Logic programming*. Logic-based knowledge representation. Robot motion planning*. Constraint propagation.

9. 1980s: Commercialization Fails

Several AI-based companies were founded; none succeeded. Symantec is still around, but selling software tools.

Researchers: David Rumelhart (1942–2011) (UCSD), Geoffrey Hinton (1947–), Ray Reiter (1939–2002) (Toronto), Eric Horvitz (Microsoft), Frederick Jelinek (1932–2010) (IBM), Lawrence Rabiner (1943–) (AT&T).

Results: Neural networks. Hidden Markov models. Statistical methods for medical diagnosis, speech recognition, translation, etc. Non-monotonic reasoning.

Major textbook: Charniak & McDermott.

10. 1990s: Moore's Law Kicks In

Exponential growth in processor power begins to have a qualitative effect on what's considered a reasonable amount of computation.

Researchers: Feng-hsiung Hsu (CMU), Bart Selman (Cornell), Eugene Charniak (Brown), Ernst Dickmanns (U. of Munich), Sebastian Thrun (Stanford).

Results: Champion chess-playing programs. SAT solvers. Local search techniques*. Markov-chain Monte Carlo search. Probabilistic phrase-structure grammars. Robot cars.

11. 2000s and 2010s: AI Is Everywhere

AI is being commercialized again, but industry is still reluctant to use the name even 30 years after the debacle of the 1980s. And the same stupid overhyping is occurring anyway (cf. iPhone commercials with celebrities pretending to hold actual conversations with Siri).

12. Taking Stock

Milestones:
• Deep Blue defeats the world chess champion* (1997).
• Proverb solves New York Times (non-Sunday) crossword puzzles (1999).
• DARPA Urban Challenge: robots from CMU and Stanford maneuver cars through an "urban" environment (2007).
• A spin-off from the DARPA CALO ("Cognitive Assistant that Learns and Organizes") project becomes Apple's Siri voice interface (2010).
• Watson defeats Jeopardy! champions (2011).
• Robots are replacing many skilled assembly-line workers [New York Times] (2012).

*Steroid note: programmers modified Deep Blue between games.

13. ... And Last but Not Least

Korean scientists develop a way to charge lithium-ion batteries 100 times faster than before: http://onlinelibrary.wiley.com/doi/10.1002/anie.201203581/abstract. Just increase the power capacity by a factor of 10 and world domination is within the robots' grasp!

14. So Why Can't a Program Understand Puzzle Rules?

15. So Why No "Human-Level" Intelligence?

AI assets:
• Logic programming
• Constraint satisfaction
• SAT solvers
• Specialized vision algorithms (e.g., road following, 3D reconstruction)
• Part-of-speech tagging
• Probabilistic context-free parsers
• Model checking
• Speech recognition
• Robot control (cf. the RoboCup robot-soccer competition)
• Humanoid robots
• Logistic regression
• Neural nets; support-vector machines

16. Possible Ways to Proceed

1. Continue as before; perhaps we just need more task-specific modules.
2. Build a cognitive architecture and plug in a lot of things (but what, if not task-specific modules?).
3. Hit the weak spots: analogy, "usually shallow" KR, the NLP–KR interface.
4. Rely on a super-duper learning algorithm.

17. Which Are Being Worked On?

1. Continue as before: lots of people.
2. Build a cognitive architecture and plug in a lot of things: SOAR (Rosenbloom, Newell, and Laird), OpenCog (Goertzel et al.).

18. Which Are Being Worked On? (cont.)

3. Hit the weak spots: analogy, "usually shallow" KR, the NLP–KR interface, the KR–neural-net interface: the structure-mapping engine (Gentner and Forbus); almost no one else.
4. Rely on a super-duper learning algorithm: Marcus Hutter (AIXI), Bengio and LeCun, Hinton?
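Two of the 1950s results above, minimax with depth cutoff and alpha-beta pruning, combine naturally in one routine. The sketch below is illustrative rather than any particular historical program; `children` and `evaluate` are hypothetical stand-ins for a real game's move generator and static evaluation function.

```python
# Minimax with a depth cutoff and alpha-beta pruning, for an abstract
# two-player zero-sum game. `children(state)` returns successor states;
# `evaluate(state)` is a static score from the maximizing player's view.

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Return the minimax value of `state`, searching `depth` plies."""
    kids = children(state)
    if depth == 0 or not kids:      # depth cutoff or terminal position
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:       # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:       # alpha cutoff
                break
        return value
```

On a well-ordered game tree the cutoffs let the search skip lines the opponent would never allow, which is what made deep chess search affordable.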
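Likewise, the SAT solvers and local search techniques credited to the 1990s meet in WalkSAT-style solvers. A minimal sketch, assuming DIMACS-style clauses (lists of nonzero ints, negative for negated literals); the parameters are illustrative and it makes no claim to competitive performance.

```python
# WalkSAT-style local search for SAT: repeatedly pick an unsatisfied
# clause and flip one of its variables, mixing random walk with greedy moves.
import random

def walksat(clauses, n_vars, max_flips=10000, noise=0.5, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                      # model found
        clause = rng.choice(unsat)
        if rng.random() < noise:
            var = abs(rng.choice(clause))      # random-walk move
        else:
            # greedy move: flip the variable leaving fewest unsat clauses
            def broken(v):
                assign[v] = not assign[v]
                count = sum(1 for c in clauses
                            if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return count
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
    return None                                # gave up (incomplete method)
```

Local search of this kind is incomplete (it cannot prove unsatisfiability) but was a striking 1990s discovery: it solves many large satisfiable instances that systematic search of the day could not.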
Recommended publications
  • John McCarthy
    JOHN MCCARTHY: the uncommon logician of common sense

    Excerpt from Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists by Dennis Shasha and Cathy Lazere, Copernicus Press, August 23, 2004.

    "If you want the computer to have general intelligence, the outer structure has to be common sense knowledge and reasoning." – John McCarthy

    When a five-year-old receives a plastic toy car, she soon pushes it and beeps the horn. She realizes that she shouldn't roll it on the dining room table or bounce it on the floor or land it on her little brother's head. When she returns from school, she expects to find her car in more or less the same place she last put it, because she put it outside her baby brother's reach. The reasoning is so simple that any five-year-old child can understand it, yet most computers can't.

    Part of the computer's problem has to do with its lack of knowledge about day-to-day social conventions that the five-year-old has learned from her parents, such as don't scratch the furniture and don't injure little brothers. Another part of the problem has to do with a computer's inability to reason as we do daily, a type of reasoning that's foreign to conventional logic and therefore to the thinking of the average computer programmer.

    Conventional logic uses a form of reasoning known as deduction. Deduction permits us to conclude, from statements such as "All unemployed actors are waiters" and "Sebastian is an unemployed actor," the new statement that "Sebastian is a waiter." The main virtue of deduction is that it is "sound": if the premises hold, then so will the conclusions.
  • The Computer Scientist As Toolsmith—Studies in Interactive Computer Graphics
    Frederick P. Brooks, Jr. Fred Brooks is the first recipient of the ACM Allen Newell Award—an honor to be presented annually to an individual whose career contributions have bridged computer science and other disciplines. Brooks was honored for a breadth of career contributions within computer science and engineering and his interdisciplinary contributions to visualization methods for biochemistry. Here, we present his acceptance lecture delivered at SIGGRAPH 94.

    The Computer Scientist as Toolsmith II

    It is a special honor to receive an award named for Allen Newell. Allen was one of the fathers of computer science. He was especially important as a visionary and a leader in developing artificial intelligence (AI) as a subdiscipline, and in enunciating a vision for it.

    What a man is is more important than what he does professionally, however, and it is Allen's humble, honorable, and self-giving character that makes it a double honor to be a Newell awardee. I am profoundly grateful to the awards committee.

    Rather than talking about one particular research [...] computer science. Another view of computer science sees it as a discipline focused on problem-solving systems, and in this view computer graphics is very near the center of the discipline.

    A Discipline Misnamed

    When our discipline was newborn, there was the usual perplexity as to its proper name. We at Chapel Hill, following, I believe, Allen Newell and Herb Simon, settled on "computer science" as our department's name. Now, with the benefit of three decades' hindsight, I believe that to have been a mistake. If we understand why, we will better understand our craft.
  • The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years
    AI Magazine Volume 27 Number 4 (2006) (© AAAI), Reports.

    The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years
    James Moor

    ■ The Dartmouth College Artificial Intelligence Conference: The Next 50 Years (AI@50) took place July 13–15, 2006. The conference had three objectives: to celebrate the Dartmouth Summer Research Project, which occurred in 1956; to assess how far AI has progressed; and to [...]

    [...] Marvin Minsky, Claude Shannon, and Nathaniel Rochester for the 1956 [...] The event, McCarthy wanted, as he explained at AI@50, "to nail the flag to the mast." McCarthy is credited for coining the phrase "artificial intelligence" [...]

    [...] continued this earlier work because he became convinced that advances could be made with other approaches using computers. Minsky expressed the concern that too many in AI today try to do what is popular and publish only successes. He argued that AI can never be a science until it publishes what fails as well as what succeeds. Oliver Selfridge highlighted the importance of many related areas of research before and after the 1956 summer project that helped to propel AI as a field. The development of improved languages and machines was essential. He offered tribute to many early pioneering activities such as J. C. R. Licklider developing time-sharing, Nat Rochester designing IBM computers, and Frank Rosenblatt working with perceptrons. Trenchard More was sent to the summer project for two separate weeks by the University of Rochester. Some of the best notes describing the AI project were taken by More, although ironically he admitted that he never liked the use of "artificial" or "intelligence" as terms for the field. Ray Solomonoff said he went to the summer project hoping to convince everyone of the importance of machine learning.
  • Pioneers of Computing
    Pioneers of Computing

    In 1980 the IEEE Computer Society established the "Computer Pioneer" Gold (bronze) Medal. The founding pioneers were 32 members of the IEEE Computer Society associated with work in informatics and the computing sciences.

    1. Howard H. Aiken (Harvard Mark I)
    2. John V. Atanasoff
    3. Charles Babbage (Analytical Engine)
    4. John Backus
    5. Gordon Bell (Digital)
    6. Vannevar Bush
    7. Edsger W. Dijkstra
    8. John Presper Eckert
    9. Douglas C. Engelbart
    10. Andrei P. Ershov (theoretical programming)
    11. Tommy Flowers (Colossus engineer)
    12. Robert W. Floyd
    13. Kurt Gödel
    14. William R. Hewlett
    15. Herman Hollerith
    16. Grace M. Hopper
    17. Tom Kilburn (Manchester)
    18. Donald E. Knuth (TeX)
    19. Sergei A. Lebedev
    20. Augusta Ada Lovelace
    21. Aleksey A. Lyapunov
    22. Benoit Mandelbrot
    23. John W. Mauchly
    24. David Packard
    25. Blaise Pascal
    26. P. Georg and Edvard Scheutz (Difference Engine, Sweden)
    27. C. E. Shannon (information theory)
    28. George R. Stibitz
    29. Alan M. Turing (Colossus and code-breaking)
    30. John von Neumann
    31. Maurice V. Wilkes (EDSAC)
    32. J. H. Wilkinson (numerical analysis)
    33. Freddie C. Williams
    34. Niklaus Wirth
    35. Stephen Wolfram (Mathematica)
    36. Konrad Zuse

    Pioneers of Computing – 2

    Howard H. Aiken (Harvard Mark I) – USA. Creator of the first computer (1943). Gene M. Amdahl (IBM 360 computer architecture, including pipelining, instruction look-ahead, and cache memory) – USA (1964). The ideology of mainframes: a system for mass data processing. John W. Backus (Fortran) – the first high-level language (1956).

    Pioneers of Computing – 3

    Robert S. Barton. For his outstanding contributions in basing the design of computing systems on the hierarchical nature of programs and their data.
  • A Challenge Set for Advancing Language Modeling
    A Challenge Set for Advancing Language Modeling
    Geoffrey Zweig and Chris J.C. Burges
    Microsoft Research, Redmond, WA 98052

    Abstract: In this paper, we describe a new, publicly available corpus intended to stimulate research into language modeling techniques which are sensitive to overall sentence coherence. The task uses the Scholastic Aptitude Test's sentence completion format. The test set consists of 1040 sentences, each of which is missing a content word. The goal is to select the correct replacement from amongst five alternates. In general, all of the options are syntactically valid, and reasonable with respect to local N-gram statistics. The set was generated by using an N-gram language model to generate a long list of likely words, given the immediate context. These options were then hand-groomed, to identify four decoys which are globally incoherent, yet syntactically correct.

    [...] counts (Kneser and Ney, 1995; Chen and Goodman, 1999), multi-layer perceptrons (Schwenk and Gauvain, 2002; Schwenk, 2007) and maximum-entropy models (Rosenfeld, 1997; Chen, 2009a; Chen, 2009b). Trained on large amounts of data, these methods have proven very effective in both speech recognition and machine translation applications. Concurrent with the refinement of N-gram modeling techniques, there has been an important stream of research focused on the incorporation of syntactic and semantic information (Chelba and Jelinek, 1998; Chelba and Jelinek, 2000; Rosenfeld et al., 2001; Yamada and Knight, 2001; Khudanpur and Wu, 2000; Wu and Khudanpur, 1999). Since intuitively, language is about expressing meaning in a highly structured syntactic form, it has come as [...]
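The abstract describes candidates that fit local N-gram statistics. A toy illustration of scoring a blank's candidates with a smoothed bigram model; the corpus, smoothing constant, and candidate words below are invented for illustration, not taken from the actual challenge set.

```python
# Score each candidate for the blank in "<prev> ___ <next>" by
# P(candidate | prev) * P(next | candidate) under an add-alpha-smoothed
# bigram model estimated from a tiny toy corpus.
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(prev, word, vocab_size, alpha=1.0):
    # add-alpha smoothing so unseen pairs keep nonzero probability
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

def best_completion(prev_word, next_word, candidates):
    v = len(unigrams)
    return max(candidates,
               key=lambda w: bigram_prob(prev_word, w, v)
                             * bigram_prob(w, next_word, v))
```

As the paper notes, exactly this kind of local scoring is what the decoys are built to defeat: all five options look plausible to an N-gram model, so sentence-level coherence is needed to pick the right one.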
  • ACL Lifetime Achievement Award
    ACL Lifetime Achievement Award
    The Dawn of Statistical ASR and MT
    Frederick Jelinek, Johns Hopkins University

    I am very grateful for the award you have bestowed on me. To understand your generosity I have to assume that you are honoring the leadership of three innovative groups that I headed in the last 47 years: at Cornell, IBM, and now at Johns Hopkins. You know my co-workers in the last two teams. The Cornell group was in Information Theory and included Toby Berger, Terrence Fine, and Neil J. A. Sloane (earlier my Ph.D. student), all of whom earned their own laurels. I was told that I should give an acceptance speech and was furnished with example texts by previous recipients. They wrote about the development and impact of their ideas. So I will tell you about my beginnings and motivations and then focus on the contributions of my IBM team. In this way the text will have some historical value and may clear up certain widely held misconceptions.

    1. Beginnings

    Information Theory seemed to be one of the most prestigious disciplines during my years as a student at MIT (1954–1962). The faculty included the founders of the field: Shannon, Fano, Elias, and others. Some of my contemporaries were Viterbi, Jacobs, Kleinrock (founders of Qualcomm), Gallager, Kailath, and Massey. Not daring to approach Shannon himself, I asked Professor Fano to be my thesis adviser. I was making slow progress when in 1961, after three years of trying, I succeeded in extricating my future wife Milena from communist Czechoslovakia (how this was accomplished is another story) and married her.
  • DARPA Principal Investigators Meeting, Doubletree Inn, Monterey, California, April 10–13, 1983
    Attendees:

    Dr. Robert Balzer, USC-Information Sciences Inst.
    Prof. Robert W. Brodersen, Univ. of Calif.-Berkeley
    Dr. Bruce Buchanan, Stanford University
    Dr. John H. Cafarella, MIT-Lincoln Lab
    Prof. Thomas Cheatham, Harvard University, Aiken Computation Lab
    Dr. Danny Cohen, USC-Information Sciences Inst.
    Prof. Michael Dertouzos, MIT-Computer Science Lab
    Prof. Edward A. Feigenbaum, Stanford University
    Prof. Jerome A. Feldman, University of Rochester
    Dr. Michael Frankel, SRI International
    Prof. Robert Gallager, MIT-EE and Computer Science
    Dr. Cordell Green, Kestrel Institute
    Dr. Anthony Hearn, Rand Corporation
    Prof. John L. Hennessy, Stanford University
    Dr. Estil V. Hoversten, Linkabit Corporation
    Prof. Leonard Kleinrock, Univ. of Calif.-Los Angeles
    Prof. Victor R. Lesser, University of Massachusetts
    Dr. Donald Nielson, SRI International
    Prof. John McCarthy, Stanford University
    Mr. Alan McLaughlin, MIT-Lincoln Lab
    Prof. James Meindl, Stanford University
    Prof. Allen Newell, Carnegie Mellon University
    Dr. Nils J. Nilsson, SRI International
    Prof. Alan Oppenheim, MIT-EE and Computer Science
    Prof. Paul Penfield, Jr., Massachusetts Inst. of Technology
    Dr. Gerald Popek, Univ. of Calif.-Los Angeles
    Dr. Raj D. Reddy, Carnegie Mellon University
    Dr. Daniel R. Ries, Computer Corp. of America
    Prof. Edward R. Riseman, Univ. of Massachusetts
    Prof. Roger C. Schank, Yale University
    Prof. Chuck Seitz, Calif. Inst. of Technology
    Prof. Carlo H. Séquin, Univ. of Calif.-Berkeley
    Prof. Joseph F. Traub, Columbia University
    Mr. Keith W. Uncapher, USC-Information Sciences Inst.
    Mr. Albert Vezza, MIT-Laboratory for Computer Science
    Mr. David C. Walden, Bolt, Beranek and Newman
    Prof. Patrick Winston, MIT-Artificial Intelligence Lab
  • Introduction
    INTRODUCTION

    In which we try to explain why we consider artificial intelligence to be a subject most worthy of study, and in which we try to decide what exactly it is, this being a good thing to decide before embarking.

    INTELLIGENCE

    We call ourselves Homo sapiens—man the wise—because our intelligence is so important to us. For thousands of years, we have tried to understand how we think; that is, how a mere handful of matter can perceive, understand, predict, and manipulate a world far larger and more complicated than itself. The field of artificial intelligence, or AI, goes further still: it attempts not just to understand but also to build intelligent entities.

    AI is one of the newest fields in science and engineering. Work started in earnest soon after World War II, and the name itself was coined in 1956. Along with molecular biology, AI is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest. AI, on the other hand, still has openings for several full-time Einsteins and Edisons.

    AI currently encompasses a huge variety of subfields, ranging from the general (learning and perception) to the specific, such as playing chess, proving mathematical theorems, writing poetry, driving a car on a crowded street, and diagnosing diseases. AI is relevant to any intellectual task; it is truly a universal field.

    1.1 WHAT IS AI?

    We have claimed that AI is exciting, but we have not said what it is.
  • Turing Award Elites Revisited: Patterns of Productivity, Collaboration, Authorship and Impact (arXiv:2106.11534)
    Turing Award elites revisited: patterns of productivity, collaboration, authorship and impact
    Yinyu Jin, Sha Yuan, Zhou Shao, Wendy Hall, Jie Tang

    Abstract: The Turing Award is recognized as the most influential and prestigious award in the field of computer science (CS). With the rise of the science of science (SciSci), a large amount of bibliographic data has been analyzed in an attempt to understand the hidden mechanism of scientific evolution. These include the analysis of the Nobel Prize, including physics, chemistry, medicine, etc. In this article, we extract and analyze the data of 72 Turing Award laureates from the complete bibliographic data, fill the gap in the lack of Turing Award analysis, and discover the development characteristics of computer science as an independent discipline. First, we show most Turing Award laureates have long-term and high-quality educational backgrounds, and more than 61% of them have a degree in mathematics, which indicates that mathematics has played a significant role in the development of computer science. Secondly, the data shows that not all scholars have high productivity and high h-index; that is, the number of publications and h-index is not the leading indicator for evaluating the Turing Award. Third, the average age of awardees has increased from 40 to around 70 in recent years. This may be because new breakthroughs take longer, and some new technologies need time to prove their influence. Besides, we have also found that in the past ten years, international collaboration has experienced explosive growth, showing a new paradigm in the form of collaboration.
  • Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies
    Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies
    John G. McMahon and Francis J. Smith
    The Queen's University of Belfast

    An automatic word-classification system has been designed that uses word unigram and bigram frequency statistics to implement a binary top-down form of word clustering and employs an average class mutual information metric. Words are represented as structural tags: n-bit numbers the most significant bit-patterns of which incorporate class information. The classification system has revealed some of the lexical structure of English, as well as some phonemic and semantic structure. The system has been compared, directly and indirectly, with other recent word-classification systems. We see our classification as a means towards the end of constructing multilevel class-based interpolated language models. We have built some of these models and carried out experiments that show a 7% drop in test set perplexity compared to a standard interpolated trigram language model.

    1. Introduction

    Many applications that process natural language can be enhanced by incorporating information about the probabilities of word strings; that is, by using statistical language model information (Church et al. 1991; Church and Mercer 1993; Gale, Church, and Yarowsky 1992; Liddy and Paik 1992). For example, speech recognition systems often require some model of the prior likelihood of a given utterance (Jelinek 1976). For convenience, the quality of these components can be measured by test set perplexity, PP (Bahl, Jelinek, and Mercer 1983; Bahl et al. 1989; Jelinek, Mercer, and Roukos 1990), in spite of some limitations (Ueberla 1994): PP = P̂(w_1^N)^(-1/N), where there are N words in the word stream w_1^N and P̂ is some estimate of the probability of that word stream.
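The perplexity definition quoted in this excerpt reduces to a few lines of code; the sketch below assumes the model supplies a conditional probability for each word of the test stream (the chain-rule factors of P̂(w_1^N)).

```python
# Test-set perplexity, PP = P(w_1..w_N)^(-1/N), computed in log space
# to avoid underflow on long word streams.
import math

def perplexity(probs):
    """probs: per-word model probabilities P(w_i | history) over the test stream."""
    n = len(probs)
    log_prob = sum(math.log(p) for p in probs)   # log P(w_1..w_N)
    return math.exp(-log_prob / n)               # P(w_1..w_N)^(-1/N)
```

For a uniform model over a vocabulary of size V, every per-word probability is 1/V and the perplexity is exactly V, which is why perplexity is often read as an effective branching factor.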
  • The Problems with Problem Solving: Reflections on the Rise, Current Status, and Possible Future of a Cognitive Research Paradigm
    The Problems with Problem Solving: Reflections on the Rise, Current Status, and Possible Future of a Cognitive Research Paradigm
    Stellan Ohlsson

    Abstract: The research paradigm invented by Allen Newell and Herbert A. Simon in the late 1950s dominated the study of problem solving for more than three decades. But in the early 1990s, problem solving ceased to drive research on complex cognition. As part of this decline, Newell and Simon's most innovative research practices, especially their method for inducing subjects' strategies from verbal protocols, were abandoned. In this essay, I summarize Newell and Simon's theoretical and methodological innovations and explain why their strategy identification method did not become a standard research tool. I argue that the method lacked a systematic way to aggregate data, and that Newell and Simon's search for general problem solving strategies failed. Paradoxically, the theoretical vision that led them to search elsewhere for general principles led researchers away from studies of complex problem solving. Newell and Simon's main enduring contribution is the theory that people solve problems via heuristic search through a problem space. This theory remains the centerpiece of our understanding of how people solve unfamiliar problems, but it is seriously incomplete. In the early 1970s, Newell and Simon suggested that the field should focus on the question where problem spaces and search strategies come from. I propose a breakdown of this overarching question into five specific research questions. Principled answers to those questions would expand the theory of heuristic search into a more complete theory of human problem solving.
  • NOTE Our Lucky Moments with Frederick Jelinek
    The Prague Bulletin of Mathematical Linguistics, Number 88, December 2007, 91–92.

    NOTE: Our Lucky Moments with Frederick Jelinek
    Barbora Vidová Hladká

    This contribution is a congratulation on Frederick Jelinek's birthday jubilee. Before I reach the congratulation itself, I would like to recall a lucky moment that had a strong influence on the life of a certain institute of Charles University in Prague after 1989. And it is no accident that the honored person witnessed the above-mentioned moment and its consequences. From my personal point of view, I have become one of the "victims" of this lucky moment, so I really appreciate the opportunity to wish Fred well via the Prague Bulletin, which circulates among institutions all over the world.

    The crucial events of November 1989 in the Czech Republic brought freedom to a lot of people. Freedom for the scientists in the computational linguistics group at the Faculty of Mathematics and Physics, Charles University, changed (among other things) their subdepartment into an independent department of the faculty in 1990, namely the Institute of Formal and Applied Linguistics (ÚFAL), headed by Eva Hajičová. Freedom for Fred Jelinek made it possible for him (among other things) to give a two-term course on spoken and written language analysis at the Czech Technical University in Prague in 1991–1992. At that time, Fred was a senior manager at the IBM T.J. Watson Research Center, Yorktown Heights, NY, and he was heading a group carrying out research on continuous speech recognition, machine translation, and text parsing and understanding.