An interview with C. W. “BILL” GEAR

Conducted by Thomas Haigh on September 17-19, 2005, in Princeton, New Jersey

Interview conducted by the Society for Industrial and Applied Mathematics, as part of grant # DE-FG02-01ER25547 awarded by the US Department of Energy. Transcript and original tapes donated to the Computer History Museum by the Society for Industrial and Applied Mathematics.

© Computer History Museum, Mountain View, California

ABSTRACT

Numerical analyst Bill Gear discusses his entire career to date. Born in London in 1935, Gear studied mathematics at Peterhouse, Cambridge before winning a fellowship for graduate study in America. He received a Ph.D. in mathematics from the University of Illinois, Urbana-Champaign in 1960, under the direction of Abraham Taub. During this time Gear worked in the Digital Computer Laboratory, collaborating with Don Gillies on the design of ILLIAC II. He discusses in detail the operation of ILLIAC, its applications, and the work of the lab (including his relationships with fellow graduate student Gene Golub and visitor William Kahan). On graduation, Gear returned to England for two years, where he worked at IBM’s laboratory in Hursley on computer architecture projects, serving as a representative to the SPREAD committee charged with devising what became the System/360 architecture. Gear then returned to Urbana-Champaign, where he remained until 1990, serving as a professor of computer science and applied mathematics and, from 1985 onward, as head of the computer science department. He supervised twenty-two Ph.D. students, including Linda Petzold. Gear is best known for his work on the solution of stiff ordinary differential equations, and he discusses the origins of his interest in this area, the creation of his DIFSUB routine, and the impact of his book “Numerical Initial Value Problems in Ordinary Differential Equations” (1971).
He also explores his work in other areas, including seminal contributions to the analysis of differential algebraic equations and work in the 1970s on an ambitious system for generalized network simulation. Gear wrote several successful textbooks, was active in ACM’s SIGNUM and SIGGRAPH, served as SIAM President and as a member of the ACM Council, and served on a number of blue-ribbon panels and scholarly societies. In 1990 Gear left Illinois to become one of the founding vice presidents of NEC’s new Research Institute in Princeton, New Jersey. From 1992 until his retirement in 2000 he was president of the institute, and he discusses his work there, the institute’s place within NEC, and its accomplishments and problems.

HAIGH: Thank you very much for agreeing to take part in the interview. It’s always nice to come to Princeton. I wonder if I could begin by asking you, in general terms, about your family background.

GEAR: Well, I was born in 1935, in the period before the war, in a suburb of London. My father was very much working-class. I think by that time he may have been a foreman of the operation he was running. He had left school I think at fourteen, which was typical of his class in those days. His grandfather had left school I believe at twelve or ten (I can’t remember) and gone to work as a laborer in a factory. I think there are two important things about my father. One was I think he had a tremendous moral-ethical sense about certain things, which he pushed on me, about the need to treat people properly. The other thing that he was always hammering into me was how I had to get an education—he said I had to get it because he didn’t want me to work like he had to work through his life. That was odd because, when I finally became a faculty member at Illinois and my father came over and visited, he couldn’t understand why I seemed to work all the time—he thought the reason to get an education was that someone then took it easy.
Of course that’s the great thing about working at the university: if you’ve got a job you really like, it’s not work at all. It’s the thing you probably enjoy most of all. Now back to my early background. We lived in a suburb of London built in the early 1930s to house the growing mass of people moving into the London area. I went to the local primary school in that area. Of course, in those days England was unbelievably homogeneous: virtually all white, virtually all English. Immigration hadn’t started yet. It was in the period leading up to the war. In fact, I had only been in local primary school for two weeks when war was declared. I was sent to my grandparents for the first of two trips and spent a year there.

HAIGH: And where did they live?

GEAR: About 40 miles north of London. Of course in those days, that was a long distance. It was relatively safe in the country from bombing. Then they sent me there again when the V2s were coming in, late 1944, I guess. Other than that I went to the local school and behaved like any typical working-class kid: playing most of the time, avoiding most schoolwork. In those days, I loved mathematics (of course it was called “arithmetic” in those days). I remember sitting in classes doing things like doubling a number as often as I could to see how big it would get. I can remember when I was five I decided to see how high one could count, so every night I would start counting higher and higher, and then I would ask my mother to remember the number so that I could continue from there the next night. I guess I didn’t believe yet in infinity. I’m sure she just gave me an arbitrary number each night [laughs]. That’s better than counting sheep to get to sleep. So it was an uneventful childhood. My father was working class and I suspect money was somewhat limited. We weren’t poor. Even though it was wartime and there was rationing, there was sufficient food for everybody. It may not have been a great diet.
So I didn’t see it as a period of privation, although I remember the night when ice cream became available when I was ten years old. And when candy went off ration, I had terrible teeth for a couple of weeks from eating too much candy.

HAIGH: Other than things you’ve already mentioned, are you aware of any particular influences that this background had on the way you approached your later career?

GEAR: Well, I think that in the primary school, we probably had one very good teacher in my last year. Let me say a little bit about the educational system in England then, because my father was pushing me very hard in getting an education. He was really worried because at that time, the career path for a normal person of my background was to go to primary school and then to secondary school at age eleven-and-a-bit, which basically prepared you for a blue-collar job. There were a few scholarship positions open at the local grammar schools, which, although they were state-funded, offered mostly fee-paying positions that were beyond the means of people like my parents. So my father was very concerned that I should try to get a scholarship there, and I think he was pretty pessimistic that I’d do it because I was only interested in mathematics and terrible in other subjects. I remember he used to drill me on general knowledge questions in the evenings to try and get me to learn some stuff. I was very fortunate because in 1945, after the war, the Labor government won and Churchill and the Tories were booted out. I was too young then—I was ten, so I don’t know what was behind all of that. But the Labor government revolutionized many things in the educational system. They made all the positions scholarship positions; in other words, they got rid of the fees altogether and made them all the subject of what became the “eleven plus” exam. I was part of the first group to take that. So there were many more positions.
As I recall, there were three parts to the exam that I took: one was mathematics, which was almost certainly arithmetic, which I’m sure I probably aced because I really liked that; one was some sort of English; and the third one was labeled as some sort of general knowledge. My father was desperate about that one, because he drilled me on these facts of English history and all sorts of stuff every night and I would never remember it. But in fact it was an IQ test, and most of those things are pretty easy for a mathematician type. So I probably did very well on two of them, and that probably got me over the hurdle.

HAIGH: They would call those IQ tests “verbal reasoning,” for some reason.

GEAR: Oh, okay, I don’t know. I just remember that instead of questions of fact, what I did was little puzzles. So that got me to Harrow County School, the grammar school—not the famous Harrow, but down the hill from it in the cheaper part of town. But it was the best local grammar school. From there things went on up. My ambition at that age was probably to be something like an electrician, because I loved technology, I loved taking old radios apart and trying to rebuild them in some way, I loved mechanical things.