Talking Nets: An Oral History of Neural Networks

Artificial Intelligence 119 (2000) 287–293

Book Review

J.A. Anderson and E. Rosenfeld (Eds.), Talking Nets: An Oral History of Neural Networks. MIT Press, Cambridge, MA, 1998. 448 pp. $39.95 (cloth), $22.95 (paper). ISBN 0-262-01167-0. http://mitpress.mit.edu/book-home.tcl?isbn=0262011670

Reviewed by Noel E. Sharkey, University of Sheffield, Department of Computer Science, Regent Court, 221 Portobello Street, Sheffield S1 4DP, UK

Drugs, tragedy, romance, success, misfortune, luck, pathos, bitterness, jealousy and struggle are not words that we expect in a review of an academic book of interviews with some of the great and good of neural network research. But this book is not concerned only with the science and engineering of neural network research as seen in the literature; it deals with the motivations, from childhood onwards, of each of the 17 interviewees, and with informal accounts of their scientific endeavours. It is an attempt to reach an understanding of the science through an understanding of the scientists themselves: their schooling, their interests, and their major influences. It is a wonderfully charming book and well worth reading.

To maintain the historical sequence of events, the editors have placed the interviews in the order of the birth dates of the interviewees. This works well in many cases, but there are some notable exceptions. For example, the Werbos interview is sandwiched between Sejnowski and Hinton, although he carried out his significant work much earlier. Then there is Leon Cooper who, although fourth oldest, really belongs, historically, at position 12, because he first had a Nobel-winning career in physics. The idea also seems a bit ageist when Cooper actually complains in the interview about having to give his birth date.
While the “order by age” gives a reasonable approximation to the historical lineage, a better lineage could have been obtained by putting the authors in the order in which they entered the field, or by grouping them into a first period (’40s to ’60s), a middle period (’60s to ’79), and a recent period (’79 to the present).

As an historical record, some of the early interviews provide first-hand accounts of the intellectual and emotional climate surrounding studies of artificial neural networks, cybernetics and artificial intelligence at the birth of computing. Rather than the usual cleaned-up view of science, this book looks at discovery in the raw. Names like Wiener, McCulloch, Pitts, Carnap, and Russell are intertwined, and we see how their paths crossed in the difficult period during and following the Second World War. Jerry Lettvin, Jack Cowan and Michael Arbib all give accounts of their association with the famous McCulloch (and Pitts) group at MIT. All three give differing accounts of how McCulloch and Pitts met; however, the Arbib and Cowan accounts came from McCulloch, whereas the Lettvin account is first hand.

Lettvin’s interview, the first in the book, is particularly gripping. It sets the scene of wartime USA in the hospital and psychiatric community and tells how the development of ideas still progressed through adversity (there is no mention of funding bodies or research grants here). Achievement arose through the power of intellectual passion to develop a theory about how the brain gives rise to mind (and to intelligence).
In this respect, the achievement award must go to McCulloch and Pitts, who inspired a whole generation of brilliant young researchers, some of whom are interviewed here. The McCulloch–Pitts relationship and the genius of Pitts are also described in the interviews of Jack Cowan and Michael Arbib.

It was McCulloch and Pitts, in the early 1940s, who began to build the computational foundations for neural network research. Pitts was an enthusiast of the work of Leibniz, who had invented the binary number system in the late 17th century. Leibniz had also pointed out that, if a binary calculator was possible (he could not actually build one at the time), then so was a logical machine that could perform any finite task that could be expressed completely and unambiguously in logical language. This fitted well with the goals of McCulloch and Pitts. From real neural research, they seized on the idea of the all-or-none, or threshold, property of neurons: if the input is strong enough, the neuron fires; otherwise it stays at base level. Thus the neuron can, abstractly, be considered as a binary computing element with an output of 1 (firing) or 0 (resting). Logical (Boolean) machines could be constructed by joining together these simple binary neurons, with the goal of showing one way in which a logical language of thought could arise from a nervous system.

McCulloch is painted as a wild, extroverted Scot who liked to drink a lot; it made him very ill in the end. As described in other interviews, he was a frightening man, but with a very generous spirit and a good heart. He welcomed the homeless Lettvin and Pitts into his own family home with his wife (as he did for later students) during the early 1940s. This is when they wrote their seminal paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity” [Bulletin of Mathematical Biophysics 5 (1943) 115–133].
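The all-or-none abstraction can be made concrete with a short sketch. The following is not code from the book; it is a minimal illustration of a McCulloch–Pitts threshold unit, with weight and threshold choices (my own, for illustration) that realize the basic Boolean gates:

```python
def mp_neuron(inputs, weights, threshold):
    """All-or-none unit: fires (returns 1) iff the weighted input
    sum reaches the threshold; otherwise rests (returns 0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Particular weight/threshold settings yield the Boolean gates,
# from which larger logical machines can be wired together.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a], [-1], 0)
```

Joining such units into networks is one way a "logical language of thought" could, in principle, be computed by threshold elements.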
Interestingly, Cowan tells us that McCulloch had been working on these problems for about 20 years, but the solutions came within a couple of months of his collaboration with the 17-year-old Pitts.

Pitts is portrayed as a brilliant but tragic character who came to an early and sad end. Apparently, at age 12, he read Principia Mathematica when he ran into a library to hide from bullies. Afterwards, when he escaped, he wrote to Russell pointing out some serious problems, and Russell invited him to come to England as a student. Instead, he went to Chicago, where the famous logician Carnap was on the faculty. As an unregistered student, Pitts wrote a critique of Carnap’s new book on logic. The story goes that the penniless Pitts marched into Carnap’s office one day, presented him with some of the failings and problems of the book, and then left without giving his name. Carnap searched for several months before he found Pitts, and he got the university to give him a menial job. This is a little like the recent movie Good Will Hunting, except that there the young working-class genius gets a lot of money for accepting promotion from janitor to researcher. In the real world, Pitts got promoted to assistant janitor!

A number of interviewees mention Pitts’ tragic demise. Arbib tells the tale of meeting Pitts in about 1959 to talk to him about some of the more obscure proofs in the 1943 McCulloch and Pitts article. He couldn’t get much sense out of Pitts, who was shaking badly with the DTs. Lettvin attributes Pitts’ disillusionment partly to the results of a series of experiments by their group that culminated in the paper “What the Frog’s Eye Tells the Frog’s Brain”. They had found perceptual invariances that were not formally tractable in a logical way. Although Pitts believed in and approved of the results, they showed him that logic was not the right approach to the brain.
During the late 1950s there was interest in how to adapt neural networks automatically, that is, in developing learning rules. This did not grow from the McCulloch group, according to Jack Cowan, because McCulloch believed that “You have to get to know the anatomy [of neural networks] before you can pervert it”. His philosophy was that if something is true, it works, rather than that if something works, it is true. McCulloch believed in innate structure. His group were strong believers in the Kantian notion of the synthetic a priori, and so had little time for learning theories.

The third chapter is an interview with Bernard Widrow, one of the pioneers of neural network learning theory. There is a feeling of real scientific development and perseverance in Widrow’s interview. He has talked about some of these things before at meetings, but it is useful to have a written archival record. Widrow and Hoff developed a least mean squares learning algorithm in 1959 and successfully tested it on a computer. Because of the limitations of computing facilities at the time, Widrow (with Hoff) developed a simple method of copper-plating pencil leads to produce variable resistors; reversing the current removed the plating. In this way, they could have adaptive weights in hardware. This was the first adaptive filter built in hardware. Rosenblatt, for his famous Perceptron model of the same period, used motorised ‘pots’ to adjust the weights.

There is a myth that the first wave of neural network research, particularly on learning machines, came to an abrupt end because of a seminal critique by Minsky and Papert in their book, Perceptrons [MIT Press, 1969]. Apparently Minsky had attended an early lecture by Rosenblatt on perceptron learning, where the audience were not convinced that it could do what Rosenblatt said it could do (there was no perceptron convergence proof then). Cowan was there and says that he thinks it was this encounter that set Minsky against perceptrons.
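The Widrow–Hoff least mean squares rule mentioned above is simple enough to sketch. The following is an illustrative reconstruction, not code from the book: each weight of a linear unit is nudged in proportion to the error between the target and the unit's output (the training data and learning rate are my own choices for the example).

```python
def lms_train(samples, lr=0.1, epochs=200):
    """Widrow-Hoff LMS (delta) rule for a two-input linear unit:
    nudge each weight by (learning rate) * (error) * (its input)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = w[0] * x1 + w[1] * x2 + b   # linear output, no threshold
            err = target - y                # prediction error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Fit the linear target y = x1 + 2*x2 from four examples.
samples = [((0, 0), 0), ((1, 0), 1), ((0, 1), 2), ((1, 1), 3)]
w, b = lms_train(samples)
```

In the Widrow–Hoff hardware, the same error-proportional adjustment was realized physically, by plating or deplating copper on pencil leads to change their resistance.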
Minsky and Papert’s book contains a number of proofs exposing some of the limitations of learning methods with single-layer neural nets.
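The best-known such limitation can be demonstrated rather than proved: no single threshold unit can compute exclusive-or, because XOR is not linearly separable. The following sketch (my own illustration, not taken from Perceptrons or the review) searches exhaustively over small integer weights and thresholds:

```python
from itertools import product

def separable(truth):
    """Brute-force search for integer weights and a threshold that
    make one threshold unit realize the given Boolean truth table."""
    for w1, w2, t in product(range(-3, 4), repeat=3):
        if all((1 if w1 * a + w2 * b >= t else 0) == truth[(a, b)]
               for a, b in product([0, 1], repeat=2)):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```

The search finds weights for AND but none for XOR; scaling shows that no real-valued weights would help either, since the four XOR points cannot be split by any single line.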
