
GENERAL ARTICLE

Artificial Intelligence∗: The Big Picture

Deepak Khemani

In the first week of the year 2020, we got the news that AI now outperforms doctors in detecting breast cancer. This is in line with a continuous stream of news coming from the world of diagnosis, and has lent credence to the sentiment that AI is poised to overtake humankind. However, some perceptive observers have commented that recent advances are largely due to the massive increase in both the availability of data and computing power. Moreover, it is only the narrow task of classification that has driven the news blitz. Classification can be thought of as a stimulus-response process. Human intelligence is much broader. In particular, humans often display a stimulus-deliberation-response cycle. There is much that goes on in the "thinking" phase that was the original aim of AI before data and speed started dominating applications. This second part of the two-part article on AI traces the evolution of the field since the Dartmouth conference, and takes stock of where we are on the road to thinking machines.

Deepak Khemani is a professor at IIT Madras. He has been working in AI for over thirty years, with a focus on knowledge representation and problem solving. He is the author of the textbook A First Course in Artificial Intelligence, and has three online courses on Swayam. His current focus is to implement a contract bridge playing program that reasons like a human expert.

Keywords: Search, knowledge, logic, language, machine learning, agents.

The term Artificial Intelligence is attributed to John McCarthy (1927–2011), who along with Marvin Minsky (1927–2016), Nathaniel Rochester (1919–2001) and Claude Shannon (1916–2001) organized a summer conference at Dartmouth College. Much of research in AI had its seeds in the 1956 Dartmouth conference. Pamela McCorduck, in her delightful book Machines Who Think, observed that "several directions are considered to
have been initiated or encouraged by the Workshop: the rise of symbolic methods, systems focussed on limited domains (early Expert Systems), and deductive systems versus inductive systems". The proposal also included the use of neuron nets ("How can a set of (hypothetical) neurons be arranged so as to form concepts") and the use of natural language.

∗Vol.25, No.1, Resonance, January 2020. DOI: https://doi.org/10.1007/s12045-019-0921-2

GOFAI

McCorduck says that the greatest impression at Dartmouth was made by "two vaguely known persons from RAND and Carnegie Tech...a significant afterthought." The two were Herbert Simon (1916–2001) and Allen Newell (1927–1992), working at Carnegie Tech and the RAND Corporation, who played a major role in setting up AI research in what was later called Carnegie Mellon University. Along with J. C. Shaw (1922–1991), also from RAND, they had already developed a program called the Logic Theorist (LT). "It was the first program deliberately engineered to mimic the problem solving skills of a human being". It went on to prove several theorems in Russell and Whitehead's celebrated Principia Mathematica, finding shorter and more elegant proofs for some! Simon, a Nobel Laureate in Economics, confirmed in his 1996 book Models of My Life that "a paper coauthored by LT was rejected by the Journal of Symbolic Logic on the grounds that it was not a new result". It is also interesting to note that Simon wrote to Bertrand Russell (1872–1970), an author of the original proof, that "in general, the machine's problem-
solving is much more elegant when it works with a selected list of strategic theorems than when it tries to remember and use all the previous theorems in the book." A repository of knowledge is a key to intelligent behaviour.

That retrieving the right chunk of knowledge from memory is not easy for humans either is illustrated by the following high school problem, given in the book How to Solve It: Modern Heuristics (Michalewicz and Fogel, 1999). The authors say that problems given at the end of the chapter in math books are easier to solve because we know the method we are expected to employ. Not so when the problem is posed in later life, outside the context of the chapter. Try the following problem yourself: Given a triangle ABC and an interior point D in the triangle, show that the sum of the lengths of the segments AD and DC is less than the sum of AB and BC.

The statement by Simon is revealing of a major problem in AI: how to retrieve the relevant piece of knowledge from the vast repository that the memory may contain. Indeed, an active area of interest in AI is Memory Based Reasoning which, like humans, aims to exploit knowledge and experience for problem solving. One feature that separates experts from novices is the ability to do this. The chess psychologist Adriaan de Groot (1914–2006) conducted several ground-breaking experiments on the cognitive processes that occur in the brains of strong chess players. His most startling result was that grandmasters found a good move during the first few seconds of contemplation of the position, drawing attention to "the role of memory and visual perception in these processes, and to how strong players, especially grandmasters, used experience with past positions to expedite the process of finding a move".
Simon and Newell had laid the foundation of Classical or Symbolic AI when they put forth the Physical Symbol System Hypothesis, which says that processes acting upon symbol systems are sufficient to create artificial intelligence. A symbol is "a perceptible something that stands for something else". Road signs, numerals in mathematics, and letters of an alphabet are examples. Symbol systems refer to composite structures of symbols, for example, words and sentences in a natural or a formal language. Processes are realizations of algorithms acting upon symbol systems.

Symbolic AI thus is concerned with writing programs which themselves are symbol systems operating upon data structures, also symbol systems. This approach has also been called Good Old Fashioned AI (GOFAI), especially in the light of bottom-up approaches in which intelligent behaviour emerges from a collection of simple elements, like neurons, often through a process of evolution and learning. It is sometimes said that while in classical AI symbols stand for elements, individuals and also concepts, in neural networks it is not clear how such things are represented. In artificial neural networks, also called sub-symbolic systems, representation is somehow encoded in the weights of the connections between neurons. Further, these representations are not localised as in symbols, but are distributed across the network. Such networks are also called connectionist networks, in which information is stored in patterns across neurons. A striking feature is that when parts of the network are damaged, specific information is still not lost. A point to ponder here is that even artificial neural networks are implemented as computer programs, and so deep down are somehow symbolic.
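A toy sketch can make the hypothesis concrete. In the fragment below (my own illustration, not from the article; the names and the blocks-world facts are invented), symbols are plain strings, symbol structures are tuples such as ("on", "A", "B"), and a "process" is simply a program that manipulates those structures:

```python
# A minimal physical-symbol-system sketch: symbols are strings,
# symbol structures are tuples, and a process is a program that
# operates on those structures. Illustrative only.

facts = {("on", "A", "B"), ("on", "B", "Table")}

def above(kb, x, y):
    """Is block x (transitively) above y? Repeatedly applies the
    rule: on(x, z) and above(z, y) implies above(x, y)."""
    if ("on", x, y) in kb:
        return True
    return any(rel == "on" and a == x and above(kb, b, y)
               for (rel, a, b) in kb)

print(above(facts, "A", "Table"))  # True: A is on B, which is on the Table
print(above(facts, "B", "A"))      # False
```

The program itself is a symbol structure too, which is precisely the point of the hypothesis: both the data and the processes acting upon it live in the same symbolic medium.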
CHESS

Games like chess have long held a fascination for AI researchers because, on the one hand, they are considered to be hallmarks of intelligence and genuinely complex problems, while, on the other hand, they are easier to implement in terms of input, output and representation. The paraphernalia required is minimal, and I have known chess enthusiasts on a walk playing a game mentally, simply by saying their moves aloud. Luminaries like John von Neumann (1903–1957) and Alan Turing have pondered over chess. Alex Bernstein (1936–2010) from IBM, who was present at Dartmouth, was already working on chess.

It was quite apparent that a program that searches through future moves would be confounded by the explosion in the number of possibilities in chess. (See the section on searching below.) It is estimated that there are about 10^120 distinct chess games. Compare this to the estimated 10^75 or so fundamental particles in the entire universe, and it is clear that the game cannot be completely analysed. The British grandmaster David Levy in 1968 scoffed at the idea of a computer program beating him at chess and wagered that none would do so in the next ten years. He did, narrowly, win his bet, but machines were rapidly improving. A couple of decades later, in 1997, the then world champion Garry Kasparov lost a six-game match to IBM's Deep Blue machine. In 2006 Levy converted, and became a champion of AI, even going as far as publishing a book predicting that humanoid robots will become human companions in the future.
The oriental game of Go by the same measure is even harder, but as described later, even here a machine has beaten the reigning world champion.
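The scale of this explosion can be sketched with a back-of-the-envelope estimate in the spirit of Claude Shannon: if a position offers roughly b legal moves and a game lasts roughly d plies, a full search must examine about b^d lines of play. The branching factors and game lengths below are rough, commonly quoted figures, not exact values:

```python
import math

def game_tree_exponent(branching_factor, plies):
    """Base-10 exponent of the rough game-tree size,
    i.e. log10(branching_factor ** plies)."""
    return plies * math.log10(branching_factor)

# Rough, commonly quoted figures (assumptions, not exact values):
# chess: ~35 legal moves per position, games of ~80 plies
# Go:    ~250 legal moves per position, games of ~150 plies
print(f"chess: ~10^{game_tree_exponent(35, 80):.0f} lines")    # ~10^124
print(f"Go:    ~10^{game_tree_exponent(250, 150):.0f} lines")  # ~10^360
```

Both figures dwarf the roughly 10^75 fundamental particles in the universe, which is why game-playing programs rely on limited-depth search with evaluation functions and pruning rather than exhaustive analysis.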