For Problems Sufficiently Hard ... AI Needs CogSci

Selmer Bringsjord & Micah Clark
Department of Cognitive Science • Department of Computer Science
Rensselaer AI & Reasoning (RAIR) Lab: http://www.cogsci.rpi.edu/research/rair/index.php
Troy NY 12180 USA
[email protected] • [email protected]
http://www.rpi.edu/∼brings

The View Sketched

Is cognitive science relevant to AI problems? Yes — but only when these problems are sufficiently hard. When they qualify as such, the best move for the clever AI researcher is to turn not to yet another faster machine bestowed by Moore's Law, and not to some souped-up version of an instrument in the AI toolbox, but rather to the human mind's approach to the problem in question. Despite Turing's (Turing 1950) prediction, made over half a century ago, that by now his test (the Turing test, of course) would be passed by machines, the best conversational computer can't out-debate a sharp toddler. The mind is still the most powerful thinking thing in the known universe; a brute fact, this.

But what's a "sufficiently hard" problem? Well, one possibility is that it's a problem in the set of those which are Turing-solvable, but one which takes a lot of time to solve. As you know, this set can be analyzed into various subsets; complexity theorists do that for us. Some of these subsets contain only problems that most would say are very hard. For example, most would say that an NP-complete problem is very hard. But is it sufficiently hard, in our sense? No. Let P be such a problem, a decision problem for F associated with some finite alphabet A, say. We have an algorithm 𝒜 that solves P.[1] And odds are, 𝒜 doesn't correspond to human cognition. The best way to proceed in an attempt to get a computer to decide particular members of A* is to rely on computational horsepower, and some form of pruning to allow decisions to be returned in relatively short order.
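To make that last point concrete, here is one small illustration of what "computational horsepower plus pruning" typically looks like in practice. The example is ours, not the authors': a brute-force decision procedure (in Python) for subset sum, a standard NP-complete problem, with a simple bound used to prune hopeless branches.

    # Decide an instance of subset sum: is there a sub-collection of `values`
    # (assumed non-negative integers) summing exactly to `target`?
    # Strategy: exhaustive search, pruned whenever the values that remain
    # cannot possibly reach the target.
    def subset_sum(values, target):
        values = sorted(values, reverse=True)
        suffix = [0] * (len(values) + 1)          # suffix[i] = sum of values[i:]
        for i in range(len(values) - 1, -1, -1):
            suffix[i] = suffix[i + 1] + values[i]

        def search(i, remaining):
            if remaining == 0:
                return True
            if i == len(values) or suffix[i] < remaining:
                return False                      # prune: this branch cannot succeed
            if values[i] <= remaining and search(i + 1, remaining - values[i]):
                return True                       # include values[i]
            return search(i + 1, remaining)       # exclude values[i]

        return search(0, target)

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
    print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # False

A procedure like this gets better with faster hardware and cleverer bounds, not with anything resembling the way a person reasons about a particular instance; that asymmetry is exactly what the chess example below turns on.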
What we have just described structurally maps with surprising accuracy onto what was done in AI specifically for the problem of chess. In his famous "20 Questions" paper, written in the very early days of AI and CogSci (and arguably at the very dawn of a sub-field very relevant, for reasons touched upon later, to issues dealt with herein: computational cognitive modeling and cognitive architectures), Newell (Newell 1973) suggested that perhaps the nature of human cognition could be revealed by building a machine able to play good chess. But Deep Blue was assuredly not what Newell had in mind. Deep Blue was an experiment in harnessing horsepower to muscle through a Turing-solvable, but time-consuming, problem. Kasparov lost because he earns his livelihood playing a game that, from the standpoint of the affirmative answer defended in this brief paper, is too easy (as one of us has argued elsewhere: (Bringsjord 1998)).

Things change when you move to Turing-unsolvable problems. Now of course if you agreed that some humans can solve such a problem, you would have to agree as well that computationalism, the view (put roughly) that human minds are bio-incarnated computing machines at or below the Turing Limit, is false.[2] We suspect that many of our readers will be averse to such agreement, and in the present debate that's fine. It's fine because our claim is that if AI researchers want to try to get standard computing machines to solve meaty particular problems from a Turing-unsolvable class of problems,[3] they need to turn for help to human cognition. Obviously, one can affirm computationalism while still maintaining that humans have the capacity to solve particular problems from a Turing-unsolvable class. For example, while it may not be true that humans can solve the halting problem, it's undeniable that, for some particular Turing machines of considerable complexity, ingenious, diligent humans can ascertain whether or not they halt.

Now, we must confess that, at least as of now, we don't have a deductive argument for the view that AI must turn to human cognition in order to produce a machine capable of solving particular problems from a set of Turing-unsolvable ones. What we do have is some empirical evidence, connected to specific challenges of the sort in question. That is, we can explain the challenges, and show that AI of a sort that explicitly turns for guidance to the human case is more successful than the cognitive-science-ignoring variety in surmounting these challenges. We will briefly mention three such challenges here: one that researchers have explored in the Rensselaer AI & Reasoning (RAIR) Lab with indispensable help from many others (esp. Owen Kellett), and with support from the National Science Foundation; another that has in significant part catalyzed a major advance in computational logic; and thirdly, the challenge of getting a machine to learn in a special manner associated, in the human sphere, with learning by reading.

Footnotes:
[1] I.e., for every u ∈ A*, 𝒜(u) = Yes iff Fu (F is true of u), and 𝒜 returns No when not Fu.
[2] For a recent clarification of what computationalism amounts to, and an argument that the doctrine is false, see (Bringsjord & Arkoudas 2004).
[3] We include as well problems that appear to be Turing-unsolvable.

Challenge #1: Busy Beaver

The first challenge is produced by the attempt to "solve" Rado's (Rado 1963) Σ problem (or the "busy beaver" problem).[4] The Σ function is a mapping from N to N such that: Σ(n) is the largest number of contiguous 1's an n-state Turing machine with alphabet {0, 1} can write on its initially blank tape, just before halting with its read/write head on the leftmost 1, where a sequence 11···11 of m contiguous 1's is regarded simply as m.[5] Rado (Rado 1963) proved this function to be Turing-uncomputable long ago; a nice contemporary version of the proof (which, by the way, is not based on diagonalization, the customary technique used in the proof of the halting problem) is given in (Boolos & Jeffrey 1989). Nonetheless, the busy beaver problem is the challenge of determining Σ(n) for ever larger values of n.[6]
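Stated a bit more formally (this restatement, in LaTeX notation, and the helper name ones(M) are ours), the definition just given amounts to:

    \Sigma(n) \;=\; \max\bigl\{\, \mathrm{ones}(M) \;:\; M \text{ is an } n\text{-state Turing machine over } \{0,1\}
                                   \text{ that eventually halts when started on a blank tape} \,\bigr\}

where ones(M) is the number of contiguous 1's left on the tape when M halts under the convention stated above. To give a sense of how quickly the challenge escalates: under the standard conventions of the busy-beaver literature, Σ(1) = 1, Σ(2) = 4, Σ(3) = 6, and Σ(4) = 13, while for n = 5 and beyond only lower bounds were known at the time (e.g., Σ(5) ≥ 4098).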
How would one go about trying to get a computer to determine Σ(n), for, say, 6? Well, no matter what you do, you will run smack into a nasty problem: the problem of holdouts, as they are called. Holdouts are Turing machines which, when simulated (starting, of course, on an empty tape), continue to run, and run, and run ... and you're not sure if they're going to stop or not.[7] So what to do? Well, smart humans develop ingenious visual representation and reasoning to figure out whether holdouts will terminate. Some of these representations, and the reasoning over them, are remarkable.

As a case in point, let us examine one of these representations: so-called "Christmas Trees," discussed in (Brady 1983; Machlin & Stout 1990). For purposes of this discussion, consider the following notation used to represent the contents of a tape:

    0*[U][X][X][X][V_s]0*

Here 0* denotes an infinite sequence of 0's that caps each end of the tape. Additionally, [U], [X], and [V] represent some arbitrary sequences of characters on the tape, while the subscript s indicates that the machine is in state s with its read/write head at the leftmost character of the [V] character sequence.

[...] it makes on the tape. Machines exhibiting this behavior are classified as non-halters due to a repeatable back-and-forth sweeping motion which they exhibit on the tape. Observably, the pattern of the most basic form of Christmas Trees is quite easy to recognize. The machine establishes two end components on the tape and one middle component. As the read/write head sweeps back and forth across the tape, additional copies of the middle component are inserted, while maintaining the integrity of the end components at the end of each sweep.

[Figure 1: Christmas Tree Execution. A partial execution trace of a 4-state Christmas Tree machine, listing successive tape contents and machine states together with the corresponding 0*[U]...[V]0* component descriptions; the raw trace is not reproduced here.]

Figure 1 displays a partial execution of a 4-state Christmas Tree machine which is representative of one sweep back and forth across the tape. As can be seen, at the beginning of this execution, the machine has established three middle or [X] components capped by the [U] and [V] end components on each side. As the read/write head sweeps back and forth across the tape, it methodically converts the [X] components into [Y] components and then into [Z] components on the return [...]
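The holdout problem, and execution traces like the one summarized in Figure 1, both come out of the same elementary activity: simulating candidate machines step by step under some resource bound. The following minimal Python sketch is ours, not the RAIR Lab's actual tooling; the transition-table encoding and the step bound are assumptions made for illustration. Any machine still running when the bound runs out is, from the program's point of view, a holdout.

    # A machine is a dict mapping (state, scanned_symbol) -> (write, move, next_state),
    # with move in {-1, +1} and the special state 'H' meaning "halt".
    def simulate(machine, max_steps):
        tape = {}               # sparse tape: position -> symbol; blank cells read as 0
        state, pos = 0, 0
        for _ in range(max_steps):
            write, move, state = machine[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            if state == 'H':
                return 'halted', sum(1 for v in tape.values() if v == 1)
        return 'holdout', None  # may or may not ever halt: that is precisely the problem

    # The classic 2-state champion: halts after 6 steps with four 1's on the tape,
    # matching the known value Sigma(2) = 4.
    bb2 = {
        (0, 0): (1, +1, 1), (0, 1): (1, -1, 1),
        (1, 0): (1, -1, 0), (1, 1): (1, +1, 'H'),
    }
    print(simulate(bb2, 100))   # ('halted', 4)

For n = 6 the space of candidate machines is astronomically large, and no fixed step bound, however generous, eliminates the stubborn residue of holdouts the authors describe; deciding their fate is where human-style representation and reasoning enter.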

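As a further toy illustration (ours, and far weaker than the inductive arguments actually deployed in (Brady 1983; Machlin & Stout 1990)), the first step of the Christmas Tree idea can be mechanized: check whether successive tape snapshots, captured each time the machine re-enters a designated state, decompose into a fixed prefix U, a growing run of copies of a fixed middle block X, and a fixed suffix V. Here U, X, and V are concrete strings, whereas in the notation above they stand for component classes, and a positive answer is only evidence of the sweeping pattern, not a proof of non-halting.

    # Given tape snapshots s_0, s_1, ... (as strings), look for strings U, X, V and an
    # integer k such that s_i == U + X*(k+i) + V for every i, i.e. each successive
    # snapshot inserts one more copy of the middle block X between a fixed prefix
    # and a fixed suffix.
    def christmas_tree_evidence(snapshots):
        first = snapshots[0]
        n = len(first)
        for lu in range(n + 1):                  # candidate prefix lengths
            for lx in range(1, n - lu + 1):      # candidate middle-block lengths
                U, X = first[:lu], first[lu:lu + lx]
                k, pos = 0, lu
                while first[pos:pos + lx] == X:  # count the copies of X in s_0
                    k, pos = k + 1, pos + lx
                V = first[pos:]
                if all(s == U + X * (k + i) + V for i, s in enumerate(snapshots)):
                    return U, X, V, k
        return None

    # Snapshots that grow by one copy of "10" per sweep, in the spirit of Figure 1:
    print(christmas_tree_evidence(["11010", "1101010", "110101010"]))
    # -> ('1', '10', '', 2): fixed prefix "1", middle block "10", empty suffix

The published analyses go further: they argue, in effect by induction on the number of sweeps, that once such a pattern is established it must recur forever, and it is that argument, not the surface pattern alone, which licenses removing a machine from the holdout pile.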