Creativity: a Contribution to Ed Feigenbaum’s Festschrift. Panel on Creativity, Learning and Discovery. Stanford, March 2006.

Harold Cohen

After a lifetime of teaching I think I have a rough idea what human learning means, but I don’t have much idea what human learning and machine learning have in common. And while I think about program autonomy every day, I don’t have much idea how autonomous human beings are, much less whether the term has the same meaning in both domains. Then there’s creativity… well, I’ll come to that, but before I convince you I’m on the wrong panel, let me back up a little.

As some of you know, I am the author of AARON, an art-making program that I began while I was a guest scholar here at the AI Lab, more than thirty years ago. I need hardly explain to this gathering that it was Ed’s guiding hand that brought me here, Ed having reviewed an otherwise ill-fated NSF proposal and judged me to be a soul worth saving. In the name of program autonomy I’d like to be able to say that the images on the cover of your programs are AARON’s tribute to Ed, but in truth the tribute is my own, as it should be.

AARON did make the images, however; it’s a program that makes art, not a program with which I make art. If I start the program running before I go to bed at night I’ll have about a hundred new, original images to review next morning. If that’s evidence of autonomy, then I might reasonably claim that AARON has a degree of autonomy. And in the sense that any human artist who could do what AARON does would be considered very creative, I might also claim that AARON is creative. In fact, I have never made such a claim, for reasons I hope to make clear in this talk.

In any case, it isn’t AARON’s arguable creativity I want to talk about. I’m going to make the quite immodest assumption that to have conceived of a program like AARON and to have kept it moving forward for 35 years has required a modicum of creativity of my own, and what I want to report about my own creativity is that, in exercising it, I’ve never experienced anything that didn’t feel like perfectly normal intelligent behavior addressing very obvious problems.

If I’m right about that, then here’s a question that will take us to the heart of the issue of creativity. What accounts for the fact that I have found particular problems to be obvious, crucially in need of solution and compelling enough to justify years of work, when evidently they’ve been beneath the radar for everyone else?

In short, I want to separate the implementation of creativity from its essential core. The implementation, on the one hand, the individual’s intellectual strategies and his expert knowledge, may indeed require only the normal intelligence I’ve sensed in my own behavior. These are required components of creative performance, certainly; but, of course, every field has its knowledgeable experts armed with powerful intellectual strategies who are productive but not creatively productive.

I’m going to argue, on the other hand, that the essential core of creativity lies, not in its implementation, but in the lifelong intellectual development of the individual and in the highly differentiated world model to which it gives rise. For the individual who lives inside that world model, a problem that invokes creativity sticks out like a sore thumb and the need to deal with it is unavoidable and compelling. For the rest of us outside that model, it isn’t even a minor bump.

So: here I am with my thirty-three-year-old program, and my own current sore thumb is the idea of program autonomy. I’ve been increasingly preoccupied with it over the past few years, but let me see if I can identify the point at which that preoccupation began.

I met my first computer in 1968, at about the time I joined the visual arts faculty at UCSD. That was five years before I came to the AI Lab. Evidently I knew from the outset that using the computer meant programming the computer; which was surprising, given the cultural prejudices of the time. One of the reviewers of my NSF proposal voiced the common view when he wrote “How can Professor Cohen hope to learn Fortran? He’s an artist.” And those few other artists who came to computing in the late ‘sixties seem generally to have been persuaded by this view that they needed to enlist programmers to do what they wanted done. Inevitably, the result defined the computer as a rather conventional art-making tool, but one that disallowed the traditional, intensely intimate, hands-on approach to the making of art.

My own assumptions must have reflected a pre-existing attitude about the artist’s hands-on involvement in art-making, then, and the fact that I’d been a painter for twenty years might be enough to explain it. But why did I become involved with computers in the first place?

It wasn’t, in fact, because of any interest in computers; it was because of an increasing frustration about image-making. Making images isn’t difficult, you understand; anyone who can make some dirty marks on a piece of paper can make images, because the viewer brings a propensity to assign meaning to the marks, referring them to objects and events in the real world; which is basically what one means by an image. That’s the easy part; the hard part is understanding how this remarkable transaction works. And by the end of the ‘sixties, with twenty years of painting behind me, I was becoming acutely aware that I had hardly more idea how it worked than I’d had when I started.

I never actively sought computing. But once presented with the opportunity, it occurred to me that in the process of writing a program to make the marks – in separating myself from the act -- I might learn more about how images worked than I’d ever learned by making them myself. Whatever creativity was required to take that step, it required only common sense to see that the separation had to be complete, that if I allowed myself to diddle with the making of individual images I might just as well not have a program do it at all.

So was that the beginning of my preoccupation with program autonomy? No, it was an important step, but it wasn’t the beginning. Let’s go back a little further.

In the early ‘sixties, making my paintings involved inventing formal elements -- things that didn’t exist in the real world -- and then rendering them as if they did. Those paintings allowed me to investigate the mechanisms of image-making, but as time went on I found myself developing an uneasy feeling about the invention part of the process. It was getting harder to do all the time and I was becoming increasingly convinced that there had to be a limit to how long one could go on inventing.

So I was faced with the problem, unique to my own condition, of finding a way to generate formal material that didn’t require constant invention. My solution was to invent a set of rules for making the paintings, so that I could then simply follow the rules and not need to invent anything else. Of course, it didn’t really eliminate the need for invention, but it did remove it from the day-to-day formal domain where I found it so troubling and put it somewhere else, at a higher level.

Which certainly explains why, five years later, I simply assumed that using the computer meant programming the computer. I’d already implemented a rule-based program, for myself if not for a computer.

Was that the beginning of my concern with program autonomy, then?

No. Again, it was an important step in this story, but not its beginning. If I wanted to push back further I would need to account for the paintings in the early ‘sixties that led to this particular problem and to its solution. Pushing back even further, I could consider why I chose to be a painter rather than something else. Would I, eventually, find the beginning?

Surely not. I see creativity, not in terms of innovation with respect to real-world issues, where it is most in evidence, but, more fundamentally, as the agent that pushes forward the evolution of the individual’s world model; as the drive towards some strongly-felt though, initially, ill-defined goal; the goal of program autonomy in my example. AARON serves as the agent of evolution, to be sure; but the evolving world model is mine, not AARON’s; and that is the primary reason I do not claim it to be creative.

Several qualifications are in order, however. This is a view of human creativity, not artificial creativity. To justify using the same term in the two very different domains, we need at least to identify certain key characteristics common to both. But I can’t pretend to much insight about what those characteristics might be, and I’m aware that in denying AARON’s creativity I may be applying narrowly human constructs where they don’t belong.

I’m reminded of a conversation I had with Ed in the 1970’s. I’d been rash enough to write, in an exhibition catalog, that machines didn’t think, but they did make decisions; Ed was quite upset. “They do think,” he said, “and in due course we’ll all talk about machines thinking and we’ll never give the matter a second thought.”

He had a point. And I can’t help wondering, occasionally, whether in due course we’ll talk about AARON as the most creative program in history and never give the matter a second thought.
