Hubert Dreyfus has been a critic of
artificial intelligence research since the 1960s. In a series of papers and books, including
Alchemy and AI (1965),
What Computers Can't Do (1972; 1979; 1992) and
Mind over Machine (1986),
he presented an assessment of AI's progress and a critique of the
philosophical foundations of the field. Dreyfus' objections are
discussed in most introductions to the
philosophy of artificial intelligence, including
Russell & Norvig (2003), the standard AI textbook, and in
Fearn (2007), a survey of contemporary philosophy.[1]
Dreyfus argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious
symbolic
manipulation, and that these unconscious skills could never be captured
in formal rules. His critique was based on the insights of modern
continental philosophers such as
Merleau-Ponty and
Heidegger, and was directed at the first wave of AI research, which used high-level
formal symbols to represent reality and tried to reduce intelligence to symbol manipulation.
When Dreyfus' ideas were first introduced in the mid-1960s, they were met with ridicule and outright hostility. By the 1980s, however, many of his perspectives were rediscovered by researchers working in
robotics and the new field of
connectionism—approaches now called "
sub-symbolic" because they eschew early AI research's emphasis on high level symbols. Historian and AI researcher
Daniel Crevier writes: "time has proven the accuracy and perceptiveness of some of Dreyfus's comments." Dreyfus said in 2007, "I figure I won and it's over—they've given up."[5]
Dreyfus' critique
The grandiose promises of artificial intelligence
In
Alchemy and AI (1965) and
What Computers Can't Do (1972),
Dreyfus summarized the
history of artificial intelligence and ridiculed the unbridled optimism that permeated the field. For example,
Herbert A. Simon, following the success of his program
General Problem Solver (1957), predicted that by 1967:
- A computer would be world champion in chess.
- A computer would discover and prove an important new mathematical theorem.
- Most theories in psychology would take the form of computer programs.
The press reported these predictions in glowing reports of the imminent arrival of machine intelligence.
Dreyfus felt that this optimism was totally unwarranted. He believed
that these predictions were based on false assumptions about the nature of human
intelligence. Pamela McCorduck explains Dreyfus' position:
[A] great misunderstanding accounts for public confusion about
thinking machines, a misunderstanding perpetrated by the unrealistic
claims researchers in AI have been making, claims that thinking
machines are already here, or at any rate, just around the corner.
These predictions were based on the success of an "information
processing" model of the mind, articulated by Newell and Simon in their
physical symbol system hypothesis, and later expanded into a philosophical position known as
computationalism by philosophers such as
Jerry Fodor and
Hilary Putnam.
Because researchers believed they had successfully simulated the essential process of
human thought with simple programs, it seemed a short step to producing
fully intelligent machines. However, Dreyfus argued that philosophy,
especially
20th century philosophy,
had discovered serious problems with this information processing
viewpoint. The mind, according to modern philosophy, is nothing like a
computer.
Dreyfus' four assumptions of artificial intelligence research
In
Alchemy and AI and
What Computers Can't Do, Dreyfus
identified four philosophical assumptions that supported the faith of
early AI researchers that human intelligence depended on the
manipulation of symbols.
"In each case," Dreyfus writes, "the assumption is taken by workers in
[AI] as an axiom, guaranteeing results, whereas it is, in fact, one
hypothesis among others, to be tested by the success of such work."
- The biological assumption
- The brain processes information in discrete operations by way of some biological equivalent of on/off switches.
In the early days of research into
neurology, scientists realized that
neurons fire in all-or-nothing pulses. Several researchers, such as
Walter Pitts and
Warren McCulloch, argued that neurons functioned similarly to the way
Boolean logic gates operate, and so could be imitated by electronic circuitry at the level of the neuron.[11]
When digital computers became widely used in the early 50s, this argument was extended to suggest that the brain was a vast
physical symbol system, manipulating the binary symbols of zero and one. Dreyfus was able to refute the biological assumption by citing research in
neurology that suggested that the action and timing of neuron firing had analog components.
To be fair, however, Daniel Crevier observes that "few still held that
belief in the early 1970s, and nobody argued against Dreyfus" about the
biological assumption.
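The McCulloch–Pitts picture can be made concrete with a short sketch: a unit that fires if and only if the weighted sum of its binary inputs reaches a threshold, which is enough to reproduce simple Boolean gates. The following is an illustrative Python sketch; the function names are invented for this example.

```python
def mp_neuron(inputs, weights, threshold):
    # Fires (1) only if the weighted sum of binary inputs reaches the
    # threshold -- an "all-or-nothing" pulse, as in McCulloch and Pitts.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With unit weights, threshold 2 yields AND and threshold 1 yields OR.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
```

On this picture, a network of such discrete on/off units is all the brain would need to compute; it was exactly the analog action and timing of real neurons, cited by Dreyfus, that this model leaves out.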
- The psychological assumption
- The mind can be viewed as a device operating on bits of information according to formal rules.
Dreyfus refuted this assumption by showing that much of what we "know" about the world consists of complex
attitudes or
tendencies
that make us lean towards one interpretation over another. He argued
that, even when we use explicit symbols, we are using them against an
unconscious background of
commonsense knowledge
and that without this background our symbols cease to mean anything.
This background, in Dreyfus' view, was not implemented in individual
brains as explicit individual symbols with explicit individual meanings.
- The epistemological assumption
- All knowledge can be formalized.
This concerns the philosophical issue of
epistemology, or the study of
knowledge. Even if we agree that the psychological assumption is false, AI researchers could still argue (as AI founder
John McCarthy
has) that it was possible for a symbol processing machine to represent
all knowledge, regardless of whether human beings represented knowledge
the same way. Dreyfus argued that there was no justification for this
assumption, since so much of human knowledge was not symbolic.
- The ontological assumption
- The world consists of independent facts that can be represented by independent symbols.
Dreyfus
also identified a subtler assumption about the world. AI researchers
(and futurists and science fiction writers) often assume that there is
no limit to formal, scientific knowledge, because they assume that any
phenomenon in the universe can be described by symbols or scientific
theories. This assumes that everything that
exists can be
understood as objects, properties of objects, classes of objects,
relations of objects, and so on: precisely those things that can be
described by logic, language and mathematics. The question of what
exists is called
ontology, and so
Dreyfus
calls this "the ontological assumption". If this is false, then it
raises doubts about what we can ultimately know and about what intelligent
machines will ultimately be able to help us do.
The primacy of background coping skills
In
Mind over Machine (1986), written during the heyday of
expert systems,
Dreyfus analyzed the difference between human expertise and the
programs that claimed to capture it. This expanded on ideas from
What Computers Can't Do, where he had made a similar argument criticizing the "
cognitive simulation" school of AI research practiced by
Allen Newell and
Herbert A. Simon in the 1960s.
Dreyfus argued that human problem solving and expertise depend on
our background sense of the context, of what is important and
interesting given the situation, rather than on the process of
searching through combinations of possibilities to find what we need.
Dreyfus would describe it in 1986 as the difference between
"knowing-that" and "knowing-how", based on
Heidegger's distinction of
present-at-hand and
ready-to-hand.[14]
Knowing-that comprises our conscious, step-by-step problem-solving
abilities. We use these skills when we encounter a difficult problem
that requires us to stop, step back and search through ideas one at a
time. At moments like this, the ideas become very precise and simple:
they become context-free symbols, which we manipulate using logic and
language. These are the skills that
Newell and
Simon
had demonstrated with both psychological experiments and computer
programs. Dreyfus agreed that their programs adequately imitated the
skills he calls "knowing-that."
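The flavor of this step-by-step, symbol-at-a-time style can be suggested with a toy example. The following is an illustrative Python sketch of brute-force state-space search, not a reconstruction of Newell and Simon's actual programs.

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    # Enumerate possibilities one at a time -- the deliberate,
    # rule-governed style of reasoning Dreyfus calls "knowing-that".
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for state in successors(path[-1]):
            if state not in visited:
                visited.add(state)
                frontier.append(path + [state])
    return None

# Toy problem: sort a string of symbols by swapping adjacent characters.
def successors(state):
    return [state[:i] + state[i + 1] + state[i] + state[i + 2:]
            for i in range(len(state) - 1)]

print(breadth_first_search("CAB", "ABC", successors))  # ['CAB', 'ACB', 'ABC']
```

Everything the program "knows" is explicit: a set of context-free states and a rule for moving between them, searched combination by combination.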
Knowing-how, on the other hand, is the way we deal with things
normally. We take actions without using conscious symbolic reasoning at
all, as when we recognize a face, drive ourselves to work or find the
right thing to say. We seem to simply jump to the appropriate response,
without considering any alternatives. This is the essence of expertise,
Dreyfus argued: when our intuitions have been trained to the point that
we forget the rules and simply "size up the situation" and react.
Our sense of the situation is based, Dreyfus argues, on our goals,
our bodies and our culture—all of our unconscious intuitions, attitudes
and knowledge about the world. This "context" or "background" (related
to
Heidegger's
Dasein)
is a form of knowledge that is not stored in our brains symbolically,
but intuitively in some way. It affects what we notice and what we
don't notice, what we expect and what possibilities we don't consider:
we discriminate between what is essential and inessential. The things
that are inessential are relegated to our "fringe consciousness"
(borrowing a phrase from
William James): the millions of things we're aware of, but we're not really thinking about right now.
Dreyfus claimed that he could see no way that AI programs, as they were implemented in the 70s and 80s, could capture this
background or do the kind of fast problem solving that it allows. He argued that our unconscious knowledge could
never
be captured symbolically. If AI could not find a way to address these
issues, then it was doomed to failure, an exercise in "tree climbing
with one's eyes on the moon."
History
Dreyfus began to formulate his critique in the early 1960s while he was a professor at
MIT,
then a hotbed of artificial intelligence research. His first
publication on the subject is a half-page objection to a talk given by
Herbert A. Simon in the spring of 1961.
Dreyfus was especially bothered, as a philosopher, that AI researchers
seemed to believe they were on the verge of solving many long-standing
philosophical problems within a few years, using computers.
Alchemy and AI
In 1965, Dreyfus was hired (with his brother
Stuart Dreyfus' help) by
Paul Armer to spend the summer at
RAND Corporation's Santa Monica facility, where he would write
Alchemy and AI,
the first salvo of his attack. Armer had thought he was hiring an
impartial critic and was surprised when Dreyfus produced a scathing
paper intended to demolish the foundations of the field. (Armer stated
he was unaware of Dreyfus' previous publication.) Armer delayed
publishing it, but ultimately realized that "just because it came to a
conclusion you didn't like was no reason not to publish it."[17]
It finally came out as a RAND memo and soon became a best seller.
The paper flatly ridiculed AI research, comparing it to
alchemy:
a misguided attempt to change metals to gold based on a theoretical
foundation that was no more than mythology and wishful thinking.
It ridiculed the grandiose predictions of leading AI researchers,
predicting that there were limits beyond which AI would not progress
and intimating that those limits would be reached soon.
Reaction
The paper "caused an uproar", according to Pamela McCorduck. The AI community's response was derisive and personal.
Seymour Papert dismissed one third of the paper as "gossip" and claimed that every quotation was deliberately taken out of context.
Herbert A. Simon
accused Dreyfus of playing "politics" so that he could attach the
prestigious RAND name to his ideas. Simon said, "what I resent about
this was the RAND name attached to that garbage."[23]
Dreyfus, who taught at
MIT, remembers that his colleagues working in AI "dared not be seen having lunch with me."[24]
Joseph Weizenbaum, the author of
ELIZA, felt his colleagues' treatment of
Dreyfus
was unprofessional and childish. Although he was an outspoken critic of
Dreyfus' positions, he recalls "I became the only member of the AI
community to be seen eating lunch with Dreyfus. And I deliberately made
it plain that theirs was not the way to treat a human being."[25]
The paper was the subject of a short item in
The New Yorker
magazine on June 11, 1966. The piece mentioned Dreyfus' contention
that, while computers may be able to play checkers, no computer could
yet play a decent game of chess. It reported with wry humor (as Dreyfus
had) about the victory of a ten-year-old over the leading chess
program, with "even more than its usual smugness."
In hopes of restoring AI's reputation,
Seymour Papert arranged a chess match between Dreyfus and
Richard Greenblatt's
Mac Hack program. Dreyfus lost, much to Papert's satisfaction. An
Association for Computing Machinery bulletin[27] used the headline:
- "A Ten Year Old Can Beat the Machine— Dreyfus: But the Machine Can Beat Dreyfus"[28]
Dreyfus complained in print that he hadn't said a computer would
never play chess, to which
Herbert A. Simon
replied: "You should recognize that some of those who are bitten by
your sharp-toothed prose are likely, in their human weakness, to bite
back ... may I be so bold as to suggest that you could well begin the
cooling—a recovery of your sense of humor being a good first step."
Vindicated
By the early 1990s several of Dreyfus' radical opinions had become mainstream.
Failed predictions. As Dreyfus had foreseen, the grandiose
predictions of early AI researchers failed to come true. Fully
intelligent machines (now known as "
strong AI") did not appear in the mid-1970s as predicted.
HAL 9000 (whose capabilities for natural language, perception and problem solving were based on the advice and opinions of
Marvin Minsky) did not appear in the year 2001. "AI researchers", writes Nicolas Fearn, "clearly have some explaining to do."
Today researchers are far more reluctant to make the kind of
predictions that were made in the early days. (Although some futurists,
such as
Ray Kurzweil, are still given to the same kind of optimism.)
The biological assumption, although common in the forties and early fifties, was no longer assumed by most AI researchers by the time Dreyfus published
What Computers Can't Do. Although many still argue that it is essential to reverse-engineer the brain by simulating the action of neurons (such as
Ray Kurzweil or
Jeff Hawkins),
they don't assume that neurons are essentially digital, but rather that
the action of analog neurons can be simulated by digital machines to a
reasonable level of accuracy. (
Alan Turing had made this same observation as early as 1950.)[33]
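One way to see the point: a continuous ("analog") process can be approximated digitally by taking small discrete time steps, with accuracy improving as the step shrinks. The following is a hypothetical Python sketch using a leaky-integrator toy model; the equation and parameters are illustrative, not drawn from any of the researchers named above.

```python
import math

def simulate_leaky_integrator(current, dt, tau=0.02, t_end=0.02):
    # Forward-Euler approximation of the analog dynamics
    # dv/dt = (current - v) / tau; smaller steps track the
    # continuous solution more closely.
    v = 0.0
    for _ in range(int(round(t_end / dt))):
        v += dt * (current - v) / tau
    return v

# After one time constant the exact analog solution is 1 - e^(-1).
exact = 1.0 - math.exp(-1.0)
for dt in (0.01, 0.001, 0.0001):
    approx = simulate_leaky_integrator(current=1.0, dt=dt)
    print(f"dt={dt}: digital={approx:.5f}  analog={exact:.5f}")
```

Shrinking the time step drives the digital result toward the analog one, which is the sense in which digital machines can capture analog behavior to a reasonable level of accuracy.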
The psychological assumption and
unconscious skills.
Many AI researchers have come to agree that human reasoning does not
consist primarily of high-level symbol manipulation. In fact, since
Dreyfus first published his critiques in the 60s, AI research in
general has moved away from high level symbol manipulation or "
GOFAI", towards new models that are intended to capture more of our
unconscious reasoning.
Daniel Crevier writes that by 1993, unlike 1965, AI researchers no longer made the psychological assumption, and had continued forward without it. These new "
sub-symbolic" approaches include:
- Computational intelligence paradigms, such as neural networks and evolutionary algorithms, are mostly directed at simulating unconscious reasoning (see the sketch after this list).
Dreyfus himself agrees that these sub-symbolic methods can capture the
kind of "tendencies" and "attitudes" that he considers essential for
intelligence and expertise.
- Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge.
- Robotics researchers like Hans Moravec and Rodney Brooks were among the first to realize that unconscious skills would prove to be the most difficult to reverse engineer. (See Moravec's paradox.) Brooks would spearhead a movement in the late 80s that took direct aim at the use of high-level symbols, called Nouvelle AI. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.[35]
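A single perceptron trained from examples gives a minimal taste of the sub-symbolic style: what it learns is stored as graded numeric weights, closer to the "tendencies" and "attitudes" Dreyfus describes than to explicit rules. The following is an illustrative Python sketch, not any particular research system.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    # Knowledge ends up as graded numeric weights -- leanings toward one
    # answer or another -- rather than explicitly stated symbolic rules.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - predicted
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn the OR relation purely from examples; no rule is ever written down.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
for (x1, x2), target in examples:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
print(w, b)  # small positive weights: a tendency, not a rule
```

The disposition to answer correctly emerges from repeated weight adjustments; nowhere in the trained system is a symbolic statement of the OR rule.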
Ignored
Although AI research has clearly come to agree with Dreyfus,
McCorduck writes that "my impression is that this progress has taken
place piecemeal and in response to tough given problems, and owes
nothing to Dreyfus."
The AI community, with a few exceptions, chose not to respond to
Dreyfus directly. "He's too silly to take seriously," a researcher told
Pamela McCorduck.
Marvin Minsky said of Dreyfus (and the other critics coming from
philosophy) that "they misunderstand, and should be ignored." When Dreyfus expanded
Alchemy and AI to book length and published it as
What Computers Can't Do
in 1972, no one from the AI community chose to respond (with the
exception of a few critical reviews). McCorduck asks "If Dreyfus is so
wrong-headed, why haven't the artificial intelligence people made more
effort to contradict him?"
Part of the problem was the
kind of philosophy that Dreyfus used in his critique. Dreyfus was an expert in
modern European philosophers (like
Heidegger and
Merleau-Ponty), as Pamela McCorduck points out.
AI researchers of the 1960s, by contrast, based their understanding of
the human mind on engineering principles and efficient problem solving
techniques related to
management science. On a fundamental level, they spoke a different language.
Edward Feigenbaum complained, "What does he offer us?
Phenomenology! That ball of fluff. That cotton candy!"[39]
In 1965, there was simply too wide a gap between European philosophy and
artificial intelligence, a gap that has since been filled by
cognitive science,
connectionism and
robotics
research. It would take many years before artificial intelligence
researchers were able to address the issues that were important to
continental philosophy, such as
situatedness,
embodiment,
perception and
gestalt.
Another problem was that he claimed (or seemed to claim) that AI would
never be able to capture the human ability to understand context, situation or purpose in the form of rules. But (as
Peter Norvig and
Stuart Russell
would later explain), an argument of this form cannot be won: just
because one cannot imagine such rules, this does not mean that no such
rules exist. They quote
Alan Turing's answer to all arguments similar to Dreyfus':
"we cannot so easily convince ourselves of the absence of complete
laws of behaviour ... The only way we know of for finding such laws is
scientific observation, and we certainly know of no circumstances under
which we could say, 'We have searched enough. There are no such laws.'"[40]
Dreyfus did not anticipate that AI researchers would realize their
mistake and begin to work towards new solutions, moving away from the
symbolic methods that Dreyfus criticized. In 1965, he did not imagine
that such programs would one day be created, so he claimed AI was
impossible. In 1965, AI researchers did not imagine that such programs
were necessary, so they claimed AI was almost complete. Both were wrong.
A more serious issue was the impression that Dreyfus' critique was
incorrigibly hostile. McCorduck writes "His derisiveness has been so
provoking that he has estranged anyone he might have enlightened. And
that's a pity."
Daniel Crevier writes that "time has proven the accuracy and
perceptiveness of some of Dreyfus's comments. Had he formulated them
less aggressively, constructive actions they suggested might have been
taken much earlier."