REVIEW ARTICLES
- Marc Angenot. Utopia as a State Novel (Pierre-François Moreau. Le Récit utopique, droit naturel et roman de l'état)
- Dagmar Barnouw. "Melancholy Here Now": Artificial and Alien Intelligence (John Haugeland, ed. Mind Design: Philosophy, Psychology and Artificial Intelligence; Douglas R. Hofstadter and Daniel Dennett, eds. The Mind's I: Fantasies and Reflections on Self and Soul)
- Daniel Gerould. On Soviet Science Fiction (Jacqueline Lahana. Les mondes parallèles de la science-fiction soviétique; Leonid Heller. De la science-fiction soviétique: par delà le dogme, un univers)
BOOKS IN REVIEW
- Aldiss' Essays (Casey Fredericks)
- Hassler's Comic Tones in Science Fiction (David N. Samuelson)
- Greenland's Entropy Exhibition (Kathleen L. Spencer)
- "Index Learning": The Le Guin Bibliography (James W. Bittner)
- Five from the Borgo Press (Science Fiction Voices 4: Interviews with Science Fiction Writers. Conducted by Jeffrey M. Elliot; Science Fiction Voices 5: Interviews with Science Fiction Writers. Conducted by Darrell Schweitzer; Brian M. Stableford. Masters of Science Fiction: Essays on Six Science Fiction Authors; The Future of the Space Program. Large Corporations and Society: Discussions with 22 Science-Fiction Writers. Conducted by Jeffrey M. Elliot; David Mogen. Wilderness Visions: Science Fiction Westerns, Vol. 1) (Peter Fitting)
REVIEW ARTICLES
Marc Angenot
Utopia as a State Novel
Pierre-François Moreau. Le Récit utopique, droit naturel et roman de l'état. Paris: Presses universitaires de France ("Pratiques théoriques" Series),
1982. 142pp. FF. 58.00.
P.F. Moreau's monograph is the latest attempt at providing an encompassing paradigm for
classical utopias as a literary or philosophical genre. It discusses a number of
relatively new hypotheses about a tradition that to Moreau's way of thinking extends from
Thomas More up to, but not including, L.S. Mercier (whose
L'an 2440, by
transposing the ideal State into the future, radically modifies the genre: see Moreau, p.
7). I was quite surprised to find M. Moreau declaring at the outset that although there
are several histories of the utopian tradition, his project is original in attempting to delineate the "laws of the genre" (p. 8). He never quotes or discusses Berneri, Buber, Cioranescu, Elliott, Krey, Manuel, Morton, Negley, Schwonke, Suvin, or Villgradter; nor does he seem to be aware of the theories and concepts of Beauchamp, Biesterfeld, Dupont, Gove, Philmus, and others; and it is quite difficult to decide
whether he knows of any of them. This is not to say that I would expect him to refer to
them extensively; but there are moments where his own hypotheses do call for a discussion
of their predecessors--a discussion that M. Moreau never provides.
His genological paradigm is based not on literary or rhetorical features but on a
certain "juridical anthropology," according to which Utopia from the Renaissance
on is mainly a fictional means for formulating and illustrating the emergent basic
concepts of
Natural Law.
In formal terms, M. Moreau remarks that while utopian
writings have as their central textual element a conjectural narrative about a better
community, most classical utopias (and especially More's) in fact usually consist of three
interrelated discourses: a critical discourse about the present state of affairs in
the author's "empirical" society, a narrative discourse depicting a
"better" society, and a justificative discourse explaining under what conditions
such a better social organization would be possible. The utopographer is never a simple
philosophical polemicist denouncing corrupt practices and injustices. His constant mandate
is to trace the diversity of apparent societal miseries and misfortunes back to an
axiomatic but unapparent evil which is taken to be the basic cause of all social
wrongs (pp. 13-14). It is from that point of view that the Utopian narrative reorganizes
previous discursive materials found in such ancient writings as Plato's Republic, Hesiod's
and others' poetic visions of a Golden Age, Christian concepts and Millenarist or other
unorthodox conjectures, the Blessed Islands of folkloric tradition and the myths of
Cockayne, and so on.
M. Moreau proceeds by analyzing (pp. 52-53) certain constant topoi in utopian
narratives: the closure of the utopian community (which is, almost exclusively, an
island); the quarantine imposed on travelers; the traditional principle: "Only a few
laws but effective and clear"; the (Christian) critique of worldliness and luxury; a
frequent rational tolerance for suicide and euthanasia, etc. He concludes this part of his
book by noting that "as distinguished from most previous political and moral
discourses, Utopia thinks in terms of techniques of societal management," albeit
without always indicating the practical means for such management (p. 54).
Acknowledging--as his subtitle indicates--that Utopias can rightly be termed
"State Novels," Moreau sees them as contributing to the building up of an
"anthropology of equality" (p. 54), for in them societal insulation serves to
erase any disparities among citizens.
Moreau in his fourth chapter deals in a rather subtle way with utopia's linguistic (in
fact, hyper-rational) fantasies. Here he focuses on the descriptions of social management
offered by Utopographers from More to Foigny and Veiras as they evince some of the major
political ideas to come out of the Renaissance. "Do Utopian States," asks
Moreau, "differ so much from what contemporary 'empirical' States strived to
achieve," including total ideological control and the "Surveillance and
Punishment" outlined by Michel Foucault as the implicit motto of the Modern State?
Furthermore, to reinforce this as a rhetorical question, Moreau claims that classical
utopists were "quite often members of State apparatus" (p. 99) (but is that
true? what about Campanella, Foigny, Veiras, Tiphaigne de la Roche: hardly men of power or
bureaucrats as far as I can see), and that as such they "contented themselves with
fictionally extending somewhat actual possibilities by supposing that [their State] could
be improved and at the same time extricated from any material obstacle" (pp. 99-100).
Moreau's conclusions deal with the Utopian tradition as part of, and in dialogue with, the emergent and evolving modern concept of Natural Law--specifically, with that tradition which based the concept of law and legitimacy on the idea of the equality of Citizens as legal subjects. He shows in a rather convincing way that Utopia's main interest lies with those classical doctrinal jurists who from Ockham to Kant built up a
legal philosophy whose originality consisted precisely in conceiving of the political
realm in terms of the categories of private law (pp. 132-33). Indeed,
Moreau's entire eighth chapter, relating as it does the utopian tradition to the
defense and illustration of the "Sujet
de droit" (an idea which led to the American Declaration
of Independence in 1776 and the French Declaration of Rights in 1789) is
extremely interesting and telling. Certain polemics in the 18th century between jurists
and utopists seem to show that what was termed the chimerical aspect of most utopias was
not found in the concept of a perfectly balanced and egalitarian state, but in the
abstract concept of a perfect citizen, as a purely juridical entity, which their concept
of the State presupposes. Hence the divergent theories about private property, conceived
either as the source of inequalities (as in most utopias) or as a justifiable means of
avoiding dissension and discord among individuals (pp. 138 ff.). "Utopographers
suggest that one can--and must at a given level--set aside private property (and no proponent
of Natural Law would object to this approach); but precisely at that level their point of
view remains axiomatically limited," and this in turn led them to ignore those
questions that allow jurists to reinscribe the individual's "peculiarities"
within the formal frame of the "Sujet de Droit" (see p. 142). M. Moreau's thesis here is certainly quite insightful and obliges us to reconsider the
way we approach the "utopian" aspects of classical utopias, understood as
critical and potentially liberating frames of thinking. Yet M. Moreau does not discuss
those aspects of the genre, and in particular he does not consider the role of radical
critique that the likes of Veiras, Tyssot, and others conceived (possibly self-delusively)
for their work; nor, for that matter, does he give any attention to present-day utopists
who put emphasis on the critical and liberatory possibilities of utopia. His book is therefore at once suggestive and perspicacious, yet incomplete and one-sided.
Dagmar Barnouw
"Melancholy Here Now": Artificial and Alien
Intelligence
John Haugeland, ed. Mind Design: Philosophy, Psychology and Artificial Intelligence. Cambridge, MA: MIT Press, 1981. xii + 368pp. $10.00 paper.
Douglas R. Hofstadter and Daniel Dennett, eds. The Mind's I: Fantasies and Reflections on Self and Soul. NY: Basic Books, 1981. 501pp. $15.00.
In Knowledge and the Flow of Information, the philosopher Fred I. Dretske
argues that sensory experience (perception) should be understood as information in analog
form and cognition as information extracted from perception and converted to digital form.
It is an argument that will make researchers in the field of artificial intelligence (AI)
happy; for Dretske, like them, reasons abstractly from situations rigorously distanced
from real-world situations. At one point he uses an example for concept-formation that
appears in Hilary Putnam's "The Meaning of 'Meaning'"1
and comments: "The example has an unfortunate science fiction flavor, but the lesson
it carries is practical enough."2 Without lamenting
the philosopher's matter-of-fact dismissal of SF, I want to draw attention to what Dretske
overlooks: (1) that abstracting from a real-world situation, as it is done imaginatively
in SF, can indeed be a helpful tool in epistemology; and--what is still more important
in the context of the following remarks--(2) that by using imaginative SF extrapolations
derived from questions raised by AI research, certain kinds of fictional projections of
man-made, man-like beings can be accepted as significant contributions to the arguments
of the more skeptical critics of AI.
Man-made beings must have man-made minds; if they are man-like, their minds must be
highly complex. We might feel tempted to try to gain access to a (fictitious) man-made
being by asking "what is it (literally) like for a man-made mind to be a man-made
mind?" This, however, seems a very problematic question on several accounts, and the
fact that as far as we know--or at least can agree--there are no man-made minds yet is
just one, and by no means the most important, of them. We know, as individuals, what it
feels like to be us, our mind, but we do not know what a human mind is, and even less (if
that be logically possible) what it is like to be a human mind. The subjectivity of
consciousness and experience seems to be shielded by impenetrable screens against the
question "what is it like to be... ?" and it is therefore apparently futile to
ask "what is it like to be a computer?" Nevertheless, the concept of artificial
intelligence contains that question, and AI research cannot avoid it.
This state of affairs has rather far-reaching implications for central assumptions
underlying most of the AI research today, and SF could help to understand them more
clearly and critically. Fiction has always tried to penetrate the insulating screen of
subjectivity; it has a vocation for imagining an Other, indeed, it has thrived on
examining what it is like to be the Other. SF texts asking "what is it like to be a
man-made mind?" can explore artificial or alien intelligence, or artificial as alien
intelligence, from a perspective that re-directs the inquiry to human perception and
cognition, to questions of human intelligence, of identity, of the relation of Self and
Other. Dealing with minds in their worlds and with their possible or impossible
interaction, SF can address "what is it like to be" in such a way that an
awareness of approximation is communicated. It can offer, that is, an unassertive model
of explorative speculation and thus can shed some light on problems evidently unsolvable
as yet.
Now, what are these unsolvable problems in AI research; or rather, and more precisely,
what are they perceived--or not perceived--to be? John Haugeland starts his introduction
to Mind Design with Hobbes's remark that "reasoning is but reckoning,"
and comments that three centuries later, with the development of electronic computers,
this idea has finally caught on and has become "the single most important theoretical
hypothesis in psychology (and several allied disciplines)." Making this connection, Haugeland might have added that Hobbes, following Aristotle, intended to clarify certain
structural properties of reasoning, but did not, of course, make any claims as to a direct
equation between mind (thought, consciousness) and computation. The central assumption
of AI research, or Cognitivism, is (to use Haugeland's description) that at a suitable
level of abstraction theories of natural intelligence should share the basic form with
theories that explain highly sophisticated computer systems (p. 2). Such an assumption
will seem useful to the AI researcher insisting on a pure (scientific) form of
psychological research, but what must be and has been debated is precisely the usefulness
of this concept of scientificity in psychology. As Haugeland admits, it is here that
questions of meaningfulness and significance arise, and he deals with them specifically in
his own contribution to the collective volume Mind Design, "The Nature and
Plausibility of Cognitivism."
Up to a point, meaning as making sense can be discussed in terms of the relationship
between a human chess-player and a chess-playing machine--a model much-beloved by AI
researchers. The decisions made by the chess-playing "intentional black box"
make sense to the human chess-player if they meet specific "cogency conditions"
(that is, conditions stated for making sense--in this case those relevant to the game of
chess). While Haugeland admits that meaningfulness is not an intrinsic property of
behavior that can be observed or measured, he does want to define it as a
"characteristic that can be attributed in an empirically justified interpretation, if
the behavior is part of an overall pattern that 'makes sense'" (p. 258)--that is,
satisfies specified cogency conditions. This would support the Cognitivist, or AI, view
that mind is an information-processing system, a view claimed to enable psychology to
avoid the choice between "the supposedly disreputable method of introspection, and a
crippling confinement to purely behavioral description," as Haugeland nicely puts it
(see Mind Design, p. 261).
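Haugeland's notion of attributed meaningfulness can be caricatured in a few lines of code (a minimal sketch in Python; all names are hypothetical, and no claim is made about any actual chess program): the observer certifies that an output "makes sense" because it satisfies externally stated cogency conditions, while the check itself says nothing about what, if anything, the move means to the black box.

```python
# A toy sketch of "cogency conditions" (hypothetical names): a move
# counts as meaningful to the observer iff it satisfies externally
# stated legality rules -- here, trivial rules for a rook.

def rook_move_is_cogent(start, end, occupied):
    """A rook move 'makes sense' iff it stays on one rank or file
    and does not pass through an occupied square."""
    (r1, c1), (r2, c2) = start, end
    if r1 != r2 and c1 != c2:
        return False  # rooks move along ranks or files only
    step = ((r2 > r1) - (r2 < r1), (c2 > c1) - (c2 < c1))
    square = (r1 + step[0], c1 + step[1])
    while square != end:
        if square in occupied:
            return False  # blocked: the output violates the conditions
        square = (square[0] + step[0], square[1] + step[1])
    return True

# The observer attributes sense to the black box's output...
print(rook_move_is_cogent((0, 0), (0, 5), occupied={(3, 3)}))  # True
# ...but nothing in the check touches what the move means *to* the box.
print(rook_move_is_cogent((0, 0), (4, 5), occupied=set()))     # False
```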
It is true that the possibility of not being able to escape such a choice has disturbed
psychologists. However, Cognitivism, with its own largely unreflected version of
behaviorism, is not the solution, but instead is an addition to the problem. First, there
is the epistemological flaw of circularity: the AI researcher postulates a human
intentional black box instantiated in the nervous system, and then argues happily for the
systemic nature of the mind as the very information-processing system which the black-box
model presupposes it is (see Mind Design, p. 265). Second, there are the very
serious problems caused by the illegitimate segregation of cognitive and non-cognitive
states--i.e., by Cognitivism's inability to deal with phenomena like moods, skills,
understanding. On a very simple level, the human chess-player, if supposed to fit the
model, must not be happy, unhappy, tense, relaxed (not to even mention physical
sensations). As Haugeland admits, a mood like melancholy would have "to accompany and
infect every other input; it would, in fact, change the meanings of all of them" (Mind
Design, p. 271). Moods can neither be divorced from the explanation of cognition, nor
can they be incorporated into a Cognitivist explanation of mental processes. By the same
token, skillful activity is quicker than thought: think of athletes, performing artists,
but also of something as commonplace as typing or driving a car. Very good chess-players
demonstrate a "problem conception" in preselecting the moves they want to think
about and thus are able to beat sophisticated chess-playing machines, even though these
can include the totality of possible moves in their decision-making. Finally,
understanding and insight cannot be explained in terms of the cogency conditions that are
relied upon to explain why the outputs of the mind as information-processing system make
sense, because such explanation would preclude any new way of understanding--e.g., any
progress in scientific thought, including that of AI research itself.
So far, the kind, depth, and scope of the questions asked in Cognitivist research
concerned with artificial intelligence have been limited by a lack of imagination in
regard to the Other. Assuming too easily (because they started with the concept of mind as
information-processing system) that a man-made mind would be like a human mind, AI
researchers have neglected to consider "what is it like to be a human mind?" as
well as "what is it like to be a man-made mind?" The concept of artificial
intelligence has been restrictive precisely because it has been informed so centrally by
the illusion that everything could be taken into account. The illusion has had to be
propped up by the views that the subjective--and historical--dimension of consciousness
is negligible and the history of language usage a contamination of an otherwise perfect
(i.e., perfectly constructible) system. Fiction, with its origin in the story as that
which is told as an example for an occasion and on an occasion, with its balance between
referentiality and self-sufficiency, its acknowledgment of the subjectivity and
historicity of all human experience, seems--and, in fact, has been--a useful medium in
which the fallacies of equating mind with information-processing system can be explored.
Consider for instance the adventures of Klapaucius and Trurl, the epic heroes of Lem's Cyberiad.
Indeed, one is tempted to suggest The Cyberiad as recommended reading for AI
researchers. Many of them are as guilty of making premature assumptions which will block
their understanding of the real problems as are Lem's over-eager, well-meaning robots.
This is clear from Drew McDermott's discussion of certain flaws rather common in current
AI research.3 So-called "natural language
interfaces," McDermott complains, are taken for granted, albeit none have been
written and natural language is a notoriously difficult problem in AI; any tentative
approach to simulating ways of understanding very simple stories becomes a
"story-understanding module" ready to be produced whenever the question of
understanding arises; and co-operation of modules becomes "talking," intimating
the idea of natural communication. But while preaching caution, McDermott still believes
in the potential of AI research and thinks that understanding natural language would be
the "most fascinating and important research goal" in the long run (Mind
Design, p. 156). He admits, however, that we are presently far from that goal.
And we may never get there, if AI researchers continue to share the mental sets of
Trurl and Klapaucius. Tinkering with their computers, with their proclivity for instant
solutions and their optimistic anticipations, the two robots appear all too human in their
attitudes towards the problems they want to solve. In their readiness to abstract from the
world within which they want their machines to act, they are also close to the
psychological vacuity of AI. Experience for them is cumulative--in the sense of a stacking
of concepts, of micro-worlds, which they insist can be expanded in a linear fashion into
meaningful statements about the complex world of daily experience. Logically they are
extremely fussy; psychologically they are happily fuzzy; and it is this particular
mixture, familiar to observers of methodological single-mindedness in the social sciences,
which creates one pitfall after the other.
Trurl and Klapaucius have no difficulties with their attitudes towards their machines
(software and hardware) as extensions of their own unexamined desires, hang-ups, and
fantasies. They say "GOAL" and feel the power at their fingertips; they order
"UNDERSTAND" without having any notion of what it is that understanding
requires; and, never trying to figure it out, they go round and round in absurd
syllogistic circles. If everything else fails, they develop whole epistemologies from
scratch, pointing to their successful input of "dragons of probability" (compare
McDermott, in Mind Design, pp. 145-49).
Klapaucius and Trurl give orders to their machines and expect them to behave
accordingly. But thanks to the nature of their orders, the output frequently does not make
any sense. They are unable to understand how that can be, because they see the
machines as fully dependent on their--the programmers'--logic, i.e., as following the
programmers' orders in "mindless," "robotlike" fashion. In the field
of AI research, the giving of orders to complex systems which respond to us and make sense
creates the illusion that the machine is more like than unlike ourselves. This illusion
may push us to accept prematurely the notion of a man-made mind--prematurely, that is,
because we then jump to conclusions about what mind is. Now, Klapaucius and Trurl,
inhabitants of a fictional world, are man-made minds. Their minds and the minds of their
machines are similar, and they do not know much about either of them. However, in their
eagerness to give orders, they are ready to predict the behavior of the machine precisely
because it is a machine and ready to serve them, in any circumstance, for any purpose.
Lem, then, explores the implications of certain attitudes towards the concept of mind by
making up SF stories that show what happens when the (man-made) protagonists, dealing with
problems of the mind, consistently neglect to ask, "What is it like to be a
(man-made) mind?"
Micro-worlds are the favorite domains of AI researchers, including Lem's robot
protagonists; but, unlike sub-worlds, they do not include the everyday shared world and
cannot, therefore, be used to compose larger systems. They are rather, as Dreyfus points
out in his "From Micro-Worlds to Knowledge Representation," "local
elaborations of a whole which they presuppose."4
The
Natural Language Understanding Program that Winograd developed some time ago at MIT, for instance, works with a simulated robotic arm which moves a set of blocks of various
shapes and permits a person to engage in a dialogue with the computer about the block
world. Winograd called this "language" SHRDLU after Fredric Brown's 1942 SF
story "Etaoin Shrdlu," in which a linotype machine comes to life after it has
been animated by an alien technician from an advanced civilization. The AI dialogue is as
remote from any truly natural language as are the alien-intelligence-inspired utterances
coming from the linotype machine.
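The remoteness in question is easy to exhibit. What follows is a deliberately crude block-world sketch in Python (hypothetical names throughout; this is not Winograd's program, only a caricature of the genre): the entire "world" is one small table of facts, and the entire "language" two command patterns, so that anything outside the micro-world simply bounces off.

```python
# A caricature of a block-world dialogue (hypothetical names; not
# Winograd's SHRDLU). The "world" is a dictionary mapping each block
# to whatever it rests on; the "language" is two fixed patterns.

world = {"red-block": "table", "green-pyramid": "red-block", "blue-block": "table"}

def dialogue(command):
    words = command.lower().split()
    if words[:1] == ["put"] and "on" in words:
        thing, support = words[1], words[words.index("on") + 1]
        if thing in world.values():
            return f"I can't move the {thing}: something is on top of it."
        world[thing] = support
        return "OK."
    if words[:2] == ["where", "is"]:
        thing = words[2]
        return f"The {thing} is on the {world.get(thing, 'unknown')}."
    return "I don't understand."  # everything beyond the micro-world fails

print(dialogue("where is green-pyramid"))  # The green-pyramid is on the red-block.
print(dialogue("put red-block on table"))  # I can't move the red-block: ...
print(dialogue("is the sky blue today"))   # I don't understand.
```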
This last example shows that some AI researchers and AI enthusiasts are indeed
sufficiently open-minded to read SF and be intrigued, stimulated, even influenced by its
extrapolations. The question, then, is not whether they accept potentially helpful
contributions to the posing of certain problems from SF, but which texts they perceive as
suggestive or illuminating and how they use them (see below). Winograd, in naming his
simulated dialogue after the literally alien-inspired Shrdlu, may have had (private,
playful) motives we cannot guess; but he must have seen that the connection thus
established pointed to a specifically limited concept of language which he must
nevertheless have thought of as representing a possible step towards the understanding of
natural language. The SF text, for him, meant an affirmation of a certain (problematic)
approach to the definition of artificial intelligence.
In the 1940s, SF was indeed interested in a rather precipitous imposition of
micro-worlds on the real world--and for obvious reasons, given the phenomena of mental
control and aggression in a wartime context. Theodore Sturgeon, for example, in his 1944 Killdozer,
has a bulldozer on a tiny Pacific island attack, under the influence of an alien
intelligence, the men stationed there. The alien intelligence comes from a civilization
where machines have become masters of men. Operating through the machine, the alien
intelligence becomes artificial intelligence. (Significantly, the 1974 TV version of the
story omits the presence of the alien intelligence altogether: the bulldozer has
artificial intelligence and is therefore all the more dangerous and evil because close to
human. Similarly, Steven Spielberg's 1971 film Duel has a driverless truck pursue
a motorist with cunning malice; again, the truck must have artificial intelligence. In the
1970s, a period of rapid developments in "high technology" and of social unrest,
the machine itself, hardware and software combined, is seen as threatening.)
From the 1950s on, trying to come to terms with the problem of gravely irrational
social behavior in the recent past and also in the present, SF writers explored
possibilities of positive mental control in socio-political contexts. In the process, they
became interested in projections of robots as perfectly rational beings, therefore
superhuman and ideally suited to lead or control humans. This concept of perfect, because
perfectly rational, control has amusing parallels in AI, where it has produced some
psychologically barren concepts (just as, in SF, it has resulted in mediocre
fiction--e.g., Asimov's I, Robot stories [1950]). Rather more stimulating and
imaginative have been projections of relationships between man and machine where the
perfectly rational machine is influenced by certain irrational or
"transrational" properties of the human mind. Roger Zelazny's "For Breath I
Tarry" (1966) is one instance, though its vision of a potential intellectual and
emotional intimacy between man and machine is far too simplistic to be satisfying. The
story's protagonist, the computer Frost, monitoring and controlling the northern
hemisphere long after mankind has died out, becomes fascinated by human relics. Provoked
by a Mephisto-like agent, Frost tries to understand what it is like to be human, to be a
human mind. He is told that the difference is experience, subjective consciousness. The
machine can describe, as man cannot, all the details of a process; but man can experience
it, relate it to the past, to a life lived, as the machine cannot. Zelazny has Frost go
and brood over works of art and finally achieve intuitive understanding. Man and machine,
then, merge; but this is purely wishful thinking....
Conscious and imaginative empathy for the Other, the experience of the Other--this is
not a matter of stirring a little rosy emotion into grey reasoning, but rather a question
of the thought process itself. Zelazny's story touches upon important issues, but it does
not go into them with any profundity.
How do humans think? In his analysis of thought-structures, Marvin Minsky develops a
concept of "frame" which accommodates a more complex approach to knowledge
representation than micro-world models do (see Mind Design, pp. 95-128). But
Dreyfus rightly points out that the "frame" concept has its difficulties also.
The subject matter of science consists of exact symbolic descriptions, but the thought
processes of scientists are another matter (see Mind Design, p. 186). Like
micro-worlds, the "frame" is too divorced from the shared context, the world in
which scientists live and have learned to think. As Frost cannot reconstruct the human
world, he ought not to be able to reconstruct--that is, understand--human thought,
consciousness, whose historical, processive dimension he cannot grasp. Mind and world are
constituted in mutual dependence, and Frost can only get around this mutuality by a
fictional leap of faith. Still, Zelazny's SF text, even if it presents an impossible
solution, does suggest that there is a problem which has to be considered.
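Minsky's "frame," for what it is worth, is easy enough to render as a data structure (a minimal sketch with hypothetical names, not Minsky's own notation): slots carry default expectations that observation may override, and whatever has no slot simply does not exist for the system--which is one way of restating Dreyfus's objection.

```python
# A minimal sketch of a Minsky-style frame (hypothetical names): slots
# with default values, overridden by whatever is actually observed.

ROOM_FRAME = {"walls": 4, "ceiling": "present", "windows": 2, "door": "present"}

def instantiate(frame, observed):
    """Fill each slot from observation, falling back on the default."""
    return {slot: observed.get(slot, default) for slot, default in frame.items()}

cell = instantiate(ROOM_FRAME, {"windows": 0})
print(cell)  # {'walls': 4, 'ceiling': 'present', 'windows': 0, 'door': 'present'}
# The occupant's mood, the room's history, the world outside the slots:
# none of it is representable, because none of it was given a slot.
```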
Knowledge representation, then, requires an understanding of experience, of
consciousness; and neither Schank's scripts presenting chains of conceptualizations that
describe the normal sequence of events in a familiar situation--e.g., birthday party,
football game, classroom, restaurant--nor Winograd and Bobrow's new KRL program
(knowledge-representation language) is convincing in this respect. In the case of Schank's
scripts, the computer, if it is to understand a simple situation in a restaurant, must
understand everything that people normally know about it. But its micro-world turns out to
be much too isolated, and for qualitative reasons that go beyond mere "bugs" in
the "software." It is unable, for instance, to distinguish between degrees of
deviation from the norm--between, say, the guest's order not being filled because the chef
has run out of most of the food listed on the menu as opposed to the guest's devouring the
chef (see Mind Design, p. 190, n. 3). On the other hand, the Stanford group's
KRL, which does demonstrate an awareness of the importance of current context, foci, and
goals, of the context-dependence of reasoning and the structure of memory, still has to
solve the problem of how the computer is to determine the current context. And this proves
to be an undertaking which entails an almost infinite regress. Each situation the computer
could be in would have to be treated as an object with its prototypical description; the
specific situation would have to be recognized in its context, and that would determine
what goals and foci were relevant. By contrast, human beings are gradually trained into
their cultural situation. They never come to it from the outside. Having been born into a
world, they are always already in it.
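The restaurant case can be put in miniature (hypothetical names; a caricature of a Schank-style script, not his actual system): the script is a fixed chain of event types, and every departure from it is flagged with exactly the same weight, so the system cannot tell a minor mishap from an atrocity.

```python
# A caricature of a script for the restaurant situation (hypothetical
# names): a fixed chain of conceptualizations for the normal sequence.

RESTAURANT_SCRIPT = ["enter", "be seated", "order", "eat", "pay", "leave"]

def deviations(story):
    """Flag every event the script does not anticipate -- all equally."""
    return [event for event in story if event not in RESTAURANT_SCRIPT]

# Both stories yield one flagged deviation of exactly the same weight:
print(deviations(["enter", "be seated", "order", "chef out of food", "leave"]))
print(deviations(["enter", "be seated", "order", "devour the chef", "leave"]))
# Nothing in the representation registers that the second deviation is
# of an altogether different order from the first.
```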
A computer program like KRL, even if it could stereotypically represent all human
knowledge, including all possible types of human (social) situations, would have to
represent them from the outside, as if from the point of view of the proverbial Martian or
of a god (see Mind Design, p. 198). To be outside in this sense (i.e., distant)
seemed desirable to Asimov, who thought that such beings--a robot, a god--could act more
rationally, knowingly, and objectively than insiders in the muddled world of human
affairs.5 Zelazny, by contrast, has the outsider, the
computer Frost, succeed in its desire to overcome that psychological distance while
miraculously maintaining its balance between reason and emotion. However, the Strugatsky
brothers' skeptical handling of the same problem in Hard to Be a God (1974) is
more enlightening still. Seen as a god by the people he has been sent to help, the
emissary from a more advanced civilization finds ultimately that he cannot help them
because he has come to rely too much on the simulations of their socio-psychological
conflicts and physical existence that the computational machine hidden in his headband
provides him with. Meant to give him greater objectivity, knowledge, and rationality, his
distance has divorced him inexorably from their humanity, which accordingly is no longer
his. Their human sense of situation at any given moment is informed and determined by
changing moods, current concerns, projects, long-range interpretations of self,
sensory-motor skills, cultural backgrounds, and expectations; and the interrelation and
interdependence of all these things is such that they have to be shared to be understood.
Natural language may indeed be the most important medium in which such sharing takes
place. The editors--or rather, "composers"--of The Mind's I may have
had this in mind when they brought together texts presenting very different modes of
discourse--from Turing's classic "Computing Machinery and Intelligence" (Mind,
1950), through Borges's "Borges and I," Lem's "Non Serviam," and
Thomas Nagel's "What Is It Like to Be a Bat?" (Philosophical Review, 1974),
to John R. Searle's "Minds, Brains and Programs" (Behavioral and Brain
Sciences, 1980). The Mind's I is thus a colorful mixture of fiction,
philosophy, psychology, and biology that will no doubt stimulate further explorations of
the question of consciousness. However, the authors' "reflections" on the
selected texts--the "fantasies" promised in the title?--are often not very
useful or stimulating. This is especially true of Hofstadter's, which tend to
self-indulgence, even cuteness, as he picks at the ideas of other writers (e.g., Lem)
without real intellectual curiosity and without attempting to engage in a serious
intellectual discussion. The result is that Mind Design, Haugeland's anthology of
straightforward pieces on AI research, is in many ways more helpful and provocative
reading for those concerned with SF than the more literary Mind's I. The most
interesting aspect of the latter--at least for students of SF looking into questions
associated with alien, artificial, and human intelligence--is perhaps Hofstadter's and
Dennett's quarrel with Searle.
Searle's "Minds, Brains and Programs" has provoked a great number of
comments: 28 appeared in the 1980 issue of The Behavioral and Brain Sciences that
originally carried his essay, and among them were those of Haugeland, Dennett, Hofstadter,
and Schank. Hofstadter, as he does in his "Reflections," addressed himself to the
concept of "intentionality" which Searle uses as a way of getting around a
"philosophical" problem--viz., how does or can mind, soul, "I" come
out of brain, cells, atoms?6 But such "getting
around"--especially in the present context--means useful clarification.
Intentionality is for Searle "a biological phenomenon, and it is as likely to be as
causally dependent on the specific biochemistry of its origins as lactation,
photosynthesis, or any other biological phenomena."7
Searle in "Minds, Brains and Programs" does not question that a machine can
think. Rather, it is clear to him that only a machine can think, and indeed only very
special kinds of machines, namely brains or machines that had the same causal powers as
brains. And that is the main reason strong AI has had little to tell us about thinking,
since it has nothing to tell us about machines. By its own definition it is about
programs, and programs are not machines. (Mind's I, p. 372)
Now Haugeland, in his introduction to Mind Design, suggests that a project at
IBM to "wire and program an intelligent robot" would amount to AI, whereas
"a project at DuPont to brew and mold a synthetic-organic android probably would
not" (p. 2). On the grounds of such a distinction, he implicitly rejects Searle's
biological basis for intentionality, and thus Searle's argument that AI, by claiming to be
able to tell us about thinking, is indeed caught in what Hofstadter identifies as
"conceptual confusion." The important question for Haugeland is not the machine
and its program--as it is for Searle when he uses the example of the perfect simulation of
a Chinese speaker (see below). It is, rather, the question whether a program could be
designed to simulate the understanding of Chinese as a natural language--and if not, why
not. This, however, does not take him beyond the reservations about AI he stated in his
contribution to Mind Design, reservations which touch on, but do not pursue,
Searle's insistence on the biological (and notably, the neuro-physiological) facts about
the brain.
The conceptual confusion in AI as Searle discusses it in "Minds, Brains and
Programs" and again in his review of The Mind's I,8
prevents AI researchers from understanding the nature of intelligence and, along with it,
that of natural language. There can be, he claims--and no doubt he is right--no meaningful
access to man-made mind or natural mind as long as the mind is seen as program and the
Turing Test as the criterion of the mental. Hofstadter, on the contrary, says in
conclusion to his "Reflections" on Searle's piece, that minds exist in brains
and may come to exist in programmed machines. If and when such machines come about, their
causal powers will derive not from the substance they are made of, but from their design
and the programs that run in them. And the way we will know they have those causal powers
is by talking to them and listening carefully to what they have to say. (Mind's I, p.
382)
This, however, is just an affirmation of the Turing Test. What the machines say, Searle
argues, is only an echo of what we put into them, like the responses of the Anglophone who,
ignorant of Chinese, is locked in a room with boxes of rules in English instructing him in
how to match ideograms. From outside the room, more Chinese symbols are passed to the
subject (whom Searle likens to the computer): and, by following the instructions (i.e.,
the "program"), he passes back the right symbols, the answers. "But,"
Searle argues, "I still don't understand a word of Chinese and neither does any other
digital computer because all the computer has is what I have: a formal program that
attaches no meaning, interpretation, or content to any of the symbols."9
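Searle's thought experiment is, at bottom, a claim about purely formal symbol manipulation, and it can be staged in a few lines (a hypothetical rule book; no claim about any actual program): the procedure passes back the "right" symbols without attaching meaning, interpretation, or content to any of them.

```python
# A miniature Chinese Room (hypothetical rule book): the subject matches
# uninterpreted symbols against formal rules and passes back whatever
# the rules prescribe, understanding none of it.

RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "天是什么颜色": "蓝色",    # "What color is the sky?" -> "Blue"
}

def room(symbols_passed_in):
    # Purely syntactic lookup: the keys and answers could be any
    # uninterpreted shapes whatsoever; the procedure would not change.
    return RULE_BOOK.get(symbols_passed_in, "请再说一遍")  # "please say it again"

print(room("你好吗"))  # the 'right' symbols come back -- and nothing is understood
```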
AI researchers like Schank, in their responses to Searle's criticism, expressed their
admiration for the program of instructions: "To write them would require a great
understanding of the nature of language. Such rules would satisfy many of the questions
of linguistics, psychology and AI."10 Whether they do
the latter depends, after all, on the questions. They cannot, however, incorporate an
understanding of language, because the nature of natural language, like the nature of
intelligence, cannot be separated from man's biological and social history. For that
reason, to accuse Searle of not understanding his own example, the "systems
reply,"11 is not to the point of the
questions he raises. It is most probably true that Searle does not sufficiently appreciate
the difficulties involved in writing programs and making them work. But he does not need
to in order to demonstrate convincingly the central fallacy of the myth of the computer as
(man-made) mind.
A mind is a biological-historical phenomenon, and the question "what is a
mind" has to be raised in the context of the real world of everyday experience. The
extrapolations from such a world as they figure in SF are different in kind from the
abstractions used in AI research, because the former, by necessity, must retain something
of the biological-historical quality of experience. From H.G. Wells's The War of the
Worlds (1898) to Philip K. Dick's Do Androids Dream of Electric Sheep? (1968)
or his A Scanner Darkly (1977), Ian Watson's The Embedding (1973), and
such stories of Lem's as "The Mask" and "The Accident," those SF texts
which do so most skillfully have also been the most successful in their fictional projections of the interaction between human and artificial/alien intelligence.
Such fictions do not darkly warn of the dangers inherent in certain attitudes currently
prevalent in Al research so much as they provide complementary perspectives on the subject
of that research. By imaginatively projecting a more advanced--i.e., more intimate and
more troubled--relationship among artificial, alien, and human intelligence than AI seems
capable of conceiving of, SF can thus contribute to an understanding of the puzzle that is
mind.
NOTES
1. See Putnam, Mind, Language and Reality (NY: Cambridge
UP, 1975), pp. 215-71.
2. Dretske, Knowledge and the Flow of Information (Cambridge,
MA: MIT Press, 1981), p. 227.
3. McDermott, "Artificial Intelligence Meets Natural
Stupidity," Mind Design, pp. 143-60.
4. See Dreyfus's contribution to Mind Design, pp. 161-204,
esp. p. 170; this paper extends the reservations he expressed in his What Computers
Can't Do (NY: Harper & Row, 1972, 1979).
5. See Asimov's "The Next Hundred Years," in Social
Speculations: Visions for Our Time, ed. Richard Kostelanetz (NY: Morrow, 1971), p.
53.
6. See Hofstadter's contribution to The Behavioral and Brain
Sciences, 3 (1980):433ff., and also to Mind's I, pp. 373ff.
7. See also Searle's "The Intentionality of Intention and
Action" (1979), in Perspectives on Cognitive Science, ed. Donald A. Norman
(Norwood, NJ: Ablex Publishing Corp., 1981), pp. 207-30. There are problems with this essay
of Searle's in regard to his concept of action, but they need not concern us here.
8. Searle, "The Myth of the Computer," New York Review of Books, April 29, 1982, pp. 3-6.
9. Ibid., p. 5.
10. Schank, in The Behavioral and Brain Sciences (see n.
6 above), p. 447.
11. See Dennett, ibid., pp. 428-30.