#88 = Volume 29, Part 3 = November 2002
Shibano Takumi
“Collective Reason”: A Proposal (1971, rev. 2000)
Translated by Xavier Bensky and introduced by Tatsumi Takayuki
Shibano Takumi (1926-) has been called the father of Japanese sf because, as the
founder of Japanese science fiction’s first fanzine, Uchûjin (Cosmic Dust), it
was he who discovered and nurtured so many of the genre’s authors. Under the pen
name Kozumi Rei, he authored Hokkyoku shitii no hanran (1959, rev. 1977,
Revolt
in Polar City) as well as many short stories, and he translated numerous works
of hard science fiction, from Larry Niven’s Ringworld (1970) to John Cramer’s
Twistor (1989). As “C.R.” he also authored two monthly columns in Cosmic Dust:
“Fanzine Review” and “Eye of Space.”
Shibano’s idea that “a professional sf author should be a fan as well”
necessarily brought him into conflict with SF Magajin (SF Magazine) editor
Fukushima Masami over the constitution of fandom and “prodom.” But in discussing
the motives for the formation of the Science Fiction and Fantasy Writers of
Japan (SFWJ), Fukushima wrote this about their relationship:
Those who can see this only as a petty, short-sighted turf battle for the
leadership of Japanese sf have no appreciation for the bone-breaking struggle
Shibano and I have both waged to establish this genre. In fact the two of us
have worked to accomplish a common goal, he in his way and I in mine. (Mitô no jidai [The Untrodden Age])
Interested readers can discover further details in Shibano’s memoir
Chiri mo
tsumoreba—Uchûjin no yonjû nen shi (1997, When the Dust Settled: Forty Years of
Cosmic Dust). In the 1960s, the friction between Shibano and Fukushima, like the
debate between Yamano Koichi and Aramaki Yoshio, was an inevitable conflict in
pursuit of this “common goal,” promoting sf.
Shibano’s view of sf unfolds from his definition of the genre as “a literature
recognizing that the products of human reason separate themselves from reason
and become self-sufficient.” What Shibano identified as “the idea of sf” was a
posthumanist theory constructed from a vantage point after modernism, and his
position resonates with contemporary ideas of poststructuralism and chaos
theory. Shibano applied these theories in the columns he wrote as "C.R.," asking
a fundamental question: whether the fans who read sf were medial forms leading
to the posthuman, or perhaps already posthuman themselves. A being who could
grasp the failure of individual reason by means of individual reason itself
would be a mediator à la Arthur C. Clarke—i.e., an Overmind. But the ability to
shed that skin of reason by oneself is a quality, he argued, that belongs to the
posthuman as conceived by authors like A.E. van Vogt and Robert Heinlein. The
idea at the heart of Shibano’s theory was this: young people are naturally
receptive to sf; but those who continue to read sf as adults are people who
suffer the weight of the individual self in the real world as they reduce that
self to something infinitesimal. Considered in light of Shibano’s ideas,
Fukushima Masami’s confession four years earlier is even more interesting.
Fukushima wrote: “I feel a helplessness at the fact that while I have been so
wrapped up in SF’s bold efforts to remake reality, half my life has slipped by.”
That is why, Fukushima said, “I feel an affinity for stories about other
dimensions, about time travel, about immortality” (SF Magazine, Feb. 1966).
Shibano first set down his vision systematically in 1970, in a
Cosmic Dust column “SF no shisô” (“The Ideas of SF”) that he signed Kozumi Rei. But no
sooner was the first installment published than its Renaissance humanist
position was opposed in print by sf author Aramaki Yoshio, then a recent arrival
on the literary scene. The debate seemed as if it might end with a simple
acknowledgment of these different understandings of the concept of humanism, but
it stretched on unexpectedly from October 1971 through Cosmic Dust’s final issue
of 1972. In 1992, twenty years after the debate, Shibano revised and reedited
that (initial) essay for republication in my volume Nippon SF ronsôshi (2000,
Science Fiction Controversies in Japan). This is the version translated below.
“Collective Reason”: A Proposal
0. For now, let’s call it human “collective reason.” Although it is thought to
consist of the combined reason of many different individuals, it actually has an
autonomous life of its own, something beyond the scope of individual control and
understanding, like a child that no longer obeys its parents. It is a function
of the phenomenon of collective thought that emerges in human groups, following
the same pattern as individual reason. The development of civilizations and the
formation of cultures depend on it. (Jung’s “collective unconscious” could also
be seen as one of its expressions).
Needless to say, the product of individual reason attaining a kind of autonomy
is not particularly novel in itself. Science and technology, of course, and even
laws and works of art, develop of their own accord, detached from the intentions
of those who established or created them.1 Collective reason may be the
generalized case of such phenomena. Furthermore, there is nothing that
necessarily limits its site of emergence to human groups, but to deal with too
many different cases right at the outset would probably only create confusion.
I would like to state beforehand that “collective reason” is still just a
working hypothesis for interpreting reality from a different angle. The purpose
of this essay is not to prove the existence of “collective reason,” but rather
to construct the concept by considering a number of things in the history of
human groups as illustrations of its appearance.
1. Let us begin our explanation at the very beginning. Long ago, it is said,
when human beings were trying to gain a footing on the earth as an emerging
species, the first step in the cerebral process that gave birth to
“civilization” was the recognition of temporal patterns stemming from the
experience of keeping the fire lit. A fire dies out if it is not fed, but you
can’t disregard the fire’s intensity and feed it too much either. From such
experiences, the human cerebrum learned to take a set of consecutive phenomena
from what was at hand, and to identify the first event as the “cause” and the
second as the “effect.” One could consider conditioned reflexes in animals as an
example of this in its primal phase, but within human groups such correlations
established themselves in the form of superstitions as the basis of social
behavior, and went even beyond what was originally required. Early humans thus
came to live surrounded by innumerable taboos, good and evil augury, and proofs
of divine retribution, all mixed together whether they were useful or not.
For the group overall, these probably served to protect early humans from the
outside world, create stable lifestyles, and lead them to prosperity. However,
individual members undoubtedly saw many of these superstitions as meaningless,
burdensome, or even harmful restrictions on their lives. Based on the premise
that there was a system governing these superstitions, a process of analysis
began that split and developed in two different directions. The first was
“religion,” which attempts to square everything away by hypothesizing a
transcendental being at the root of all things, and the second was “science,”
which attempts to persuade by investigating the regularity of various phenomena
and connecting them together with evidence. I apologize for the terribly rough
treatment of the matter, but schematically speaking, this should be correct.
The observations that follow are based on the history of the West, in which
these two positions developed with an intensity that made them irreconcilable.
In the modern age, the absolute quality of the transcendental “god” faded, and
in response, the absolute quality of the image of “humankind” came to the fore.
It was a shift in consciousness that obeyed the same collective motivation as
before. Thus appeared “modern humanism.” Simply put, for the more advanced human
groups of the day, threats caused by problems within human society now
outweighed threats from the outside world, so rather than a “religion” that
sought the grace of god, it was humanist “ideology” (a term I use here as the
largest common denominator) that provided the most effective tools to deal with
the situation. (Of course, considered more closely, this represents the
correlation between religion and ideology on the one hand, and science on the
other. That is not the focus of this essay, however. Moreover, while it is true
that religion and superstition’s roles were dwarfed as a result of this process,
their influence has not weakened in the least, even today. This probably
demonstrates that, as members of the collective, our everyday thoughts and
actions are still governed less by rational ideas than by intuitive beliefs and
even taboos akin to conditioned reflexes.)
The crux of this long introduction is that this series of developments was not
born from the minds of individual contributors within the collective—at least
not in the sense that each one consciously headed in that direction. Rather,
this was the emergence of an effect unwilled by and unrelated to any individual
consciousness.
Moreover, while the religion and science (and, in some respects, superstition
and ideology) born from this autonomous reason played an important role in
making human beings more human, they also created a variety of harmful
influences, as everyone knows. One of my justifications for giving the title of
“reason” to a mere working hypothesis is that its products are such double-edged
swords. Each of the cases considered below exhibits this same pattern.
2. In the last ten years or so, Japan has seen tremendous changes in its social
values. It has readily assimilated many antecedent traditions from the West, and
is trying to go beyond them in some fields. Consider, for example, the declining
value of leadership positions: job titles with the word “head” in them have
begun to lose their former appeal, the number of salaried workers who shun
executive positions is on the rise, and the notion of managerial posts as
rewards has lost its meaning. What could possibly be the reason for this?
Actually, to raise that issue, we need to begin by asking why positions with the
word “head” in them rank so high in the first place. What is the origin of the
notion that leaders should be respected by their subordinates as “superior”
beings, and why should obtaining such a position turn into a lifetime goal?
Resuming our account of origins, early humans (like some herd animals) required
leaders with outstanding wisdom and experience in order to protect the
collective from foreign enemies. Even if it meant sacrificing some constituent
members, protecting and preserving the leader served to ensure the safety and
prosperity of the whole. The rise of a primitive sense of “hierarchy” [jôge] can
probably be explained through power relations. The “strongest” individuals (not
only physically but overall) obtained the right to slack off on their work, give
orders to other members of the group, or monopolize the attention of the
opposite sex. Although they came to shoulder unexpected “leadership
responsibilities” as a result, strong individuals wielded the power to bend
rules as well, though there were probably also many cases where a sense of
responsibility came first. But here too, this value system necessarily began to
develop independently of its origins [jisô o hajimeta]. As a result, the
leadership’s protective structure became exaggeratedly formal, inflated beyond
necessity, and laden with what some considered ridiculous rituals, peculiar
mores, and a variety of other supplementary elements that proceed from a sense
of hierarchy. Finally, this led to the formation of the larger “surface” or
behavior pattern represented by the collective’s particular customs—in other
words, its “culture.”
Humanism, which was to become the foundation for all of modern ideology,
originally rejected such blindly hierarchical relations. It seems that such
things don’t happen all at once, however: much of humanistic thought has not
discarded “god,” and there were even some early cases in which the existence of
a slave class was sanctioned.
One vision of what would happen if humanism’s original sense of equality were
carried forward in a linear fashion can be found, for example, in Robert
Heinlein’s Beyond This Horizon (1948). That is indeed a world in which each and
every citizen is forced to become a leader, and I think it is truly an excellent
forecast. In reality, though, it turns out that Heinlein’s honorable “armed
citizens” are those who ride in automobiles and spew exhaust fumes on the
pedestrians who correspond to the “unarmed citizens” in the novel. It seems that
science fiction predictions cannot help but be apocalyptic.
It appears, in any case, that humanism is quickly eroding in turn. I find it
very hard to believe that a future such as Heinlein’s world, strictly based on
modern individualism, will ever really come to pass. In our current
circumstances, it appears that progress toward universal equality is proceeding
less quickly than the disintegration of culture resulting from changes in the
correlation between “responsibility,” “honor,” and “reward”—the things that had
heretofore constituted a raison d’être for all members of the collective. The
decline in the status of leaders is, in the final analysis, just one aspect of
this trend.
I think it safe to say that what has sustained this development until now is a
contemporary humanism—I call it “indulgent humanism”—that has gone through broad
changes since humanism’s strict early phase when it confronted God. But what
about the future?
What awaits us on the road ahead is not merely the loss of hierarchical
relations but rather their very reversal, is it not? That is the trend, at
least. I would like to consider this matter in the following pages.
3. There was an incident at a certain American university last year in which a
group of students took university office computers “hostage” and issued demands
to the administration. Since respect for computer life was not the absolute
priority it would have been with human hostages, the incident was promptly
concluded by police forces who stormed the office. Still, for its suggestion
that we may now live in a world where computers are recognized as having
“personalities,” this incident stirs up deep emotions. While the ringleaders
didn’t seriously equate humans with machines, in their own minds at least they
did not clearly perceive the truth of modern humanism’s hypocritical demand for
the absolute priority of human life.
In due course, our own common beliefs about man-machine equivalence are bound to
reach the level of these students and go beyond it. In fact, the sense of
“hierarchy reversal” that I mentioned earlier may be humanity’s preliminary
effort to adapt to the age that most likely awaits us, in which machines will
dominate humans.2
Future computers will probably one day surpass human beings in all abilities,
take complete control of all industrial activity, and eventually make advances
into the fields of politics, art, and culture in general. When an advanced
machine attains the position of supreme being—or even just leader—people will go
about their business under its control (a control probably exercised from within
their bodies and without), just as naturally as people go around looking at
their watches today. This would truly be the advent of a full-scale computopia,
but humans won’t stand for it if they perceive this state of affairs as
“servitude” to machines. Therefore, we have begun in advance the operation of
clearing away the old notion of hierarchy. Couldn’t we think of it that way?3
So how could human beings rationalize such a system? I’ll share with you an
amusing allegory pertaining to this. It was in a round-table discussion
published in a computer trade journal:
Once upon a time, human beings could not stand bearing the burden of
responsibility for their own actions and tried to comfort themselves by finding
something to rely on. They chose the dog as their model, an animal that lives a
carefree life and entrusts its owner with the power of life and death. In
imitation, humans contrived the existence of “GOD,” which was “DOG” written
backwards, and made everyone its slave, shifting all responsibilities onto it.
Eventually, however, the existence of God came to be a nuisance. Humans became
sick and tired of always praying to God and swearing allegiance, like dogs
wagging their tails to show obedience to their masters. In time, though,
they found a more suitable model. That was the cat, an animal that lets itself
be taken care of without ever fawning on humans. Convinced of their choice,
humans are now excitedly trying to invent “TAC,” which is “CAT” spelled
backwards.
This “TAC,” of course, is something that supposes an omnipotent machine system.
Come to think of it, many early computers such as ENIAC had names ending in
“AC.” Joking aside, though, this story paints a frank picture of future society.
In the end, human beings are destined to settle into their roles as pets of the
machine system. That is the storyteller’s parting shot.
Then again, the reversal of hierarchies may already be established by that time,
so these machines that settle into the role of humans’ owners will probably be
considered "slaves entrusted with plenary power" rather than "despots." With that
caveat, it is not a bad prediction. Nevertheless, if society were to actually
turn out that way, its appearance would most likely surpass anything forecast in
science fiction.
One work of fiction that paints a full and accurate picture of such a computopia
is “The Evitable Conflict” in Isaac Asimov’s I, Robot (1950). Here again,
however, it seems that a science fiction prediction ends up as a kind of
apocalyptic statement. Considering how times change, I think it is unreasonable
to hope for a future penetrated by a scientific reason so similar to the brand
we have today.4
4. So far, I have frequently mentioned “the erosion of humanism” and, in
contrast to strict modern humanism, I have used expressions like “hypocritical
contemporary humanism” and “indulgent humanism.” In fact, it seems as if the
word “humanism,” along with the term “democratic,” has already turned into an
all-purpose pardon of sorts. Needless to say, a surplus of slogans represents a
decay of meaningful substance. (If I’m not careful, someone might argue that
“collective reason” itself is an expression of humanism. Well, obviously, I’m
kidding again, since my argument means denying the conclusiveness of this thing
we call the individual.)
At any rate, just as the basic structures of thought once changed from
“religion” to “ideology,” as the machine system advances, another turn will
surely follow. So then, from the human point of view, what will be the
fundamental principle to emerge in the wake of the ideology called humanism?
Of course, even I don’t have a clear image of it, but the one thing I can say is
that it will probably be a concept akin to “methodology.” This will be of a
completely different order from so-called scientific methodology, however. Like
the structures that preceded it, it will have to be something that can provide a
standard for the actions of individuals within the collective.5
When the substance of this “methodology” becomes visible, we will probably also
be able to grasp the now ambiguous mechanism by which “collective reason”
manifests itself. This process could likely start off with the machine system
acquiring a position as a thinking entity external to humans. At the risk of
oversimplifying, I’ll illustrate this with a familiar situation: imagine a
futuristic computer, serving as moderator at a symposium for humans, able to
synthesize all the participants’ statements and state a conclusion.
Remarks like this may come across as abrupt, irresponsible, and careless.
Therefore, I would like to state the following, just in case. The future
computer I am talking about is not some new gadget that will suddenly appear one
day. It is an entity that will come into being after the computer society
sustaining human society goes through many generations, automatically building
up a system by interconnecting and by generating ever more sophisticated
programs and subprograms, until all this progressive development converges on a
single course. Right now, we can only predict the direction of this development,
not its ultimate outcome. So to address an issue much closer to hand: in the
eyes of humans, will this entity be seen as a step in the right direction?
Since the criteria for what is considered “favorable” will probably change
between now and that time, we simply cannot make any definite statements. Nor
does it seem likely that our individual wishes will be reflected in the process
of determining the course of developments—this will be decided by the flow of
circumstances surrounding the various computer companies’ programming and
networking as they continue to evolve. No single individuals—not politicians,
religious leaders, philosophers or even the computer scientists and software
developers directly engaged in the system’s operation—can subjectively interfere
with its evolution. As the considerations and intentions of developers, the
requests and responses from users, and countless other short-sighted ideas and
feedback become haphazardly assimilated, without any larger vision or judgment
whatsoever, an enormous complex of software steadily accumulates. In this way,
the collective reason of the so-called “First World countries” that rule our
globe is already in the process of achieving autonomy today.
This is the setup for what Aldous Huxley described in Brave New World (1932).
Details aside, the strange atmosphere of the future society portrayed in that
classic work may be surprisingly on target.
5. In my student days, I believed that all metaphysical arguments could be
reduced to a physical level. I also thought that no “truth” was absolute, and
that any truth was a relative concept that might lose its status as truth,
depending on the standpoint of the person advocating it. If someone pointed out
to me that I was a rationalist, I would boast in retort: “I’m not so irrational
as to let myself be bound by an ideology like rationalism.” That in itself is
just a play on words. But one false step, and this can lead to the loss of one’s
standards of behavior. Anyone who would reject the “absolute” must remember the
ineluctable dilemma that making all things relative is, in itself, an absolute
position. This is a paradox that beleaguers all the forms of logic that don’t
resort to dogma.
As you have probably noticed by now, this very essay has, from the beginning,
contained such a paradox. To think that an individual like myself, relying on my
own diminutive faculties of reason, is arguing for a collective reason that
transcends individual reason—what could possibly be more contradictory?
Nevertheless, now that I have begun, I intend to follow this through to its
conclusion. As with Schrödinger’s cat, a single paradox surely doesn’t render
the entire system meaningless. Moreover, it might seem that I’m flirting with
paradox once again, but this very argument has the effect of shaking the
foundations of collective reason, at least in some small way.
What is clear is that reality has already advanced to the stage at which the
simple law of excluded middle no longer applies.6 The seeds from which such
paradoxes began to gain recognition and assert themselves as facts were sown
early in the natural sciences. To cite an example that falls within my own
limited understanding, physics dealt with the opposition between the particle
and the wave theories of light. And, moving into the twentieth century, it was
able to pull off the fusion of materialism and idealism by making Einstein’s
relativity and Heisenberg’s uncertainty the foundation of all its theories. This
was a process in which both the “object” and “subject” established by
materialism and idealism were removed from consideration, and the relationship
between subject and object—“observation”—was given a new reality. Of course,
qualifying it as such does not solve the actual mysteries of nature, so from a
scientist’s point of view, such naming is probably just wasted motion, a
meaningless redundancy. The only thing I have accomplished with this reasoning
is to indirectly persuade myself. I don’t pretend to have reached a true
understanding of real circumstances.
Actually, “observation” in physics is shifting its attention away from this new
reality and back towards the object. In contrast to this, let us call
orientation toward the subject "cognition," in the narrow sense of a
phenomenon that still does not make the subject entirely clear. We could mention
Zeno’s paradoxes (again, something even my own individual reason can grasp) as
an early investigation of this. Finally, the mathematician Gödel's "incompleteness"
revealed a paradox at the foundations of mathematics, and investigated the
structure of “cognition”—or rather, rendered strict investigation meaningless.
As we all know, Gödel proved logically that within a given branch of
mathematics, a system of deductive logic that includes the idea of infinity can
never constitute a closed system, as had been thought.
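(A gloss not in Shibano's text: in its modern textbook form, which the sentence above paraphrases loosely, the first incompleteness theorem says that for any consistent, effectively axiomatized theory T strong enough to express elementary arithmetic, there is a sentence G_T such that
\[ T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T , \]
so the system can prove neither the sentence nor its negation, and in that sense never closes over its own subject matter.)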
What would happen if this were applied to all systems of deductive logic? What
if all extant logic (except in a few cases where the object of discussion is
finite) were ultimately impossible to close and perfect?7
6. To tell the truth, I first began to take an interest in the existence of
autonomous ideas when I entered junior high school and learned algebra. Until
then, I had strained my small brain trying to solve arithmetic exercises, but
now I could translate them into the numerical formulae that are the language of
mathematics—setting up an equation with X as an unknown quantity, and then
solving it by applying an algorithm. There was no longer any need to go through
all the steps of a procedure, always keeping in mind something like a computer’s
“internal state.” All this permitted a partial lapse in the comprehension of
circumstances through reason—in a sense, a partial suspension of thought. This
is because on the route from superstition to science (if not on the path from
superstition to religion), collective reason left behind evidence of its passage
in the form of these formulae.
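(A minimal worked example, not in the original: the arithmetic puzzle "a number added to its double gives 36" becomes
\[ x + 2x = 36 \;\Rightarrow\; 3x = 36 \;\Rightarrow\; x = 12 , \]
and the answer falls out of the manipulation itself, with no need to re-reason about the original situation at each step.)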
It is quite a leap, but as a more developed version of this query, I could
mention an example well known to science fiction fans, the “twin paradox.” I
will not question the conclusion provided by the formulae, but neither can I
truly comprehend it with my faculties of reason. As I have been arguing, since
formulae are independent of the reasoning minds that create them, any
development in our reason that hinges on those equations also represents
something that deviates autonomously from individual reason. And it is far more
convincing than single-engined individual reason because it constitutes a
verified system (meaning a system whose usefulness we can see first hand).8
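(A gloss not in the original: the conclusion the formulae deliver can be written in one line. In special relativity the traveling twin's elapsed proper time τ is related to the earthbound twin's elapsed time t by
\[ \tau = t \sqrt{1 - v^{2}/c^{2}} , \]
so that at v = 0.8c the traveler ages only six years for every ten that pass at home—a result the formula yields mechanically even where everyday reason balks.)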
To take another more familiar example, textbooks use conservation of angular
momentum and transfer of energy to explain the moon's retreat from the earth and the
child’s top that flips upside down when you spin it. True, the former can be
accounted for by differences in the gravitational pull that different parts of
the earth exert on the moon, and the latter can be explained by the build-up of
friction on the floor. However, as all these elementary analyses become
bothersome, we resort to higher-level principles, which appear from the lay
person’s perspective to be arbitrary laws.
By now, it should be clear what I’m getting at. My hypothesis of collective
reason follows the same pattern as these examples. Regrettably, in this sphere
it is hard to discern any equivalents to the corroborating formulae or
elementary analyses of physics. It goes without saying that, in former times,
the systematization of superstition and the protection of the leadership were
rooted in the efforts of some individuals to seek profits for themselves, just
as society’s present march towards computopia is probably driven by the plans of
individuals and companies positioned at strategic points inside and outside the
system. But quite apart from the fact that quantifying the mechanisms involved is
clearly impossible, I do not believe that the actions of these figures are having
the effect that they intended.
Ultimately, this is why I have only been able to offer a small number of
predictions here. I look forward to a time in the future when we will have a
slightly clearer sense of the direction computers are advancing in, and we will
be able to list a greater number of examples. (It is also possible that one
single counterexample will cause my entire system to collapse, but this would
also resolve the question.)
With that, my efforts to construct an idea of collective reason have completed
their full circuit, and if my ideas have gained general acceptance thus far, I
suppose my efforts have borne fruit. But letting the argument run a little
further, it might be said that my efforts themselves could be regarded as a
manifestation of collective reason. If that is the case, though, all of this has
been no more than a game of language. What then is the purpose of debating the
fine points? If individual reason can never cope with collective reason, isn’t
my individual proposal itself meaningless?
Yet one cannot condemn this as a necessarily fruitless endeavor. Judgments about
the relevance of debate, or whether individual reason can cope with collective
reason, cannot possibly be made from an individual standpoint, either. Thus,
only one single criterion of evaluation remains—namely, the fairly utilitarian
criterion of whether or not this debate can give us some standard of behavior.
In other words, from this point on, the focus of our discussion will move beyond
logic and enter the realm of practice.
7. So, while I don’t like superfluous additions, I would like to talk generally
about practical aspects. What each of us can do now is focus our conscious
attention on the human condition—by means of the idea of “collective reason” or
other individually invented criteria—and decide how we will deal with that
condition as individuals. Given two judgments based on the same reasoning, the
one made with a clear consciousness of that reason is bound to be different from
the one that lacks such an awareness. Perhaps as a general result of that act of
choice, we may be able to change slightly our level of comfort with the future computopia. (Of course, we will not be able to judge the results right away, and
we might discover our choice was entirely meaningless, but what of it?)
Needless to say, from here on, it comes down to an issue of each person’s
morality. Now this may come as a surprise, given the earlier tone of this essay,
but currently the most reliable foundation for such morals continues to be
humanism. It may erode, or become just a hypocritical slogan; it may represent a
mere indulgence for the masses. But this doesn’t matter in the least. In fact,
isn’t that kind of broad, diffused humanism actually preferable to rash
ideology? Strict modern humanism is an exaggerated position situated between
domination from the top and domination from below, compared to which worldly
present-day humanism seems to have persevered through a longer history. In any
case, at our current stage of consciousness, every decision-making “individual”
is still a flesh-and-blood human being, so I suppose it is only natural that
people should seize on this body that is so intimately connected to the
interpreting subject. That which experiences pleasure or pain, that which lives
or dies, is still none other than the self, right? Indeed, but only until the
eventual domination of the machine system is complete.
It is when I am faced with humanism as an act of faith, something accepted like
an infant accepts baptism, that I want to turn away. What is important is this:
instead of clinging to humanism as an article of faith and forcing it on others
as the One Truth, one should accept it as an ethical standard whose present
necessity is proven by experience, and adhere to it until something better is
discovered. (If you will pardon a rough analogy, getting the knack of this is
much like choosing democracy as a mode of politics.)
I’m sorry but somehow this seems to have turned into a morality lecture. At any
rate, even without saying all this out loud, I think in the consciousness of
those who love and read science fiction, a common understanding is developing
along these lines. Besides, as I’ve indicated, one function of the sf we love is
surely to provide a point of departure for this questioning consciousness and
these kinds of judgments. Of course, I don’t think such an understanding is the
exclusive property of science fiction fans, nor can I say that all science
fiction fans are that way, but at the very least, this may explain why among
themselves fans are able to enjoy a different kind of conversation.
My proposed definition of science fiction is this: “Science fiction is the
general term for a sphere of literature (and its related genres) that embraces
the concept of a ‘collective reason’ that is autonomous and removed from
individual control.”9
NOTES
1. The term translated throughout the essay as “autonomy” is jisô. Written with
the Japanese characters for “running by oneself,” it literally means something
like “self propelled.” It suggests not only independence from external control
but also dynamic motion—something that literally “gets away” from us. This seems
to be in keeping with the essay’s idea that not only is collective reason free
from individual control, but that it may follow an unexpected trajectory that
takes it far away from individual reason.
2. In order to avoid misunderstandings, I would like to state that the
prediction I have outlined here is a vision of mankind’s future situated at the
extreme of optimism. That is because it is entirely based on the idea that war
will not exterminate mankind early on, that pollution and overpopulation will
not wipe out civilization, and that today’s pace of development will continue as
it has. Furthermore, I don’t want to give the impression that I’m looking
forward to the age of machine domination. It is merely inevitable, regardless of
what we may or may not hope for.
3. There is nothing outrageous about this concept of “domination from below.” It
is a system that has appeared throughout Japanese history and persists to the
present day. In other words, it is not inevitably the stronger individual who
occupies the position of leadership. This probably arose as a necessary
compromise with the emperor system. It might be one of the Japanese people’s
most valuable inventions.
4. It goes without saying that the machine system’s domination of human beings
is still far away and that those currently ruling the collective are
flesh-and-blood human beings. So this reversal will be inconvenient for those
individuals presently occupying positions of authority (and not only executives,
I should add), but for the sake of mankind’s future, we will have to ask them to
put up with it. For example, a former Alexander or Moses will cut a lowly figure
as a group tour guide, while an Admiral Nelson or Tôgô will find himself
overseeing departures and landings from an airport control tower; in other
words, although the nature of the job will remain the same, there will be a
demotion in rank. To take an extreme case, the slave driver who formerly lashed
oarsmen below deck will end up sitting in the driver’s seat of a bus. I’m sorry
for entertaining myself with free associations, but however you look at it, the
picture of passengers in seats arranged just like a galley, each pushing a
button to get off the bus, is far removed from images of the elegant ship’s
passengers of yore. The slaves, now freed from their toil by the machine’s
control over energy, are still controlled by bus timetables and service routes.
5. In its form, I wonder if this isn’t a type of model theory. That’s what it
seems to be. Moreover, there are in fact clues allowing us to speculate about
its internal properties. For example, consider the statement: “Christ taught
people to love and Marx taught people to hate” (I read this in an essay by
Umehara Takeshi). Of course, this is not a comparative theory pitting religion
against ideology but rather an expression of the dual nature of our interest in
the world around us, as seen from a point between these two paradigms. It is
important to consider this question apart from the image of “good” and “evil”
that accompanies the words “love” and “hate.” To illustrate, what would someone
teach us who represented an age between religion and the superstitions that
preceded it? If such a person existed, he or she would teach “awe and
suspicion.” And just what would a person representing the transition to the
coming “methodology” have to teach us? I still don’t know exactly how such
investigations pertain to the problem at hand, but perhaps examining the problem
from this angle will give us a glimpse of humanity’s future.
6. The law of excluded middle states that for every proposition P, either P is
true or Not P is true. (Trans.)
7. I confess that this concept of observationism is not my original idea. It is
something I arrived at by reinterpreting the idea of “pure experience” (junsui
keiken) in Nishida Kitarô’s Zen no kenkyû (1911, An Inquiry into the Good).
Those who are poor at natural sciences but well versed in Eastern thought may
find this discussion easier to understand if they replace the technical terms
“observation” and “cognition” with the well known Zen koan “the sound of one
hand clapping.” Yet it is not at all my intention to praise Eastern wisdom. It
seems to me that as a Japanese person, so-called “existential” thought is
something innately self-evident to me. The important thing isn’t attaining a
state, but rather understanding what it means when a result one pursued through
individual reason and individual logic ends up connected to something that
contradicts both these things.
8. In other words, no matter how it may contradict individual common sense,
conclusions obtained through the application of such mathematical methods are
“correct” and automatically tend toward the next phase of development. Of
course, in actuality, I don’t think there are too many cases in which scientific
research has followed that exact procedure, but that doesn’t change the fact
that this is the fundamental pattern. In advanced fields of physics, one can see
any number of exemplary cases. Such elements are rare in humanistic fields of
research such as sociology or psychology, where no matter how rigorous one’s
theory may be, one is not supposed to confront people with facts that violate
common sense. This probably helps to explain why early works of science fiction
were essentially “natural science novels” [shizen kagaku shôsetsu].
9. This is accompanied by a series of definitional systems. If we follow David
Hilbert, who pioneered the study of logic and the foundations of mathematics, we
must establish a system of axioms before constructing any definitions. But for
an argument consisting of natural language [kotoba], the resulting assertions
might be too vague to be understandable. So we need at least some indication or
index of the nature of the axioms. As a result, the definitions, which are one
step removed from the logical system, become a declaration of the theorist’s
position. Some may doubt that, but when we debate the nature of science fiction
in our everyday lives, each one of us has our own preferred models of the genre
in mind, so my own definition of sf is nothing more than an expression that
integrates these models. But if, in this process, I can discover some
commonality with the models that others have embraced, we can go beyond swapping
our favorite sf stories and hope to start developing an effective theory.
Furthermore, in my own case, I cannot help trying to analyze why I am so taken
with science fiction as well, and as a result of that analysis I have
necessarily arrived at a definitional system.
Leaving the details aside for another occasion, what I have attempted with this
system is something like “an explanation for our non-human observers.” I admit
that the task has been beyond my abilities, but if I were to say that I was
prompted by the description in the beginning of Stanislaw Lem’s The Astronauts
(1950), perhaps my readers will understand.
Others may ask why I bother at this point. One could cite Abe Kôbô’s objection:
“As soon as it is given the name ‘lion,’ the lion is changed from a legendary
being into a mere beast.” I don’t think science fiction is something that can be
named or defined quite so easily, however; and it is for that very reason that I
cannot help but be interested in its definition.