It is often said that we can never truly know the minds of others, because
we can’t “get inside their heads.” Our ability to know our own
minds, though, is rarely called into question. It is assumed that your
experience of your own consciousness clinches the assertion that you
“know your own mind” in a way that no one else can. This is a mistake.

Ever since Plato, philosophers have, without much argument, shared common sense’s confidence in the mind’s knowledge of its own thoughts. They have
argued that we can secure certainty about at least some very important
conclusions, not through empirical inquiry, but by introspection: the
existence, immateriality (and maybe immortality) of the soul, the
awareness of our own free will, meaning and moral value. In a Stone
column Gary Gutting explained
how this tradition continues to manifest itself in contemporary
philosophy as the search for “a ‘transcendental’ or ‘absolute’
consciousness that provides the fuller significance of our ordinary
experiences.” Thomas Nagel has invoked the same source to trump science in this publication as well.

Introspection, “the mind’s eye,” assures us with the greatest confidence that it is
the best, in some cases the only authority on how the mind works,
because we all think it has direct, first person access to itself. We’re
all very confident that we just know what’s going on in our own minds,
from the inside, so to speak.

Yet research in cognitive and behavioral sciences increasingly undermines
that confidence. It seems hardly a week goes by without another article
in the media reporting counterintuitive laboratory findings
by empirical psychologists studying cognition, emotion and sensation.
What makes many of these results remarkable is their consistent
violation of expectations, assumptions and prejudices forced on us by
our own conscious awareness.

In fact, controlled experiments in cognitive science, neuroimaging and
social psychology have repeatedly shown how wrong we can be about our
real motivations, the justification of firmly held beliefs and the
accuracy of our sensory equipment. This trend began even before the work of psychologists such as Benjamin Libet, who showed that the conscious feeling of willing an act actually occurs after the brain process that brings about the act — a result replicated and refined hundreds of times since his original discovery in the 1980s.

Around the same time, a psychologist working in Britain, Lawrence Weiskrantz,
discovered “blindsight” — the ability, first of blind monkeys, and then
of some blind people, to pick out objects by their color without the
conscious sensation of color. The inescapable conclusion that behavior
can be guided by visual information even when we cannot be aware of
having it is just one striking example of how the mind is fooled and the
ways it fools itself.

Meanwhile, philosophy has largely persisted in its centuries-long Cartesianism —
following Descartes’ insistence in his “Meditations” (1641) that our
knowledge of our own minds’ nature is more reliable than any other
belief. Galen Strawson illustrated this centuries-old conviction in a recent essay in The Stone: “We know what conscious experience is because the having is the knowing: Having conscious experience is knowing what it is.” He writes: “It is in
fact the only thing in the universe whose ultimate intrinsic nature we
can claim to know.”

Despite these assurances from philosophy, empirical science has continued to
build up an impressive body of evidence showing that introspection and
consciousness are not reliable bases for self-knowledge. As sources of
knowledge even about themselves, let alone anything else human, both are
frequently and profoundly mistaken.

To see the mistake we need to recognize another mistake Descartes made:
his denial that other animals have any mental lives at all. Careful
field observation by primatologists beginning with Jane Goodall revealed
that apes have well developed “theories of mind.” They engage in “mind
reading” to make (sometimes good) guesses about the future behavior of
others. Mind reading is psychologists’ shorthand for treating other
animals as having something like desires and beliefs that work together
to produce choices in behavior.

After a certain point in the evolutionary past, organisms began needing to
predict whether others posed threats in order to protect themselves, and
later needed to coordinate to attain outcomes not achievable alone.
This environment strongly selected for mind reading. Had variation in
cognitive abilities not hit on this adaptation, puny creatures like us
would never have survived in the face of savanna megafauna.

Mind reading, even in our own hands, is a very imperfect tool: We have to go
on others’ behavior (including verbal behavior). We can’t really tell
with much precision exactly what others believe or want, because we
can’t get inside their heads. So our predictions are often pretty vague
and frequently false. Like other Darwinian adaptations, mind reading is
an imperfect, “quick and dirty” solution to a “design problem.” It was
just good enough that, equipped with this theory of mind, we managed to
gradually climb to the top of the food chain. We were able to do so in
large part because once mind reading was in place, human language, which requires it, became possible.

fMRI research, the study of autism, and experiments on infant “false-belief” detection have shown that mind reading is a relatively well-localized module in the human brain: innate in structure, subject to breakdown (often genetically caused) and identifiable in infant and toddler development.

Most important, there is compelling evidence that our own self-awareness is
actually just this same mind reading ability, turned around and employed
on our own mind, with all the fallibility, speculation, and lack of
direct evidence that bedevils mind reading as a tool for guessing at the
thought and behavior of others. When, as David Hume said, we look into
ourselves, all we ever see are images, all we ever hear are silent speech sounds. These sensations (along with emotions) are the only
contents of consciousness, the only things introspection can use to
figure out what we are thinking. The resources of introspection are
exactly the same as the resources our minds work with to explain and
predict the actions of others: sensory data provided by sight, hearing,
smell, touch (and sometimes taste, too).

Of course we have a lot more sensory data — images and silent speech
instead of visual experience and heard speech — to go on in trying to
figure out our own desires and beliefs than what other people’s behavior
reveals about what is going on in their minds. That’s part of what
makes for the illusion that we know our own minds so much better. But
the difference is only the amount of data, not its quality or source. We
never have direct access to our thoughts. As Peter Carruthers first
argued, self-consciousness is just mind reading turned inward.

How do we know this? Well, Hume would have answered that introspection
tells us so. But that won’t wash for experimental scientists. They
demand evidence. Some of it comes from the fMRI work that established
the existence of a distinct mind-reading module, more from autistic
children, whose deficits in explaining and predicting the behavior of
others come together with limitations on self-awareness and
self-reporting of their own motivations. Patients suffering from
schizophrenia manifest deficiencies in both other-mind reading and
self-mind reading. If these two capacities were distinct, one would
expect at least some autistic children and schizophrenics to manifest
one of these capacities without the other.

That we read our own minds the same way we read other minds is evident in
what cognitive science tells us about consciousness and working memory —
the dual imagistic and silent-speech process that we employ to
calculate, decide, choose among options “immediately before the mind.”
The most widely accepted psychologist’s theory of consciousness identifies it as a mode of “global broadcast” solely from sensory modalities to “executive” (deciding) and “affective” (feeling) systems that act on this sensory input. Self-consciousness has nothing else to
work with but the same sensory data we use to figure out what other
people are doing and are going to do.

The upshot of all these discoveries is deeply significant, not just for
philosophy, but for us as human beings: There is no first-person point
of view.

Our access to our own thoughts is just as indirect and fallible as our
access to the thoughts of other people. We have no privileged access to
our own minds. If our thoughts give the real meaning of our actions, our
words, our lives, then we can’t ever be sure what we say or do, or for
that matter, what we think or why we think it.

Philosophers’ claims that thought, by reflecting on itself, reliably reveals our nature, grounds knowledge, gives us free will and endows our behavior with moral value are all challenged. And the threat doesn’t stem from some
tendentious scientistic worldview. It emerges from the detailed
understanding of the mind that cognitive science and neuroscience are
providing.

Alex Rosenberg is co-director of the Center for Social and Philosophical Implications of
Neuroscience in the Duke Initiative for Science and Society. His second
novel, “Autumn in Oxford,” will appear in August.