
Thought, Consciousness, Brains and Machines

Adrian Brockless on the proper way to use the words ‘thought’ and ‘consciousness’.

The concept of thought is one with which people make mischief. Is thought to do with what it is like to have experiences? How can we know for sure that others think as we do? In what ways are thought and consciousness related? Can an unconscious entity think? Perhaps thought emerges from ever-increasing degrees of computational sophistication? Yet there are machines that are capable of far more complex computational manoeuvres than human beings, but to which we do not accord thought.

A Solid Grounding in Thought

Both neuroscience and artificial intelligence require a clear understanding of the concepts of thought and consciousness. If they do not have this then, by definition, one cannot be clear about what such investigations reveal. But can philosophy really inform the kinds of scientific investigations now taking place in AI and neuroscience? Many believe not. Nevertheless, our understanding of concepts and how they operate is a philosophical matter, not a scientific one. Scientific discoveries can make a concept redundant but they cannot falsify the concept itself, that is, show that the concept itself doesn’t make sense. To take a well-worn example, the concept of ‘phlogiston’ was once associated with the process of burning. Phlogiston was believed to be a substance in any object that could burn. The existence of phlogiston in flammable objects has long since been disproved and we now have much clearer ideas about how combustion takes place. But properly speaking, the concept itself is neither true nor false, merely redundant – unless one is writing a treatise on the history of science (or a philosophical article about concepts).
Here’s another example of a conceptual question that is philosophical as opposed to scientific. I’ve adapted this from one supplied by Ludwig Wittgenstein (1889-1951) in his Blue and Brown Books. Imagine that the electron microscope has just been invented. You decide to train it on part of the table in front of you – an object you’ve always considered to be solid. To your great surprise you discover that the table is made up of atoms with gaps in-between. The dense nuclei of the atoms make up only a tiny proportion of the table’s volume and are separated by relatively vast spaces that are essentially empty. Have you disproved the concept of solidity? Or have you merely discovered something more about what it means for something to be solid?
How you answer this question will, of course, determine what (if anything) you count as a solid object; but there is nothing in science that can help you say what a correct concept of ‘solidity’ should be.
So after the discovery that solid objects comprise atoms which are actually vibrating, with gaps in-between, what should we make of the concept of solidity? Unlike the concept ‘phlogiston’, the concept of solidity has not been made redundant. But has it been proved true? No more so than the concept of phlogiston has been proved false.
Firstly, if one argues that the discovery that objects are made of atoms with gaps in-between means that the concept of solidity has been proved false, the scientific explanation becomes confused in the sense that this would mean that the phenomenon being explained – the solidity of everyday objects – is not solidity at all! This might be acceptable if solidity were something about which we merely speculated, but that is not so. We put our mugs and pens down on solid surfaces, and we contrast solid objects with stuff such as water and air which are not solid. The concept of solidity has uncontroversial everyday uses. So, far from proving the concept of solidity to be false or redundant, the scientific discoveries instead tell us more about the nature of solid objects. To claim that the concept of solidity has been proved false by scientific discoveries is therefore poor philosophy. It is poor philosophy because the purpose of the scientific investigation is to explain how it comes to be that objects are solid, not whether the concept of solidity is itself true, false, or non-existent. (And it cannot be false because it is a concept we use every day perfectly meaningfully, and indeed, could not do without.) It is also poor philosophy because it wrongly suggests that scientific investigations explaining the nature of solidity have implications for our everyday applications of the concept. They don’t. This is why what we count as solidity, and how we do so, is a philosophical question as opposed to a scientific one.
The same holds for the concepts of thought and consciousness. The meanings of these concepts cannot be determined by empirical discoveries (although, of course, they can be extended by them). As I will argue, neither can advances in technology themselves determine whether or not a machine thinks.

Thoughtful Uses of ‘Thought’

We have always used terminology associated with thought metaphorically. It might be said of a book, if it falls repeatedly from a bookshelf, that ‘it has a mind of its own’; or a car might be called ‘grumpy’ if it refuses to start on a cold winter morning. These are unproblematic metaphorical uses of terminology associated with thought. No one is thereby suggesting that books really do have minds or that cars really are grumpy. But in recent years there has arisen a tendency in the scientific and philosophical communities to apply such metaphors in some areas in a literal way, perhaps forgetting that they are metaphors. Some claim that it is literally one’s brain that thinks, infers, hypothesises and so on, as opposed to saying ‘I think’, ‘I infer’, ‘I hypothesise’, etc, as a result of my brain’s activities. Is this merely a question of semantics? I think not, for the following reasons.
We know what ‘to think’ and ‘to infer’ mean through our uses of them in everyday contexts. That’s how we master using these terms, and how they’re taught to us. The same is true in relation to the concept of solidity. But to say ‘my brain thinks’ or ‘my brain infers’ does not mean anything in our everyday discourse. I do not ordinarily say, for example, that ‘my brain is thinking’ when considering a complex mathematical problem.
This is important because our normal language use provides the criteria for legitimate ascriptions of what does and doesn’t count as thought. These allow for the development of investigations aimed at telling us more about thought and the brain. If when devising our research we violate the grammar of such concepts – if we use the terms ‘think’ or ‘consciousness’ without our use being anchored in the ways in which these words are normally used – then whatever we discover will be compromised by our confused uses of those terms. We might for instance then claim that something thinks when, in fact, it does not – as when computer scientists claim that machines think and neuroscientists that brains think. This is similar to the claim that there are no solid objects because it has been discovered that solid objects are comprised of moving atoms and gaps, in the sense that it involves a confusion over the use or application of concepts.
I am certainly not saying that brains have nothing to do with thought. Obviously, without brains there could be no thought, no sight, no hearing, and no consciousness. Nonetheless – to update another example from Wittgenstein – imagine if there were just live brains in vats of nutrients, with no human bodies and no human behaviour, but that these brains displayed normal human neurological activity. What would give us so much as the idea that they were thinking? The neurological activity itself? Of course not! What allows us to talk of thinking in relation to neurological activity is originally the behaviour of human beings. Such behaviour is related to the rules which govern our correct uses of the word ‘thinking’. We believe someone is thinking because their behaviour shows us that they are. Of course, the concepts of ‘brain’, ‘thought’ and ‘neurological activity’ are very much bound up together, in that brains and neurological activity are required for thought, and we associate certain kinds of thought with certain neurological patterns. But neurological activity is not identical with thought. So what, exactly, do we mean when we say that something thinks?
Recent developments in AI have led many to believe that machines can think. After all, computers can carry out countless tasks far faster than their human creators: they can solve phenomenally complex mathematics problems in a few seconds; they have beaten the world’s best human chess masters; robots can play violins, and so on. Indeed, in Flesh and Machines: How Robots Will Change Us (2003), Rodney Brooks, Professor Emeritus of Robotics at MIT, argues that his students respond to robots as they do to their human peers (Brooks fails to notice that turning off a robot when leaving the lab is not reacting to it as one would to a human being). But does this mean that such robots and computers are becoming conscious? And is it possible for thought to take place without there being consciousness?
I will return to these questions presently. For the time being, I want to explore the relationship between what machines can do and the possibility of thought.

The Rules of Thought

There have been calculating machines of one form or another for thousands of years. The abacus, for example, was invented over 2,500 years ago. But can one legitimately say that an abacus is a basic form of computer? Many would say that an abacus is not a computer, not least because it is neither electronic nor automated. However, both the abacus and electronic computers operate through rules that relate to particular kinds of results: an abacus was designed to facilitate certain forms of calculations on the basis of particular types of operations; and computers function through the operation of algorithms. But do (any) computers follow rules, or merely act in accordance with rules? I think this Wittgensteinian distinction is important when considering the issue of machine consciousness.
Computers and robots act in the ways they do because of how they have been designed and made. Whether or not their tasks are performed correctly is down to their internal mechanisms. In other words, if all the connections in the computer’s hardware have been set up correctly, its program contains no mistakes and the inputs are correct, then the output will also be correct. Outputs are causally determined by the inputs and the processes that are designed to produce the outputs. This is also, in essence, the currently popular ‘functionalist’ model of the human mind. Outputs, therefore, are an example of causal inevitability. But is this an accurate description of the nature of thought? Again, it is not uncommon to hear computer scientists claim that it is. Let’s consider this for a moment.
In Volume 3 of his phenomenally acute commentary Wittgenstein: Meaning and Mind (1993), Peter Hacker points out that one could create a computer by building a very complex miniature railway set. Points, storage depots, different kinds of carriages and trucks, would all act in ways determined by the tracks and be used at different times in different combinations depending on the tasks involved. (The tracks themselves are the rules with which the computer must act in accordance). When this model train-set computer is in operation, would one say that it’s thinking? Obviously not. Indeed, as a boy, I built a fairly complex model train set, and I can confirm that it was not thinking! Hacker goes on to point out that today’s computers are, in essence, very fast and more complex versions of this idea. Does speed make a difference in terms of whether computers think? If so, at what speed might we say that thought emerges? Remember it is the same tasks that are being performed, just more quickly. What about ever-increasing degrees of complexity? One could build a massively complex but slow train set; would that think?
But there is another and more crucial pair of distinctions that relate to rule-following: the distinctions between causal and logical determination, and between acting in accordance with a rule and following a rule.
As I’ve already mentioned, causal determination is where the outputs of a machine are causally determined by its inputs and processes. Computers have been designed to act in accordance with rules. If something goes wrong with the internal mechanism, then the outputs will no longer be in accordance with the rules it has been designed to follow.
However, what makes the outputs correct or incorrect is not the causal inevitability of the process. Rather, logical determination is the practice of following a rule which establishes what counts as the correct or incorrect output of machines that have been designed to act in accordance with rules. Put another way: the determination of the correctness of any kind of computation cannot be causal; it must be logical. There is nothing either logical or illogical about causal determination, or acting in accordance with rules. By contrast, logical determination is the process of following a rule. So just because a machine is causally determined to replicate or emulate human behaviour this does not mean that it thinks.
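To make the distinction concrete, here is a minimal sketch in Python (my illustration, not the author's or Wittgenstein's): the machine's output is causally inevitable given its mechanism and inputs, while the correctness of that output is settled only by the rule of addition, a standard applied from outside the mechanism.

    # A machine that acts in accordance with the rule of addition.
    def machine_add(a: int, b: int) -> int:
        # Causal determination: given these inputs and this mechanism,
        # this output is inevitable.
        return a + b

    # Logical determination: whether the output is correct is settled by
    # the rule of addition itself, not by the causal process.
    assert machine_add(2, 3) == 5

    # A faulty mechanism is just as causally inevitable as a sound one;
    # only the rule lets us call its output a mistake.
    def faulty_add(a: int, b: int) -> int:
        return a + b + 1

    print(faulty_add(2, 3))  # 6: causally determined, logically incorrect

Nothing in the running of either function amounts to following the rule of addition; the rule figures only in our assessment of what the functions produce.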
Now according to Wittgenstein in Philosophical Investigations (1953), the criteria for saying one is following a rule correctly, or not, are whether one’s behaviour fits with that specified by the rule. The correct following of rules therefore obviously requires certain forms of behaviour from us. But here the rules being followed do not cause our behaviour in the way that they cause a computer’s behaviour; rather, they express an understanding of what we are doing.
Radio head old school © Woodrow Cowher 2019. Please visit woodrawspictures.com

Demonstrating Nothing Like Thought

So what does thinking amount to?
As Hacker points out, if a being can think it must also make sense to say of it that it can judge, reflect, be open-minded, dogmatic, impetuous, thoughtless, careless, sceptical, cynical, optimistic, unsure, and so on. All of these and many more traits only make sense within the context of human forms of life. In other words, the concept of thought only makes sense within a weave of life of which all these other attributes are a part. Not only that, but consciousness is presupposed in relation to something that is understood as reflective, contemplative, rash and so on. A mechanical process with a causally determined output neither establishes the rules for such forms of life nor is internally related to them.
But surely one might still argue that algorithms of sufficient complexity can give rise to a machine that thinks? Surely, given sufficient computational power, a machine can be created that exhibits all the attributes I’ve just listed, and more?
Firstly, let’s suppose we agree that only conscious beings can think (whilst I am saying that consciousness is necessary for thought, I am not suggesting that thought is necessary for consciousness). If complexity gives rise to thought, this implies that consciousness somehow magically emerges out of complexity in the brain. However, think about this in relation to the model railway computer again, and you’ll see that it is not just an issue of complexity. What’s more, machines such as laptops already exist that can do some complex tasks far more quickly and efficiently than human beings, but which we are not even remotely inclined to class as conscious. By contrast, we think of our pet dogs and cats as conscious; but these animals are far less complex in terms of what they are able to do than many machines that already exist and to which we do not accord consciousness. It seems then that correctly applying the concept of consciousness to something does not depend on the computational complexity of that thing. And if we cannot derive consciousness from complexity then neither can we derive thought from complexity.
Secondly, a complex computer is still merely the product of the behaviour of its creator, and so it acts in accordance with rules as opposed to following them. In a similar sense to how aeroplanes and steam engines are the products of their creators, designed to make the task of travel easier and more efficient, so a computer is created with particular goals in mind – for example, to do calculations that would take human beings far longer to complete. Computers and robots are, in this respect, merely products and extensions of human behaviour. Accordingly, computers only act in accordance with rules, as opposed to following rules that are genuinely their own.
But what about computers that develop their own algorithms? Are these computers not thoughtful? Are they not conscious?
Computers that create their own algorithms will have thereby created a set of rules, perhaps ones not previously thought of by human beings. However, these rules will have been causally determined, as opposed to logically so. More to the point, we can only understand the rules that the computer has developed by reference to our existing practices. Put another way: if the computer’s algorithms came up with supposed ‘rules’ which resulted in output (that is, behaviour) that bore no resemblance to human forms of life, then they would not be rules, in the sense that they would not meet the required criteria in order to be understood by us as demonstrating rule-following.
Thirdly, the meaning and significance that the behaviour of others has for us – which may be grounded in our primitive reactions, but which also provides the criteria for saying whether or not a rule has been applied correctly – is what produces our conceptual landscape.
As demonstrated by the ‘brains in vats’ example, neurological activity in the brain is not in itself meaningful to us. Rather, we have discovered that such activity is correlated with our behaviour, which we do find meaningful; and we understand thought in terms of this meaningful behaviour. So one cannot define thought as neurological activity in the brain any more than one can define it as complexity in a computer. My brain itself cannot think. Rather, thought is known through what Wittgenstein referred to as ‘an attitude towards a soul’. He was not speaking of attitude in the way one might say that a gangsta rapper has attitude. ‘Attitude’ as Wittgenstein construes it is exemplified in the ways in which we are naturally disposed to react towards our fellow human beings, and other animals – what we might also think of as our primitive reactions. Attitude also links the concept of thought to that of consciousness insofar as it is our dispositions that determine the nature of our responses to people, animals, and machines. Our dispositions condition our conceptual landscape and so set the rule-governed criteria for the applications of our concepts. In other words, the different ways in which we are naturally disposed to respond to the world is both constitutive and expressive of how we conceive of the differences between people, animals, and machines. The correct use of the terms ‘thought’ and ‘consciousness’, therefore, is not a question of technological or neurological complexity, nor of the tasks that machines are able to perform, but rather, an aspect of the way in which we are disposed to respond to human beings as opposed to machines. Bluntly, we say that people think because we naturally respond to them as thinking beings. And at present we are unable to respond to machines as conscious beings.

Conclusion

Computers will become ever more complex, but nothing in their complexity guarantees that they think or are conscious. Only in the event that we create a machine that invites us to naturally respond to it as we do our fellow human beings will we have created a machine that we can say thinks. But then, we will have created a person – except a person that’s made out of non-organic parts (paradoxically, that fact alone may be enough to prevent us from responding to it in ways that allow us to correctly attribute thought to it). Of course, such a machine should only perform tasks at the speed human beings can: were it to have much greater computational power or response speed, we would not respond to it as we do to our fellow creatures.
© Adrian Brockless 2019
Adrian Brockless has taught at Heythrop College, London, and at the University of Hertfordshire. He was Head of Philosophy at Sutton Grammar School from 2012 to 2015, and currently runs his own series of adult education classes in philosophy. Email: a.brockless@gmail.com.

Buddhism and self-deception








The Golden Rock at Shwe Pyi Daw, Kyaiktiyo, Burma (now Myanmar), in 1978. 
According to legend, the rock is balanced on a strand of Buddha’s hair. Photo by Hiroji Kubota/Magnum
Katie Javanaud is a DPhil candidate in the faculty of theology and religion at Keble College, University of Oxford. She is interested in Buddhist metaphysics and Buddhist ethics (theory and application).
Self-deception seems inescapably paradoxical. For the self to be both the subject and the object of deceit, one and the same individual must devise the deceptive strategy by which they are hoodwinked. This seems impossible. For a trick to work effectively as a trick, one cannot know how it works. Equally, it is hard to see how someone can believe and disbelieve the same proposition. Holding p and not-p together is, straightforwardly, to contradict oneself. 
Despite its seemingly paradoxical qualities, many people claim to know first-hand what it is to be self-deceived. In fact, philosophers joke that only prolific self-deceivers would deny that they experience it. Nevertheless, there are skeptics who argue that self-deception is a conceptual impossibility so there can be no genuine cases, just as there can be no square-circles.
Yet self-deception seems undeniable in spite of its alleged incoherence. For the fact is, we are not always entirely rational. Certain situations, such as falling in love or being in the frenzied grips of grief, heighten susceptibility to self-deception. Betrayed lovers everywhere, anxious to discard the damning evidence of infidelity, know precisely Shakespeare’s meaning at sonnet 138:
When my love swears that she is made of truth, 
I do believe her, though I know she lies
Self-deception is so curious a thing that it is a source of intrigue in the arts and sciences alike. Biologists such as Robert Trivers, for example, have begun to investigate self-deception’s evolutionary origins, probing its function and potential value.
On the one hand, evidence suggests that specific instances of self-deception can enhance wellbeing and even prolong life. For example, multiple studies have found that optimistic individuals have better survival rates when diagnosed with cancer and other chronic illnesses, whereas ‘realistic acceptance’ of one’s prognosis has been linked to decreased life expectancy. On the other hand, self-deception seems like the ultimate delusion. Simultaneous belief and disbelief in a proposition is surely symptomatic of irrationality, placing one’s mental health and capacity for reason in jeopardy.
Existing debates face the challenge of connecting the philosophical and the practical aspects of the problem. Either self-deception is ruled out as incoherent, or it is accepted as a brute fact. If the former, the skeptic must justify the countless cases where it appears to occur. If the latter, some serious revisions to our conception of self are required.
Ideally, we should seek a single solution to both dimensions of the problem so that our explanation of self-deception also points the way to its prevention. For, while deceiving ourselves might occasionally seem to our advantage, in the long term it is self-alienating. And as we shall see, Buddhist approaches to self-deception achieve the synthesis of practical and philosophical resolutions more fully than do the dominant Western theories.
Self-deception belongs to a family of concepts involving psychological manipulation, such as wishful thinking, repression, denial and dissociation (emotionally removing oneself from a traumatic experience to avoid confronting it). Skeptics about self-deception struggle with all these concepts because they normally think of the self as internally unified and self-aware, making concealment of unwelcome self-knowledge impossible.
However, this contradicts psychoanalytic theories on the conscious and unconscious mind. It also goes against experience. We don’t always know ourselves as well as we think, and sometimes we convince ourselves of that which is evidently false or overwhelmingly improbable. The fine line between ambition and self-deception is often manifest around New Year, when many of us are forced to concede that our goals have crumbled from the heady heights of self-improvement plans into delusional wishful thinking.
If self-deception is paradoxical, the experience itself is even more perplexing. Unlike the immediacy of other experiences, how it feels to be self-deceived is knowable only retrospectively, after the spell has been broken.
Take Oedipus. Anxious that the prophecy of patricide and incest will be fulfilled, he leaves his home and family. Though he is genuinely shocked and sickened at the discovery of his true identity, there are indicators throughout the play to suggest his wilful ignorance. Given his fear of patricide, why does Oedipus continue blithely on his way after killing a man? Given his fear of committing incest, why does he marry a widow without first piecing the puzzle together? Such neglect leads the audience to suspect that, somehow, Oedipus was dimly aware of his identity before its full disclosure, and that he either repressed this awareness or deceived himself to avoid the painful truth.
Thankfully, for most of us, our small acts of repression, denial and self-deception are more mundane. For instance, data gathered through self-reporting on consumption often delivers distorted results, reflecting the respondents’ preferred self-image rather than any objective facts. It would be foolish to read self-deception into every omitted glass of wine or unrecorded biscuit – embarrassment and forgetfulness are equally plausible explanations. Even if self-deception is the root cause, this behaviour seems fairly harmless.
Somewhere on the scale between extremely damaging and totally insignificant self-deception we find examples that resonate. If ancient wisdom traditions are right and the quest for self-knowledge is a fundamental part of human flourishing – as in the Socratic maxim ‘know thyself’ – then self-deception undermines the central aims of the good life. Convincing ourselves of what is manifestly false or impossible is both existentially crippling and socially harmful. This propensity is sometimes referred to as the mal du siècle: a general malaise triggered by unsettling awareness of our potential and identity.
In Being and Nothingness (1943), Jean-Paul Sartre invokes the concept of mauvaise foi, or bad faith, to explicate self-deception. He argues that many people are afraid to confront themselves, preferring to follow prescribed norms and fulfil pre-assigned roles rather than to strive for self-realisation. He illustrates bad faith with a few examples: a woman’s hesitant reaction to a man’s advances, a waiter’s self-identification as ‘nothing more’ than a waiter, a homosexual’s unwillingness to acknowledge his sexuality. These strategies of postponement and misrepresentation allow the person to conceal their true nature even from themselves. Sartre deplores this mode of life, for, while such strategies might serve as effective coping mechanisms in the short term, in the long run they are existentially paralysing.
This kind of self-deception, the sort backed up by conformity to norms or stereotypes, is extremely difficult to detect. And, naturally, the most pervasive forms of self-deceit are the hardest to root out. This is especially clear in cases of discrepancy between what a person professes, and how they feel or behave.
Of course, the presence of a bias does not automatically imply self-deception. People can discriminate unknowingly, even against their will, and there is a world of difference between ignorance, and wilful ignorance of one’s own biases and prejudices. As the US civil rights advocate Jesse Jackson put it in 1993: ‘There is nothing more painful to me at this stage in my life than to walk down the street and hear footsteps … then look around and see somebody white and feel relieved.’ Still, discrepancy between belief and behaviour can sometimes signal self-deception, as can the language we use.
It is a mistake to treat the philosophical and the practical aspects of the problem of self-deception as entirely distinct. For what use is an explanation of this phenomenon unaccompanied by a strategy for its alleviation? Prominent Western theories on self-deception tend to leave the practical problem unresolved. But there is an alternative, Buddhist approach. The artful combination of three Buddhist theories provides a philosophically therapeutic perspective on self-deception. Before turning to this response, however, let’s delve deeper into the concept of self on which the paradox depends.
Skeptics about self-deception claim that any genuine examples would need to satisfy impossible conditions, such as the knowing-dupe or the contradictory belief conditions. They claim that satisfying the first condition means being duped by one’s own duplicitous scheme while satisfying the second is tantamount to abandoning reason. Such skepticism represents the minority view, since so many examples of this supposedly impossible phenomenon are clear. Yet it remains a theoretical option.
For the skeptic’s defeat we must show either: (1) that the fact of somebody holding inconsistent beliefs is reconcilable with the idea of a unified centre of conscious beliefs; or (2) that the skeptic misconstrues the conditions under which self-deception occurs. Arguably, the skeptic’s account of self-deception reduces the complexities of human psychology to what is possible at one single moment in time, under the assumption that no sane, cognitively competent person simultaneously believes p and not-p.
But if this is an argument against self-deception, it is time to revise our model of selfhood. Indeed, far from precluding the possibility of self-deception, the multifaceted nature of consciousness might actually help to explain it.
Western philosophy has produced several responses to the paradox of self-deception, the most recurrent of which are the temporal partitioning and psychological partitioning approaches. Both challenge the (still dominant) conception of the self as completely internally unified and fully self-aware. They are designed to show that self-deception is paradoxical only if the Cartesian model of the self as a non-composite, immaterial substance – whose purity we imagine we partake of – is accepted. Without this idea of the self, self-deception is a puzzle, but it is not a paradox.
Some leading philosophers in consciousness studies and the nature of mind reject the Cartesian concept of self. Aside from the lack of empirical evidence for such a self, it would surely be too abstract and impersonal to bear a connection with the individual of lived experience, who engages and interacts in the temporal world. But the influence of the Cartesian model has historically been so significant that it continues to shape the debate. Although both temporal partitioning and psychological partitioning proposals challenge this model of the self, they do not resolve the practical problem of eliminating self-deception.
Advocates of temporal partitioning might invoke the appointment case to explain how self-deception works. The philosopher Brian McLaughlin at Rutgers University in New Jersey summarises it as follows:
In order to miss an unpleasant meeting three months ahead, Mary deliberately writes the wrong date for the meeting in her appointment book, a date later than the actual date of the meeting. She does this so that three months later when she consults the book, she will come mistakenly to believe the meeting is on that date and, as a result, miss the meeting.
This is supposed to show that self-deception does not require simultaneous belief in p and not-p. Instead, all that is required is an intention to induce the belief not-p at the time of believing p. In this case, there is no time when Mary believes both that her appointment is on Thursday and that it is on Friday. Rather, she relies on her faulty memory so that, when she eventually consults her diary, she will have forgotten her act of deception.
We can contest the likelihood of Mary’s forgetting. Indeed, if the prospective appointment (let’s say, with the dentist) elicits such a reaction, she will surely struggle to put it out of her mind. What matters though is that temporal partitioning challenges the idea that the act of deception and the experience of deceit must coincide. Deceiving oneself therefore largely resembles deceiving somebody else, and is just a more unusual case of lying.
For the skeptic, this account won’t cut it. The obvious objection is that temporal partitioning seems not so much to explain self-deception as to explain it away. After all, if after three months Mary has forgotten the true date of her appointment, doesn’t this show that the Mary who deceived is, in some sense, a different person from the Mary who is deceived? If we distinguish cases of self-deception from cases of self-induced deception, we might protest that the appointment case is an example only of the latter. And even if we are satisfied that temporal partitioning explains how self-deception occurs, it cannot tell us how to overcome it.
Another common explanation of self-deception appeals to psychological partitioning between the different facets of the self. On this view, self-deception does involve simultaneous assent to p and not-p but this is not paradoxical because of the multifaceted nature of the self. Rather than treat the self as fully integrated, we should see it as a process, the product of a complex structure composed of various elements. One part of the self can conceal its beliefs from another part, making self-deception possible.
It is only in moments of introspection that the illusion of a unified self is cast into doubt. An advantage of this theory is that it accommodates different levels of self-awareness within one individual, explaining discrepancies between the conscious and unconscious mind.
The philosopher Amélie Oksenberg Rorty at Harvard Medical School illustrates how this might work with the example of Dr Laetitia Androvna:
A specialist in the diagnosis of cancer, whose fascination for the obscure does not usually blind her to the obvious, she has begun to misdescribe and ignore symptoms that the most junior premedical student would recognise as the unmistakable symptoms of the late stages of a currently incurable form of cancer.
Androvna deflects the questions of her concerned colleagues away from her condition, though she does put her affairs in order (eg, by making a will). The mismatch between her behaviour and her consciously held beliefs suggests that, at some level, she recognises her illness but is finding ways to keep her conflicting acknowledgements apart.
Again, the skeptic argues that psychological partitioning is incompatible with genuine instances of self-deception because this approach likewise undermines the identity of deceiver and deceived. From this perspective, if the self is divisible then, to be sure, one part might deceive another, but is this self-deception? If we challenge the unity of the self, must we also challenge the idea of self-deception?
According to Buddhism, the answer is no.
Early Buddhists did not explicitly discuss the problem of self-deception, at least not as it’s understood in Western philosophy. What they did do, however, was provide detailed accounts of three theories that, collectively, provide a response to both the philosophical and practical aspects of the problem. These are (1) the theory of no-self (anātman); (2) the theory of wilful ignorance (avidyā); and (3) the theory of two truths (satyadvaya).
These teachings are variously interpreted within Buddhism, but all schools agree that they can provide transformative insights into our own nature, banishing our tendency for self-deception. While we’re inclined to treat self-deception as the exception rather than the rule, Buddhists see it as our default position. They claim that most people repress and deny uncomfortable truths, deceiving themselves on an almost unimaginable scale about all manner of things. From the Buddhist point of view, the skeptic’s only success lies in the extent of their self-deceit: by defining the self in ways that make it impervious to change, they also strip it of potential.
Specifically, Buddhists claim that we routinely convince ourselves that what is perishable and impermanent can be a lasting source of satisfaction. This illusion only reinforces our existential situation, which is one of profound suffering. From this perspective, even when false beliefs offer temporary relief from painful truths, self-deception merely prolongs the inevitable. Since none of us can stave off our demise forever, each of us is eventually forced to confront the reality of our own transience.
The remedy to all this is a fearless acceptance of our own impermanence and insubstantiality. By abandoning our self-image as fixed centres of agency, Buddhists argue that we eliminate the stultifying effects of greed and hatred borne from egoism. This process eventually leads to liberation through self-awareness, consisting of awareness of the fundamental lack of any self at all.
No-self (anātman) is Buddhism’s most famous, but also most frequently misunderstood, theory. Buddhists supply several arguments against the existence of an eternal, changeless, transcendental and metaphysical self. To understand these arguments, we must contextualise them against the backdrop of the Vedic view of the self, dominant in classical India. In the Vedic worldview, the innermost kernel of a person, the ātman, corresponds to the fundamental source and ground of reality, the Brahman, which is essentially unchanging.
Buddhists reject this on two fronts. First, if the self existed in this way, it could not engage in worldly experience but would instead stand inertly outside of time and space. It would thus bear no relation to the human person who lives and changes through time. Moreover, since experience confirms that everything is causally conditioned, hence subject to change and degradation, an immutable self could never be empirically observed. Second, Buddhists argue that obsession with a fixed self is morally problematic, and that this belief perpetuates selfishness. Belief in the self is therefore seen as both the symptom and the cause of deluded attachment.
Ironically, then, Buddhists are inclined to see belief in a single substantial self as the severest, most dangerous instance of self-deception. This immediately raises the question: if there is no-self, who can be the subject of self-deceit? To answer, we must invoke the theory of two truths (satyadvaya), which stipulates a distinction between ultimate and conventional truth.
In his study of self-deception in different traditions, the philosopher Eliot Deutsch of the University of Hawaii demonstrates one of the ways in which the no-self theory is often misconstrued. He argues that Buddhists can ‘have little to say’ about self-deception because they do not accept the ultimate reality of the metaphysical self. If this disqualifies Buddhists from debates on self-deception, it must also disqualify many Western philosophers who do not approach the paradox of self-deception with a unitary self in mind (including advocates of temporal and psychological partitioning).
On the contrary, although Buddhists reject the ultimate existence of the self, they accept the conventional (we might say, practical or functional) reality of conceptually constructed persons. And crucially, as we have seen, it is the conventional person – not an ultimate self – who expresses the full range of human emotions and deploys the tactics of self-manipulation, including self-deception.
Buddhists conceive of conventional reality in terms of conceptualisation. Hence, the concept of a person reflects nothing more than the imposition of this idea on to ephemeral elements of which we are composed, called the skandhas. The skandhas include: the physical body, sensory experiences, cognitive awareness of perceptions, intentional acts of will and consciousness. None of these remain stable over time but there is a causal connection between the past, present and future skandhas. This is sufficient for personal identity even though there is no such thing as a numerically identical self.
In other words, the human person is a process, not a thing. Our language typically fails to communicate this fact, and my repeated use of the word ‘I’ sustains the illusion of the self as an underlying, constant feature of reality.
To explain how self-deception occurs, Buddhists can distinguish ultimate from conventional truth. Like the temporal and psychological partitioning approaches, epistemic partitioning challenges the identity of deceiver and deceived. At the level of ultimate truth, there simply is no-self who could be self-deceived. At the level of conventional truth, we encounter the person (who is an illusion, better thought of as a sequence of person-stages). Unlike temporal and psychological partitioning, however, epistemic partitioning goes a step further. It not only explains the mechanism of self-deception but also contains the seeds of its elimination.
The Buddha’s teachings are renowned for their therapeutic orientation, and self-deception seems the antithesis of an authentic life of human flourishing. Buddhism stresses the link between discerning truth (with ‘right view’ as the first step on the noble eightfold path) and moral fulfilment. The distinction between ultimate and conventional truth not only explains the origins of our illusions but helps us to overcome, or see through, their deceptive character.
The final goal of this process is the complete alleviation of suffering, including the suffering borne out of self-deception. Epistemic partitioning of ultimate and conventional knowledge results in two modes of knowing, which we might call the cognitive/intellectual and the affective/practical. We might know ultimately that everything is impermanent and insubstantial yet remain attached to merely conventional things. Ourselves, for instance.
Once we internalise this truth, however, Buddhists suppose that delusional compulsions for transient things will be gradually undermined. Just as there is no ultimately real self, neither are there any ultimately real tables, chairs and so forth. The identity we assign to composite things made up of parts is just the product of mental construction, reflecting our ingrained tendency to impose structure, stability and substance.
Though we know that things change and degrade, we act as though they are permanent. Such a discrepancy between conscious cognitive belief and innate affective attitude signals self-deceit. Put simply, we display wilful ignorance (avidyā) of perishability because we cannot bear to lose the things we hold most dear.
Why, then, do Buddhists treat conventional truths as truths at all? Indeed, if they are nothing but convenient fictions, isn’t this a distortion of truth’s meaning? Again, Buddhists see the therapeutic dimension of their philosophy as justifying this manoeuvre: belief in the self would be both inaccurate and unhelpful, whereas belief in the person is key to accomplishing the goals of Buddhism. Mindfulness forces us first to confront the wide chasm between our self-image and the ultimate truth of our nature; and second, helps us to bridge that chasm by becoming increasingly aware of the workings of the mind and its deceptive strategies so that we no longer repress and deny our true feelings.
It might strike the modern reader as patently wrongheaded to suggest that any religious tradition contains the seeds of a solution more satisfying than secular proposals. For, understandably, many see religious belief as coterminous with wishful thinking and incompatible with reason. However, the Buddhist response sketched here depends exclusively on arguments about human nature that are equally open to dispute and defence. There is no recourse to mystical or non-empirical claims. And because the problem of self-deception is more personal than many of philosophy’s other problems, viable solutions must work both in theory and in practice. Though the many forms of self-deception make the effectiveness of a universally applicable remedy unlikely, Buddhists would concur with Macbeth’s doctor that ‘Therein the patient must minister to himself.’

The Mission to Save Vanishing Internet Art






An image from Petra Cortright’s video “VVEBCAM.” Credit: Petra Cortright/Foxy Production
By Frank Rose
In the 1990s, art found a new medium. Anarchic and unconstrained, the World Wide Web attracted an oddball collection of people ready to do almost anything and call it art. Often their work looked weird and amateurish, with pixelated graphics, tinny chiptune music and garish colors. But what it lacked aesthetically it made up for in conviction.
In Australia, four women who styled themselves VNS Matrix posted a “Cyberfeminist Manifesto for the 21st Century,” followed by a vagina-framed poster in which they joyously proclaimed themselves “saboteurs of big daddy mainframe.” In Moscow, a young woman named Olia Lialina created “My Boyfriend Came Back From the War,” a forking narrative, poignant and oblique, that combined text with grainy black-and-white imagery. An anonymous woman in Amsterdam, eventually identified as Martine Neddam, built a brightly colored site that purported to be the home page of a 13-year-old named Mouchette, after the girl in the 1967 Robert Bresson film who finds a life of torment and abuse too much to bear.
In the early days of the web, art was frequently a cause and the internet was an alternate universe in which to pursue it. Two decades later, preserving this work has become a mission. As web browsers and computer operating systems stopped supporting the software tools they were built with, many works have fallen victim to digital obsolescence. Later ones have been victims of arbitrary decisions by proprietary internet platforms — as when YouTube deleted Petra Cortright’s video “VVEBCAM” on the grounds that it violated the site’s community guidelines. Even the drip paintings Jackson Pollock made with house paint have fared better than art made by manipulating electrons.
Now the digital art organization Rhizome is setting out to bring some stability to this evanescent medium. At a symposium to be held Thursday, Oct. 27, at the New Museum, its longtime partner and backer, Rhizome plans to start an ambitious archiving project. Called Net Art Anthology, it is to provide a permanent home online for 100 important artworks, many of which have long since disappeared from view. With a $201,000 grant from the Chicago-based Carl & Marilynn Thoma Art Foundation, Rhizome will release a newly refurbished work once a week for the next two years, starting with the 1991 “Cyberfeminist Manifesto.” By 2018, Rhizome will be presenting works by artists such as Cory Arcangel and Ms. Cortright.
In addition to salvaging the past, the aim is to tell the story of Internet-based art in an online gallery that serves much the same narrative function as the galleries in the Museum of Modern Art. “There’s a sense of amnesia about the history these things have,” Michael Connor, Rhizome’s artistic director, said as he sat in the New Museum’s ground-floor cafe. “This is an opportunity to really be rigorous.”
Olia Lialina’s “My Boyfriend Came Back From the War.” Credit: Franz Wamhof
Broadly speaking, the story Rhizome is telling can be divided into two parts, with the dot-com collapse of 2000-1 as the inflection point. The post-bubble side looks relatively familiar, facilitated as it is by high-speed, always-on connections and characterized by rapid commercialization and the emergence of social media and streaming video platforms like Facebook (2004), YouTube (2005) and Tumblr (2007).
But before the bubble in internet stocks burst, before Google took over and Netscape collapsed and Apple was resurrected from near-death, the web was “an entirely new world,” as Mr. Arcangel, 38, put it in a telephone interview from his home in Norway. “Sometimes it was not even clear what you were looking at — was it an artwork or a web server that was broken?” Even so, he knew something exciting was happening.
Mark Tribe, Rhizome’s founder, who is now the department chair of the M.F.A. fine arts program at the School of Visual Arts in New York, described that era as feeling “very new and different from everything else that was going on.” It also felt anti-commercial — although as Mr. Tribe pointed out, “it’s easy to be anti-commercial when the market doesn’t care what you’re doing.”
Rhizome has been part of the Net Art story from the start. Mr. Tribe, the son of the Harvard law professor Laurence Tribe, was a 29-year-old artist living in Berlin when he established it in 1996. He’d delved into the nascent movement at Ars Electronica, the annual electronic art festival in Austria, and considered it “an online community waiting to happen” — which is why his organization started out as a mailing list.
Its name was inspired by the French post-structuralist philosophers Gilles Deleuze and Pierre-Félix Guattari, whose book “A Thousand Plateaus” Mr. Tribe had taken to Berlin. He encountered the word “rhizome” while poring over the index. A biological term referring to the laterally spreading, underground stem systems of plants like tulips and bamboo, it was applied here to the propagation of ideas. “It was a metaphor for a horizontally distributed, non-hierarchical network,” Mr. Tribe explained — in other words, for the internet.
Miltos Manetas, “Jesus Swimming” (2001). Credit: Miltos Manetas
Net Art’s political posture was characteristic of the feverish, techno-utopian excitement shared by netheads in general. “There was this radical idea that the internet was going to change the way art is made and shared,” said Lauren Cornell, who was Rhizome’s executive director from 2005 to 2012 and who has since moved to the New Museum as a curator and associate director of technology initiatives. “That it might even do away with traditional institutions and gatekeepers” — that is, museums and curators.
Instead, it was Net Art that started to disappear. Rhizome began trying to preserve it in 1999 with the creation of ArtBase, an online archive that has since grown to more than 2,000 works. The organization became an affiliate of the New Museum in 2003, saving the group from almost-certain oblivion. But even then it was apparent that to keep Net Art from vanishing into the ether, something drastic would have to be done.
Preserving this work is not just a matter of uploading old computer files. “The files don’t mean anything without the browser,” Mr. Connor, 38, said. “And the browser doesn’t mean anything without the computer” it runs on. Yet browsers from 15 or 20 years ago won’t work on today’s computers, and computers from that era are hard to come by and even harder to keep working.
Dragan Espenschied, Rhizome’s preservation director, has been working with the University of Freiburg in Germany to develop a sophisticated software framework that emulates outdated computing environments on current machines.
Another iteration of this approach is oldweb.today, which Rhizome began in December 2015 as a free service. Oldweb lets you time-travel online, viewing archived web pages from sources such as the Library of Congress in a window that mimics an early browser. A second Rhizome initiative is Webrecorder, a free program that lets users build their own archives of currently available web pages. That can help preserve online works being created today.
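For readers curious about the mechanics: Webrecorder stores its captures in the standard WARC format, and the project’s open-source Python library, warcio, can read such files. Here is a minimal sketch (the file name is a hypothetical example) that lists the pages captured in an archive:

    # Listing the captured pages in a WARC archive with warcio.
    # Requires: pip install warcio. The file name below is hypothetical.
    from warcio.archiveiterator import ArchiveIterator

    with open('my-archive.warc.gz', 'rb') as stream:
        for record in ArchiveIterator(stream):
            # 'response' records hold the content the server returned
            if record.rec_type == 'response':
                uri = record.rec_headers.get_header('WARC-Target-URI')
                ctype = record.http_headers.get_header('Content-Type')
                print(uri, ctype)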
“Kill That Cat” from Mouchette.org. Credit: Mouchette
Too bad it wasn’t around in 2011, when YouTube deleted Ms. Cortright’s “VVEBCAM.” The video itself is innocuous enough: not quite two minutes of Ms. Cortright gazing impassively downward while cartoon figures — cats, dogs, parrots, pizza slices, what have you — drift across the screen. Less innocuous were the comments left by people who had been drawn to the video by the keywords she’d attached to it as bait — “names of celebrities, sex stuff, Pokemon, Nascar, sports, politics,” she said in a recent interview, a laundry list of topics that were completely irrelevant to what anyone actually saw.
“VVEBCAM” was provocative, and it got a strong response. “People were really nasty,” she said, “and my policy was always to respond in a way that was equal to or greater than the comments they made.” Rhizome intends to embed the video in a reconstructed YouTube player, but there’s no way to recreate the reaction the video provoked. “It’s not like you were taking screen shots,” she said. “When it’s gone, it’s gone.”
Which could be said for Net Art itself. “Net Art is not over,” Mr. Tribe said, “but it is over as an avant-garde art movement.” In its place is art posted to the internet not by art world renegades but by professionals for whom the internet is one medium among many — people like Ms. Cortright or the video artist Ryan Trecartin, whose deliriously disjointed videos are equally at home on YouTube and at the Saatchi Gallery in London.
The term that’s being used is Post-Internet Art — not “post” in the sense that the internet is over, but that it’s ubiquitous. In the post-internet era, the internet is simply assumed.
“It’s different, now that everybody’s online,” said Ms. Cortright, 30. “Even 10 years ago, it was not as much a part of people’s lives as it is today.” She considers it “admirable” that Rhizome has committed itself to preserving artifacts from a past that’s so recent and yet so distant: “Otherwise, they really would be lost.”
And yet, she added, “you can’t be too attached to something that’s completely fleeting. I don’t know. It just happens.”