The following is an expanded version of a lecture delivered in the early 2000s.
When I was a kid of seven or eight, there was a TV show called Ripley’s Believe It or Not! I remember being particularly impressed by one episode about a guy who claimed to have eaten his car. It took him more than four years, but by chopping the car into tiny pieces and swallowing a little bit every day, this guy managed to eat the entire thing: steering wheel, chrome, tires, and all. He didn’t even know he was making art.
Here’s the situation as it stands today. Contemporary art is divided into two main camps. On the one side, we have the centuries-long continuity of work that is primarily pictorial in nature, and on the other, the growing body of work that is more presentational in attitude—that is, art that privileges intentionality and the delivery system, the context in which art appears. One type of art says, “Look at this,” while the other says, “Look at me looking at this,” or in a further evolution, “Look at me looking at you looking at this.” This is not a distinction between painting and non-painting. There are plenty of non-painters making work with an outer-directed energy combined with pictorial intelligence.
Though intertwined in practice, the pictorial and the presentational represent two different worldviews, one identified with art as form, as something made, or something its maker arrives at, while the other regards art primarily as a set of cultural signs, or a strategy that produces an artifact, something meant to be read. This may sound like the old Duchampian distinction between the retinal and the cerebral, but the balance has tipped in a way that Duchamp could hardly have imagined 70 years ago. In the last few decades, the emphasis on theory, and on relational aesthetics generally, has seriously eroded, if not invalidated, one of the core beliefs about how art functions. In times past, art was thought to possess a quality—something that stimulated the senses—called presence, or aura. Baldly put, a work of art was said to emanate this aura as a result of the transference of energy from the artist to the art, an aesthetic variant of the first law of thermodynamics. Few people today would defend that idea. The question remains, what do we have to replace it with?
I recently visited the Zurich home of my friend, Bruno Bischofberger, the great collector and dealer who represents appropriation artist Mike Bidlo. In Bruno’s living room, by a window with a view onto Lake Zurich, was a Bidlo bicycle wheel sculpture, after Duchamp. You know—the wheel mounted upside-down on a simple wooden stool. Although an exact replica of the Duchamp original, which itself is an assemblage of commercially available objects, the Bidlo bicycle wheel lacked presence; it was, in fact, rather inert. Strange—how can that be? It’s an exact replica of a non-artisanal object. As we stood looking at Bidlo’s sculpture, Bruno’s wife, Yoyo, made the astute observation that “an artist’s work either has presence or it doesn’t, and although anything can have it, nothing has it necessarily.” It might sound like magical thinking, but the original—in this case a funny word to use—bicycle wheel is gratifying to look at. It has an aura. The replica, not so much. Is context alone, and the expectations that come with it, enough to explain the difference?
The disparity between the older view of art’s perceptual gestalt and that of Duchamp’s many descendants involves more than a distinction between expressionist and detached art, or warm art and cool. Cool art can be highly pictorial; perhaps most art that we remember is simultaneously pictorial and presentational. Like many things in life, it’s a matter of emphasis, a question of sensibility. Art is often bound up with certain abstract ideas: about space, materiality, cultural history, identity, narrative time, styles of representation, and the very nature of the image, to name only a handful. In the art that we tend to remember, those ideas are embodied by form. Advanced pictorial art also contains an element of the presentational; the presentational is baked in, in a way. Sophisticated paintings are self-aware, they present themselves. One thing art does is to strike a balance between those two aspects of the self, one quality acting as a brake on the other.
W. W. NORTON & COMPANY
However, with the canonization of Duchamp following his death in 1968, presentational art began proliferating, and, like a drug, it soon swamped the brain’s receptors for other kinds of sensations. As the audience for contemporary art grew, work that engaged the delivery system itself began to eclipse the thing being delivered. There are a number of reasons that this approach to art making flourished; principal among them is simple demographics: the great increase in the number of young people enrolled in art schools and the adjacent curatorial programs. Another reason, also demographically determined, is the rise of the tourist model of international art fairs and biennials—what Peter Schjeldahl has dubbed “festivalism.” The context for art does, to some extent, shape what will be created.
We have also seen a proliferation of art whose function is to deliver content of a specific, legible sort, which, if you’re of a certain age, calls to mind the joke about the painter’s reply when asked what his work meant: “When I want to send a message, I call Western Union.” In social realist painting of the 1930s, a work was judged by what it had to say about class conflict. The visuals have changed but the criteria for judgment of message-laden art are still with us. Message first, art as a thing, second.
I’ll confess straightaway that the proliferation of presentational art makes my heart sink. The inconvenient truth is this: It’s easier to present art than to make it. It’s easier to select than it is to invent. It gets confusing, because some of the great pictorial inventors of the twentieth century, like Andy Warhol, obviously, appeared to be doing nothing more than choosing—but that was an illusion, something borrowed from the beauty industry, where the amount of time spent in the makeup chair is meant to result in an effortlessly natural look. To make something that really holds our attention, especially over repeated viewings, requires levels of integration—intellectual, visual, cultural—expressed with a unique physicality. Art that eschews this integration is unlikely to be durably compelling for the simple reason that less is at stake. One component without the others is like an unstable chemical compound; it will degrade, or, to continue the chemistry analogy, it will fail to catalyze. Over time, the result will come to have the flavor of commentary.
Sometimes I think we don’t know what kind of artists we want. Shaman or perceptual psychologist? Poète maudit or activist? Inventor or theoretician? Sacred monster or good citizen? Of course, we want all of the above and more, but at certain times one image of the artist holds more allure than others. There have always been the kinds of artists who present themselves as avatars of our perceived cultural moment, as if that’s the job description. And no doubt for some it is.
From time to time, the slate is wiped clean. New audiences arrive. Life goes on, artists adapt to new technologies, as always. Art in the largely presentational mode has now evolved further; it has embraced its seeming opposite and merged with the iconic spectacle, which, far from denying art’s aura, has transferred the idea of auratic vibration to how something can be staged for the camera. In the past, iconic status was conferred by history. Today, an artist may see no reason to wait, and will try to preempt the ratification process—by going straight to spectacle. Spectacle is a form of illustration—it illustrates an idea, usually a big one. Shock and awe, indeed. There has emerged a newish form, a kind of art whose pictorial values are meant to be understood, perhaps can only be understood, within the framing device of a magazine page, or a screen. I don’t mean here the staged photographs of Cindy Sherman, an artist more or less universally admired. I refer to a more controlled use of pictures within the systems, both social and editorial, that deliver them. We may have invented a new hybrid form of journalism and art combined, one derived from the 70-year tradition of the picture press.
The implications are kind of interesting. We find among art students now a reluctance to make any meaningful distinction between art and ads. I am, perhaps, generalizing, but it’s a noticeable shift. Today’s art students can’t easily recognize the difference, and also don’t see any particular need to do so. As I’ve suggested, maybe this is simply a different kind of aura, one that the audience is already sensitized to. Why not? A picture is a picture. An example of what I mean might be Maurizio Cattelan’s re-creation of the Hollywood sign in the hills above Palermo; the photograph in Artforum makes us smile; we appreciate its complex layers of cheekiness. But how many of us really feel compelled to go to Sicily to see it?
Frank Stella, never one to shy away from a fight, mince words, or go with the flow, has this to say:
Owing to its reading of Duchamp, the literalist art of the last twenty-five years has defined itself by the act of presentation. Artists have tried to make a mountain out of a molehill, and celebrate their ability to select objects and activities from daily life and to present them in a different context, the context of the art museum or gallery. Where literalist art challenges painting by asserting that the art of presentation is the equal of the art of creation, we have to recognize its lack of seriousness.
The New York art market flourished for a time in the 1980s, and this attracted the attention of the mainstream media. The art world hadn’t been considered interesting to talk about for a while—all that conceptual art making people feel stupid—and now there was something to dress up for. The gossip was amusing, some of the personalities were colorful. Most talk about the art of the ’80s and ’90s is really talk about the art world as a social system, and while this may be mildly interesting, it’s not the same, nor as interesting, as the art itself. The art market was robust, briefly, after a period of quietude that had gone on so long it was considered the norm, and when it changed, some people, instead of taking the long view, had an attitude about it. They stopped looking at the work. I remember sometime in the early ’90s receiving a query from something called the Nordic Art Review that posed the stark question, “The 1980s: what was it good for?”
At least as far back as the Renaissance, the arts have been populated by eccentrics with strange and sometimes alarming personal habits—the Mannerist painter Il Rosso, for instance, reportedly lived with an ape as his domestic companion. Closer to our own time, Calvin Tomkins, in his biography of Duchamp, describes a peripheral artist of Duchamp’s circle of the late ’20s in New York, a proto-performance artist who used to walk down Fifth Avenue with live birds pinned to her skirts, as being “unhampered by sanity.” I don’t think we’d want it any other way. One way a work of art takes on meaning is when its formal, pictorial patterns resonate with systems of attention in the larger world. Another way is when the larger-than-life personality of the artist accomplishes a similar cultural rhyme. When something is judged to be passé, what’s really meant is that the image of the artist encoded in certain patterns of behavior is the wrong one for the moment. Hemline too long, or too short.
Fashions do change. A disheartening aspect of the art world of the 1980s was its willingness to indulge in ad hominem attacks disguised as a defense of certain values. Much of the criticism of ’80s art was nakedly elitist. People didn’t like a painting because they didn’t like the people who bought it. Critic Robert Hughes’s venomous attack on ’80s art included contempt for its collectors, and the smug tagline “newly minted art for newly minted money” was thrown around, as if the Farnese or Borghese were fundamentally different in their day. Today we can see Hughes’s rhetoric for the distasteful snobbery that it was.
But let’s return for a moment to the problem of representation. It’s been the case for quite a while—at least since Picasso—that how well a work reproduces in the media plays a significant role in its popularity; the work of the most acclaimed artists from the ’60s, for instance, looks fabulous in reproduction. This isn’t to suggest that those works didn’t also have tremendous physical presence, but the fact remains, most people are familiar with a work of art through reproduction; those who have the good fortune to experience a painting firsthand are fewer in number, and those who have the luxury of actually living with it are very few indeed. But that’s different from the situation I’m describing: art that tangibly occupies three-dimensional space, yet seems to exist in more compelling form when seen in a magazine than it does in real life.
What is the difference exactly? Art conceived as spectacle comes from a different impulse, essentially that of an art director, and is the legacy of conceptual art fused with irony. Art direction is the science of directing attention, often to a con, a matter of making you think you’re smarter or more attractive than you are. Increasingly, the art world is in thrall to the triumph of art direction, one that places art in the service of the ironic presentation of forms, our distance from which is the art’s message. As noted earlier, kids in art schools today don’t care about the distinction between art and ads. And why, you might ask, should they, as long as no one else does. I’m not referring to art made out of the raw material of advertising imagery—works like Richard Prince’s Marlboro Man—but rather to the increasingly porous boundary between public and private, between inner and outer directed impulses. This is in fact the generating impulse behind some of the more radical of the recent art. And in a way, I feel a little twinge of responsibility.
At CalArts in the halcyon early ’70s, when the school was still flush with Disney money, students could apply for grants to carry out special projects. I once sat on the panel to pick the winners. One guy asked for $3,000—a lot of money at the time—so that he could hoist a television and generator up to a remote mountaintop, where he intended to watch reruns of The Beverly Hillbillies and then, mid-episode, blast the TV screen with a 12-gauge shotgun. We offered him $300 with the suggestion that he check into the worst fleabag hotel in downtown Los Angeles and shoot out the television in his room with a BB gun. Sometimes, less is more. Since that innocent time, things have developed dramatically, and museums now routinely fly artists around the world to create works that are subsequently photographed and disseminated through the art publications and social media to produce the kind of photographic aura we’re talking about. Nice work if you can get it, as the saying goes, and what we’re left with is an image in a magazine.
Being an air-traffic controller is not easy. At the heart of the job is a cognitive ability called ‘situational awareness’ that involves ‘the continuous extraction of environmental information [and the] integration of this information with prior knowledge to form a coherent mental picture’. Vast amounts of fluid information must be held in the mind and, under extreme pressure, life-or-death decisions are made across rotating 24-hour work schedules. So stressful and mentally demanding is the job that, in most countries, air-traffic controllers are eligible for early retirement. In the United States, they must retire at 56 without exception.
In the 1960s, an interesting series of experiments was done on air-traffic controllers’ mental capacities. Researchers wanted to explore if they had a general enhanced ability to ‘keep track of a number of things at once’ and whether that skill could be applied to other situations. After observing them at their work, researchers gave the air-traffic controllers a set of generic memory-based tasks with shapes and colours. The extraordinary thing was that, when tested on these skills outside their own area of expertise, the air-traffic controllers did no better than anyone else. Their remarkably sophisticated cognitive abilities did not translate beyond their professional area.
Since the early 1980s, however, schools have become ever more captivated by the idea that students must learn a set of generalised thinking skills to flourish in the contemporary world – and especially in the contemporary job market. Variously called ‘21st-century learning skills’ or ‘critical thinking’, the aim is to equip students with a set of general problem-solving approaches that can be applied to any given domain; these are lauded by business leaders as an essential set of dispositions for the 21st century. Naturally, we want children and graduates to have a set of all-purpose cognitive tools with which to navigate their way through the world. It’s a shame, then, that we’ve failed to apply any critical thinking to the question of whether any such thing can be taught.
As the 1960s studies on air-traffic controllers suggested, to be good in a specific domain you need to know a lot about it: it’s not easy to translate those skills to other areas. This is even more so with the kinds of complex and specialised knowledge that accompany much professional expertise: as later studies found, the more complex the domain, the more important domain-specific knowledge becomes. This non-translatability of cognitive skill is well-established in psychological research and has been replicated many times. Other studies, for example, have shown that the ability to remember long strings of digits doesn’t transfer to the ability to remember long strings of letters. Surely we’re not surprised to hear this, for we all know people who are ‘clever’ in their professional lives yet who often seem to make stupid decisions in their personal lives.
In almost every arena, the higher the skill level, the more specific the expertise is likely to become. In a football team, for example, there are different ‘domains’ or positions: goalkeeper, defender, attacker. Within those, there are further categories: centre-back, full-back, attacking midfielder, holding midfielder, attacking player. Now, it might be fine for a bunch of amateurs, playing a friendly game, to move positions. But, at a professional level, if you put a left-back in a striker’s position or a central midfielder in goal, the players would be lost. For them to make excellent, split-second decisions, and to enact robust and effective strategies, they need thousands of specific mental models – and thousands of hours of practice to create those models – all of which are specific and exclusive to a position.
Of course, critical thinking is an essential part of a student’s mental equipment. However, it cannot be detached from context. Teaching students generic ‘thinking skills’ separate from the rest of their curriculum is meaningless and ineffective. As the American educationalist Daniel Willingham puts it:
[I]f you remind a student to ‘look at an issue from multiple perspectives’ often enough, he will learn that he ought to do so, but if he doesn’t know much about an issue, he can’t think about it from multiple perspectives … critical thinking (as well as scientific thinking and other domain-based thinking) is not a skill. There is not a set of critical thinking skills that can be acquired and deployed regardless of context.
This detachment of cognitive ideals from contextual knowledge is not confined to the learning of critical thinking. Some schools pride themselves on placing ‘21st-century learning skills’ at the heart of their mission. It’s even been suggested that some of these nebulous skills are now as important as literacy and should be afforded the same status. An example of this is brain-training games that claim to help kids become smarter, more alert and able to learn faster. However, recent research has shown that brain-training games are really only good for one thing – getting good at brain-training games. The claim that they offer students a general set of problem-solving skills was recently debunked by a study that reviewed more than 130 papers, which concluded:
[W]e know of no evidence for broad-based improvement in cognition, academic achievement, professional performance, and/or social competencies that derives from decontextualised practice of cognitive skills devoid of domain-specific content.
The same goes for teaching ‘dispositions’ such as the ‘growth mindset’ (focusing on will and effort as opposed to inherent talent) or ‘grit’ (determination in the face of obstacles). It’s not clear that these dispositions can be taught, and there’s no evidence that teaching them outside a specific subject matter has any effect.
Instead of teaching generic critical-thinking skills, we ought to focus on subject-specific critical-thinking skills that seek to broaden a student’s individual subject knowledge and unlock the unique, intricate mysteries of each subject. For example, if a student of literature knows that Mary Shelley’s mother died shortly after Mary was born and that Shelley herself lost a number of children in infancy, that student’s appreciation of Victor Frankenstein’s obsession with creating life from death, and the language used to describe it, is far richer than it would be without this knowledge. A physics student investigating why two planes behave differently in flight might know how to ‘think critically’ through the scientific method but, without solid knowledge of contingent factors such as outside air temperature and a bank of previous case studies to draw upon, the student will struggle to know which hypothesis to focus on and which variables to discount.
As Willingham writes: ‘Thought processes are intertwined with what is being thought about.’ Students need to be given real and significant things from the world to think with and about, if teachers want to influence how they do that thinking.
Have you ever wondered how many contradictory thoughts you have in a day? How many times your thoughts contradict your actions? How often your feelings oppose your principles and beliefs? Most of the time, we don’t see our own contradictions – it’s often easier to observe such inconsistencies in others. But you are as full of contradictions as I am. We humans are structurally made of contradictions, living peacefully, sometimes painfully, with our oxymoronic selves. Walt Whitman got it right when he wrote in ‘Song of Myself’ (1855):
Do I contradict myself?
Very well, then I contradict myself,
(I am large, I contain multitudes.)
Think about buying technological gadgets while opposing child labour and ecological waste, or about condemning theft, then illegally downloading music and movies. Think of those who hold forth about respecting private life, and a moment later post personal photos to Facebook. There are environmentalists who constantly fly, finance traders who care about poverty, and sermonising priests who have lost their faith. Sebastián Marroquín remembers his father singing lullabies while he drifted off to sleep – and his father was the drugs lord Pablo Escobar, the greatest killer in Colombian history. Living a contradictory life is profoundly, perhaps definitively, human.
The American feminist historian Joan Wallach Scott argues that what characterises a critical thinker is the ability ‘to point a finger at contradictions’, but critical thinkers don’t escape contradiction either. In his book Le génie du mensonge (2016) – ‘the genius of lies’ – the French philosopher François Noudelmann portrays Michel Foucault invoking the ‘courage of truth’ while hiding his fatal illness, and Jean-Paul Sartre, the intellectuel engagé, playing a very ambiguous role during the Vichy era.
Today, some globalised academics make a lucrative business out of the critique of capitalism. Perhaps contradictions are a necessary ingredient for triggering intellectual creativity. While most humans struggle to maintain a sense of psychological unity, contradictions produce destabilising breaches in the self. Whether conscious or unconscious, these fissures nourish creative inspiration, which can be interpreted as a way to resolve or sublimate internal oppositions. I believe this can be said of all domains of creation. Perhaps art, literature, science or philosophy wouldn’t be possible without intrapersonal contradictions and the desire to resolve them.
Is there anyone who lives according to the Stoic principle of Plutarch, in ‘perfect agreement between the maxims of men and their conduct’? No, but this isn’t always a cause for crisis. We compartmentalise knowledge, practices and emotions. In certain domains of life, some behaviours and thoughts are acceptable but not in others. For instance, lying might be seen as a heroic act when done to protect victims from a brutal regime, but in a friendly relationship it is unbearable. In labs, scientists can produce evidence-based research in the context of their professional lives, then go home and attend religious prayers addressing the existence of invisible entities.
Humans live peacefully with contradictions precisely because of their capacity to compartmentalise. And when contradictory statements, actions or emotions jump out of their contextual box, we are very good, perhaps too good, at finding justifications to soothe cognitive dissonance. An environmentalist friend of mine, to whom I pointed out that smoking is not an ecological act, used to reply: ‘I know, David, but I smoke rolled cigarettes!’ as if rolled cigarettes are less toxic than industrial ones and don’t depend on the destructive industry of tobacco exploitation – which he, of course, condemns.
Contradictions are omnipresent in our intrapersonal life and they are particularly visible when strong beliefs come into play, such as faith, morality, militancy, and so on. In Guinea and Laos, where I’ve conducted my ethnographical research, most people are convinced of the existence of spiritual entities that can transform themselves into a multiplicity of incompatible forms, able to ontologically turn themselves into animals, plants or objects, or to even be invisible, all without the slightest concern for contradiction. Our own popular culture features zombies, alive and dead at the same time, and robots with all-too-human emotions. Our minds are, indeed, full of these entities imbued with contradictory qualities that defy the ‘principle of non-contradiction’. While one thinks that it is impossible to be A and non-A, in fact humans adore entities with incompatible properties. As psychologists of cognition have shown, such contradictions are particularly attractive to the human mind. They challenge core ontological expectations that we have about animals, artefacts or persons. As a consequence, they hold important cognitive salience and memorability.
Things get even more complicated when one moves beyond the confines of the self. Human communication consists of subtle manoeuvres between contradictions, for instance between what is said and what is expressed through gestures and tones. As an individual, one persistently strives to interpret the contradictory messages of one’s interlocutors and to decode the inconsistent behaviours that one observes in social life. (The English anthropologist Gregory Bateson and his colleagues at the Palo Alto group in California have lucidly written about these phenomena.)
There are social situations where one is stuck in paradoxical injunctions, for instance when a teacher commands her students to ‘be spontaneous!’ The worst scenarios involve a ‘double bind’ in which infants are wedged between the contradictory emotional demands of their parents. But there are also many non-pathological settings described by anthropologists, such as rituals, where contradictions are performed and valued as modes of communication. Take the ancient Jewish ‘slapping’ rituals performed at a girl’s first menstruation. In the past, among Eastern European Jews, when a girl told her mother that she’d got her first period, the mother would slap her daughter’s face and, at the same time, exclaim ‘Mazel tov!’ (congratulations). Here, the contradictory nature of the messages constitutes the foundation of the ritual and the necessary ingredient for its efficacy.
Building on the poet John Keats, the psychoanalyst Adam Phillips in Promises, Promises (2000) describes three ‘negative capabilities’ indispensable to growing into a mature human: the experience of being a nuisance, of getting lost, and of being powerless. I would add one more to that list: the ability to discover and accept our contradictions, even if, at times, we struggle to renounce them.